Search results for “Popping Sound”

19,737 results found

Question about "Notification (NSE) filtering" capability request
We are developing a messaging app which sends end-to-end encrypted data. The application supports multiple types of E2EE data, including text messages and Voice over IP calls. Apple's article titled “Sending End-to-End Encrypted VoIP calls” (https://developer.apple.com/documentation/callkit/sending-end-to-end-encrypted-voip-calls) states that the following steps are required to support E2EE VoIP calls:
1. Request permission to receive remote notifications through the User Notifications framework.
2. Register for VoIP calls using PushKit.
3. Add a Notification Service Extension target to your app.
4. Add the com.apple.developer.usernotifications.filtering entitlement to the NSE target’s entitlements file.
We have completed steps one through three, but we are still missing the filtering entitlement. As of right now, the system does not allow us to use the reportNewIncomingVoIPPushPayload(_:completion:) method because of that missing entitlement.
 Below is a short description of how our messaging app works: User sends a message to anot
Replies: 2 · Boosts: 0 · Views: 502 · Activity: Nov ’25
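For reference, here is a minimal sketch of what step 4 enables once the com.apple.developer.usernotifications.filtering entitlement is granted. This is not the poster's code; the payload key "isVoIPCall" and the placeholder decryption are assumptions, and the only API taken from the post is CXProvider.reportNewIncomingVoIPPushPayload(_:completion:).

import UserNotifications
import CallKit

class NotificationService: UNNotificationServiceExtension {

    override func didReceive(
        _ request: UNNotificationRequest,
        withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void
    ) {
        let userInfo = request.content.userInfo

        // Assumption: the encrypted payload carries a flag marking call invites.
        if userInfo["isVoIPCall"] as? Bool == true {
            // Hand the call off to CallKit; this call only succeeds when the
            // NSE has the filtering entitlement.
            CXProvider.reportNewIncomingVoIPPushPayload(userInfo) { error in
                if let error {
                    NSLog("reportNewIncomingVoIPPushPayload failed: \(error)")
                }
                // With the filtering entitlement, returning empty content
                // suppresses the visible notification for the call push.
                contentHandler(UNMutableNotificationContent())
            }
        } else {
            // Ordinary message: decrypt and present as usual.
            let content = request.content.mutableCopy() as! UNMutableNotificationContent
            content.body = "New encrypted message" // placeholder for decrypted text
            contentHandler(content)
        }
    }
}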
UIModernBarButton causing a lot of console constraint warning and eventual animation glitches
Is there anything I can do about this? It's in a navigation controller, and I get a lot of these when I pop and push. Eventually it causes glitches with zoom animations, making for some really loopy zooms. I posted a movie to FB20439774. If there's anything I can do to fix it, that would be great.
Unable to simultaneously satisfy constraints. ( , , , , , ) Will attempt to recover by breaking constraint
Replies: 2 · Boosts: 0 · Views: 188 · Activity: Nov ’25
Concerning Socket Disconnection Issues in iPhone VoIP Applications
We are encountering the following issue with our VoIP application for iPhone, published on the App Store, and would appreciate your guidance on possible countermeasures. The VoIP application (callee side) utilizes a Wi-Fi network. The sequence leading to the issue is as follows:
1. VoIP App (callee): Launches
2. iPhone (callee): Locks (e.g., by short-pressing the power button)
3. VoIP App (callee): Transitions to a suspended state
4. VoIP App (caller): Initiates a VoIP call
5. VoIP App (callee): Receives a local push notification
6. VoIP App (callee): Creates a UDP socket for call control (for SIP send/receive)
7. VoIP App (callee): Creates a UDP socket for the audio stream (for RTP send/receive)
8. VoIP App (callee): Exchanges SIP messages (INVITE, 100 Trying, 180 Ringing, etc.) using the call-control UDP socket
9. VoIP App (callee): Answers the incoming call
10. VoIP App (callee): Executes performAnswerCallAction()
Immediately after executing performAnswerCallAction() in the above sequence, the sendto() function for both the UDP soc
Replies: 6 · Boosts: 0 · Views: 236 · Activity: Nov ’25
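For orientation on where performAnswerCallAction() sits in the CallKit flow, here is a generic CXProviderDelegate outline, an assumption rather than the poster's implementation. The usual guidance is to fulfil the answer action promptly and to start media only after the system activates the audio session.

import CallKit
import AVFAudio

final class CallController: NSObject, CXProviderDelegate {

    func providerDidReset(_ provider: CXProvider) {
        // Tear down any calls and media state here.
    }

    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Send the SIP answer over the call-control socket, configure
        // (but do not activate) the AVAudioSession, then fulfil the action.
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Start RTP/audio I/O only once the session is active; starting it
        // earlier in the answer path is a common source of trouble.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Stop audio I/O here.
    }
}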
Ventura Hack for FireWire Core Audio Support on Supported MacBook Pro and others...
Hi all, Apple dropping ongoing development for FireWire devices that were supported with the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date on the security updates that come with new OS releases, and continue to utilise their hard-earned investments in very expensive and still pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for ongoing support. I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features. Probably not the first time you gurus have had someone make the logical leap leading to a request for something like this, but I was wondering if it might somehow be possible to shoe-horn the code used in previous versions of macOS, which allowed the Mac to speak with the audio features of such devices, into running inside the Ventura version of the OS.
Replies: 65 · Boosts: 0 · Views: 34k · Activity: Nov ’25
Local Wi-Fi UDP discovery works in Debug but stops working in TestFlight (React Native app)
Hi everyone, I am building a React Native iOS app that discovers audio devices on the local Wi-Fi network using UDP broadcast + mDNS/Bonjour lookup (similar to the “4Stream” app). The app works 100% perfectly in Debug mode when installed directly from Xcode. But once I upload it to TestFlight, the local-network features stop working completely:
- UDP packets never arrive
- Device discovery does not work
- Bonjour/mDNS lookup returns nothing
Same phone, same Wi-Fi, same code → only Debug works, TestFlight fails. The app uses:
- react-native-udp for UDP broadcast
- react-native-dns-lookup for resolving hostnames
- react-native-xml2js for parsing device responses
Replies: 1 · Boosts: 0 · Views: 91 · Activity: Nov ’25
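Not a diagnosis, but this Debug-works/TestFlight-fails pattern is often down to local-network configuration rather than code. A sketch of the Info.plist keys that typically need to be present (the Bonjour service type below is a placeholder), with the caveat that receiving UDP broadcast or multicast generally also requires the com.apple.developer.networking.multicast entitlement, which has to be requested from Apple and included in the distribution provisioning profile:

<key>NSLocalNetworkUsageDescription</key>
<string>Finds audio devices on your Wi-Fi network.</string>
<key>NSBonjourServices</key>
<array>
    <string>_myspeaker._tcp</string>
</array>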
Reply to Mac App Packaging
Inno Setup was a sweet app. I used to use that when I made Windows software. I don't know anything about Filemaker. A quick search says that runtimes were deprecated and removed years ago. Apparently Filemaker 18 was the last version to support them. There is some kind of iOS App SDK that may still be supported. It doesn't sound like this would be a quick and/or easy solution, but that seems to be all there is. After Hypercard, there was never the same kind of custom database app community (Clipper, Access, Paradox, etc.) as on PCs. I don't know what you mean by inherited icons. A DMG is just a disk image. Its use in installing software is problematic. Why use a zip file or pkg installer when you can use DMGs that make it 3 times more difficult? Look at the pkgutil tool. All you need is a folder with the app you want to install. Create a directory tree of all the locations where you want to install files. Then use pkgutil to create an installer for that. Forget everything I said about the Mac App St
Nov ’25
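One note on the reply above: pkgutil is mainly for inspecting and expanding packages; the tool that builds an installer package from a payload folder is pkgbuild. A rough sketch with placeholder identifier, version, and paths:

# Payload laid out as it should land on disk, e.g. ./root/Applications/MyApp.app
pkgbuild --root ./root \
         --identifier com.example.myapp \
         --version 1.0 \
         --install-location / \
         MyApp.pkg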
Reply to Check whether app is built in debug or release mode
Earlier I wrote: But, honestly, it sounds like a fun weekend project And indeed it was (-: Pasted below is some iOS code that is able to detect how your code is signed using only public APIs. To do this, it uses a sneaky combination of XPC loopback and XPC peer requirement checking. This code comes with a bunch of caveats. Read the doc comment before you use it [1]. Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple let myEmail = eskimo + 1 + @ + apple.com [1] By my count the doc comments represent well over half the total number of lines (-: import Foundation extension CheckSelfEntitlement { /// Checks whether the current process claims the get-task-allow /// entitlement. /// /// - warning: As explained below, you shouldn’t use this routine but /// instead should use ``isGetTaskAllowTrue()``. This routine exists solely /// to illustrate the following point. /// /// This routine checks for the presence of the entitlement, rather than /// checking for it being present with a p
Nov ’25
Reply to PushToTalk session sometimes returns silence data after activation
I have a working walkie-talkie app based on the PushToTalk framework. Everything works fine except for an intermittent bug that I face from time to time on different devices with different iOS versions, from iOS 18 to iOS 26.2 Beta.
The next time this occurs, please capture a sysdiagnose, file a bug on this, then post the bug number back here. What you're describing sounds similar to a CallKit issue (r.157725305). A fix is being investigated for the CallKit issue, but if it's happening to the PTT system, then that's something we'd need to address.
Once I leave the channel and rejoin it again, the issue is fixed and I start to receive non-silent buffers of varying size, as expected.
Assuming this is similar to the CallKit issue, this works because it's updating the audio session ID that callservicesd uses to activate your audio session. Unfortunately, that also means that it's basically the only thing that WILL work.
__
Kevin Elliott
DTS Engineer, CoreOS/Hardware
Topic: App & System Services · SubTopic: General
Nov ’25
PushToTalk session sometimes returns silence data after activation
Hello! Thank you for bringing the new iPhone experience with the PushToTalk framework. I have a working walkie-talkie app based on the PushToTalk framework. Everything works fine except for an intermittent bug that I face from time to time on different devices with different iOS versions, from iOS 18 to iOS 26.2 Beta. Sometimes the app goes into a state where the AVAudioInputNode input node tap returns buffers with a constant size that contain only silence. Leaving and rejoining a channel helps, but relaunching or reinstalling (from Xcode) the app does not. Rebooting the device or deleting and reinstalling the app also helps. I do not activate the audio session in my app. I only configure it on launch using setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth]). So the flow is:
channelManager?.requestBeginTransmitting(channelUUID: globalChannelUUID)
func channelManager( _ channelManager: PTChannelManager, channelUUID: UUID, didBeginTransmittingFrom source: PTChannelTransmitRequestS
Replies: 1 · Boosts: 0 · Views: 156 · Activity: Nov ’25
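A generic sketch (an assumption, not the poster's code) of the tap lifecycle that usually pairs with this flow: install the input tap only after the PushToTalk framework activates the audio session, and remove it on deactivation. The PTChannelManagerDelegate conformance and its other callbacks are omitted here.

import AVFAudio

final class PTTAudioSketch: NSObject {
    private let engine = AVAudioEngine()

    // Call this from channelManager(_:didActivate:) once the framework has
    // activated the audio session on the app's behalf.
    func audioSessionDidActivate(_ audioSession: AVAudioSession) {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Forward the captured buffers to the transmit path here.
        }
        engine.prepare()
        try? engine.start()
    }

    // Call this from channelManager(_:didDeactivate:).
    func audioSessionDidDeactivate(_ audioSession: AVAudioSession) {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}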
[26] audioTimeRange would still be interesting for .volatileResults in SpeechTranscriber
So experimenting with the new SpeechTranscriber, if I do:
let transcriber = SpeechTranscriber( locale: locale, transcriptionOptions: [], reportingOptions: [.volatileResults], attributeOptions: [.audioTimeRange] )
only the final result has audio time ranges, not the volatile results. Is this a performance consideration? If there is no performance problem, it would be nice to have the option to also get speech time ranges for volatile responses. I'm not presenting the volatile text in the UI at all; I was just trying to keep statistics about the non-speech and speech noise levels, so I can determine when the noise level falls under the noise floor for a while. The goal here was to finalize the recording automatically when the noise level indicates that the user has finished speaking.
Replies: 6 · Boosts: 0 · Views: 706 · Activity: Nov ’25
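Since the volatile results lack time ranges, one workaround for the noise-floor tracking described above is to measure the level of the captured audio directly. A hypothetical helper (not from the post), assuming the audio is already being tapped as float PCM buffers:

import AVFAudio
import Accelerate

// Root-mean-square level of the first channel of a captured buffer.
func rmsLevel(of buffer: AVAudioPCMBuffer) -> Float {
    guard let channel = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return 0 }
    var rms: Float = 0
    vDSP_rmsqv(channel, 1, &rms, vDSP_Length(buffer.frameLength))
    return rms
}

// Example policy: treat the user as finished speaking once rmsLevel stays
// below an assumed noise floor for a chosen duration, then finalize.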
How to safely switch between mic configurations on iOS?
I have an iPadOS M-processor application with two different running configurations. In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone. The input/output nodes of the AVAudioEngine are configured with voice processing enabled. The built-in mic is formatted for 1 channel at 48 kHz. In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone. The input/output nodes of the AVAudioEngine are configured with voice processing disabled. The external mic is formatted for 2 channels at 44.1 kHz. I've written a configuration manager designed to safely switch between these two configurations. It works by stopping the AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample rates, and setting the appropriate state for voice processing to either true or false as required by the configuration. Finally, the new audio graph is constructed by attachi
Replies: 1 · Boosts: 0 · Views: 167 · Activity: Nov ’25
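A compressed sketch of the switching sequence described above, as an assumption about the shape of such a configuration manager rather than the poster's code:

import AVFAudio

final class MicConfigSwitcher {
    enum Config { case videoChat, measurement }

    private let engine = AVAudioEngine()

    func apply(_ config: Config) throws {
        let session = AVAudioSession.sharedInstance()

        // Stop the engine before touching the session or the I/O nodes.
        engine.stop()

        switch config {
        case .videoChat:
            try session.setCategory(.playAndRecord, mode: .videoChat, options: [])
            try session.setPreferredSampleRate(48_000)
            try engine.inputNode.setVoiceProcessingEnabled(true)
        case .measurement:
            try session.setCategory(.playAndRecord, mode: .measurement, options: [])
            try session.setPreferredSampleRate(44_100)
            try engine.inputNode.setVoiceProcessingEnabled(false)
        }

        try session.setActive(true)

        // Rebuild the graph against the nodes' current formats; the real app
        // would reattach and reconnect its processing nodes here.
        engine.connect(engine.inputNode, to: engine.mainMixerNode,
                       format: engine.inputNode.outputFormat(forBus: 0))
        engine.prepare()
        try engine.start()
    }
}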
Reply to Check whether app is built in debug or release mode
[quote='866973022, SoumyaMahunt, /thread/807924?answerId=866973022#866973022, /profile/SoumyaMahunt'] I think more appropriate solution for me will be deciding based on the way app was signed [/quote] OK. That’s easy to do on macOS, using SecCodeCopySigningInformation. On iOS the story isn’t as rosy. Historically there’s been no supported way to do this on iOS. Thinking about this today, I believe that recent iOS API additions make it possible, albeit in a non-obvious way. I got a proof of concept working today, and it looks promising. Unfortunately I don’t have time to flesh it out. But, honestly, it sounds like a fun weekend project so, with any luck, I’ll have an answer for you on Monday. Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple let myEmail = eskimo + 1 + @ + apple.com
Nov ’25
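For the macOS half mentioned above ("That's easy to do on macOS"), here is a minimal sketch of reading the current process's entitlements with SecCodeCopySigningInformation and checking get-task-allow, which is typically present in debug-signed builds. This is an assumption-level illustration, not Quinn's iOS proof of concept.

import Security

func isGetTaskAllowSet() -> Bool {
    // Get a dynamic code object for the running process.
    var code: SecCode?
    guard SecCodeCopySelf(SecCSFlags(), &code) == errSecSuccess, let code else { return false }

    // Convert it to a static code object so we can read signing information.
    var staticCode: SecStaticCode?
    guard SecCodeCopyStaticCode(code, SecCSFlags(), &staticCode) == errSecSuccess,
          let staticCode else { return false }

    // Ask for signing and requirement information, which includes the
    // embedded entitlements dictionary.
    var info: CFDictionary?
    let flags = SecCSFlags(rawValue: kSecCSSigningInformation | kSecCSRequirementInformation)
    guard SecCodeCopySigningInformation(staticCode, flags, &info) == errSecSuccess,
          let dict = info as? [String: Any],
          let entitlements = dict[kSecCodeInfoEntitlementsDict as String] as? [String: Any]
    else { return false }

    return entitlements["get-task-allow"] as? Bool ?? false
}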