Search results for “Popping Sound”

20,039 results found


Show / Hide HAL Virtual Audio Device Based on App State
I am developing a macOS virtual audio device using an Audio Server Plug-In (HAL). I want the virtual device to be visible to all applications only when my main app is running, and completely hidden from all apps when the app is closed. The goal is to dynamically control device visibility based on app state without reinstalling the driver. What is the recommended way for the app to notify the HAL plug-in about its running or closed state? Any guidance on best-practice architecture for this scenario would be appreciated.
Replies: 1 · Boosts: 0 · Views: 223 · Jan ’26
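For context, one channel an app can use to talk to a HAL plug-in is a custom property on the plug-in's AudioObject, which the plug-in implements and reacts to. A minimal sketch of the app side, assuming a made-up 'visi' selector that the plug-in would also have to support (illustrative only, not a confirmed recommendation):

```swift
import CoreAudio

// Hypothetical four-char selector ('visi') agreed on by the app and the
// plug-in; the plug-in would publish or hide its device when it changes.
let kMyPlugInPropertyVisible = AudioObjectPropertySelector(0x7669_7369)

// Sets the custom property on the plug-in's AudioObject. The plug-in's
// object ID can be resolved from its bundle ID via
// kAudioHardwarePropertyTranslateBundleIDToPlugInObject.
func setDeviceVisible(_ visible: Bool, plugInObjectID: AudioObjectID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kMyPlugInPropertyVisible,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var value: UInt32 = visible ? 1 : 0
    return AudioObjectSetPropertyData(
        plugInObjectID, &address, 0, nil,
        UInt32(MemoryLayout<UInt32>.size), &value)
}
```

The app would call this with true at launch and false on termination, plus some liveness check in the plug-in for the case where the app crashes without notifying it.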
Reply to Show / Hide HAL Virtual Audio Device Based on App State
I am developing a macOS virtual audio device using an Audio Server Plug-In (HAL). I want the virtual device to be visible to all applications only when my main app is running, and completely hidden from all apps when the app is closed. The goal is to dynamically control device visibility based on app state without reinstalling the driver. What is the recommended way for the app to notify the HAL plug-in about its running or closed state? Any guidance on best-practice architecture for this scenario would be appreciated. So, full disclaimer, I don't have a lot of direct experience with AudioServer plugins, so the advice I'm giving is coming from a general device management/architecture perspective, NOT from detailed experience with this particular plugin type. In any case, the place I would probably start with here is the sample project Building an Audio Server Plug-in and Driver Extension. You don't need any kind of DEXT for this, but that sample does show how to deal with device appearance…
Topic: App & System Services SubTopic: Drivers
Jan ’26
Activating a Container App from a Custom Keyboard Extension to Enable Continuous Voice Input While Preserving the Original Typing Context
Project Background: I am developing a third-party custom keyboard for iOS whose primary feature is real-time voice input. In my current design, responsibilities are split as follows:
1. The container (main) app is responsible for: audio recording and speech recognition (ASR).
2. The keyboard extension is responsible for: providing the keyboard UI, initiating the voice input workflow, receiving transcription results via an App Group, and inserting recognized text into the active text field using textDocumentProxy.insertText(_:).
Intended User Flow: The user is typing in a third-party app (for example, WeChat) using my custom keyboard. The user taps a “Voice Input” button in the keyboard extension. The keyboard extension activates the container app so that audio recording and ASR can begin. After recording has started, control returns to the original app where the user was typing. The container app continues running in the background, maintaining active audio recording and…
Replies: 3 · Boosts: 0 · Views: 193 · Jan ’26
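For background on the App Group handoff this post describes, a minimal sketch of the extension side; the group identifier and key are placeholders, and something (a timer or a Darwin notification) would need to trigger the read:

```swift
import UIKit

// Hypothetical App Group shared by the container app and the extension.
let appGroupID = "group.com.example.voicekeyboard"

final class VoiceKeyboardViewController: UIInputViewController {
    private let shared = UserDefaults(suiteName: appGroupID)

    // Called when the container app signals that new text is available:
    // read the pending transcription and insert it at the cursor.
    func insertPendingTranscription() {
        guard let text = shared?.string(forKey: "pendingTranscription") else { return }
        textDocumentProxy.insertText(text)
        shared?.removeObject(forKey: "pendingTranscription")
    }
}
```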
Reply to CoreBluetooth multi-peripheral high-frequency BLE streaming shows uneven packet distribution and lag on some A16/A17 iPads
This all sounds like an issue with specific hardware units. Let's go the bug report route on this. First we need a Bluetooth diagnostic log. Please go to https://developer.apple.com/bug-reporting/profiles-and-logs/ and follow the instructions for Bluetooth for iOS to install a logging profile on your device. Then, once the logging profile is installed: reproduce the problem, keeping track of the actual time of the actions you take and the result you see; also include a sniffer log of the same if you have it (please include the actual log, and an ASCII export of it from the logger); make sure there aren’t any extraneous BLE devices around, and no other apps are trying to connect to some other BLE device while you are conducting this test. Once the problem is reproduced, follow the instructions at the above link to trigger a sysdiagnose. Please also let me know the BLE hardware/firmware/SDK you are using, along with version info. Once you have these, along with any other information you think would be useful…
Topic: App & System Services SubTopic: Core OS
Jan ’26
Building a Full Space app that enables sharing a visionOS experience with nearby users.
Hello, I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users. Intended Features: A Mixed Full Space app in which dozens of 3D models are placed in the space. These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines. The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space. Some media elements play automatically, while others are triggered by user interaction. However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as: audio only, spatial audio, video without audio, and video with audio. I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario. Questions: Is it technically possible…
Replies: 1 · Boosts: 0 · Views: 370 · Jan ’26
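For reference, per-player SharePlay coordination goes through AVPlayer's playbackCoordinator; whether that covers every media combination listed above is exactly the open question. A minimal sketch with a hypothetical activity type:

```swift
import AVFoundation
import GroupActivities

// Hypothetical activity representing the shared Full Space session.
struct SharedSpaceActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Space"
        meta.type = .generic
        return meta
    }
}

// Each AVPlayer (audio-only or video) gets its coordinator attached to
// the same GroupSession, which keeps playback in sync across participants.
func coordinate(_ player: AVPlayer, with session: GroupSession<SharedSpaceActivity>) {
    player.playbackCoordinator.coordinateWithSession(session)
}
```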
iOS React Native: Can two WebRTC stacks (Wazo & Jitsi) share media?
Hi everyone, I’m building a React Native iOS app where I’m integrating Wazo (native WebRTC) and Jitsi (WebView / WebRTC). Use case: Wazo is used to maintain a background call session (mainly signaling + audio keep-alive); Jitsi is used in the foreground for video calls. Problem: When Jitsi starts, it takes control of the microphone and camera. The Wazo call disconnects after ~5 minutes (likely due to a media / audio session conflict). Even if Wazo audio/video is muted or tracks are disabled, the session still drops. My questions: Is it officially supported or recommended to run two WebRTC stacks (Wazo + Jitsi) simultaneously on iOS? Can Wazo stay connected without active audio/video tracks while Jitsi uses the mic/camera? Is there a way to release Wazo media streams temporarily (but keep signaling alive) while Jitsi is loading or active? Are there any AVAudioSession / background mode limitations on iOS that make this impossible by design? If this is not supported, what is the rec…
Replies: 1 · Boosts: 0 · Views: 358 · Jan ’26
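For context, both stacks ultimately share the single app-wide AVAudioSession, so conflicts like this usually come down to the two engines fighting over category, mode, and activation. A minimal sketch of one shared configuration both sides would have to agree on (the option set is an assumption, not a verified fix):

```swift
import AVFoundation

// One shared session configuration; .mixWithOthers is the key option if
// two engines need audio I/O at the same time.
func configureSharedAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)
}
```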
CGContextDrawShading broken on macOS Tahoe for apps built with a macOS SDK older than 14.5
On macOS Tahoe (26.0, 26.1, or 26.2), when loading an application that was built with an SDK older than 14.5, all CGContextDrawShading calls that draw onto the screen (inside NSView's drawRect:) fail silently, filling the path with a single color instead of a gradient. If rendering into a local CGBitmapContext instead of the NSView context on Tahoe, CGShading works as expected. On macOS 15 and earlier, CGShading works as expected too. If the app is built with SDK 14.5 or newer, CGShading works normally on macOS Tahoe. Recent applications can of course be rebuilt with a more recent SDK, which fixes the problem. However, for Audio Units or any other type of plug-in, even if they are built with the appropriate SDK, the problem arises whenever they are loaded inside a legacy application built with an older SDK, which customers complain about and do not understand. I have noticed that there have been a few changes in macOS Tahoe regarding the CGShading APIs…
Topic: UI Frameworks SubTopic: AppKit
Replies: 1 · Boosts: 0 · Views: 161 · Jan ’26
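For reference, a minimal sketch of the draw path the post describes (a plain axial gradient via CGShading inside draw(_:)), which can serve as a reproduction case:

```swift
import AppKit

// Draws a left-to-right blue-to-red gradient with CGShading +
// CGContext.drawShading, the call path reported to fail on Tahoe for
// binaries built against pre-14.5 SDKs.
final class GradientView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }

        var callbacks = CGFunctionCallbacks(
            version: 0,
            evaluate: { _, input, output in
                let t = input[0]      // position 0...1 along the axis
                output[0] = t         // red ramps up
                output[1] = 0         // no green
                output[2] = 1 - t     // blue ramps down
                output[3] = 1         // opaque
            },
            releaseInfo: nil)

        let domain: [CGFloat] = [0, 1]
        let range: [CGFloat] = [0, 1, 0, 1, 0, 1, 0, 1]
        guard let fn = CGFunction(info: nil, domainDimension: 1, domain: domain,
                                  rangeDimension: 4, range: range,
                                  callbacks: &callbacks),
              let shading = CGShading(axialSpace: CGColorSpaceCreateDeviceRGB(),
                                      start: .zero,
                                      end: CGPoint(x: bounds.width, y: 0),
                                      function: fn,
                                      extendStart: false, extendEnd: false)
        else { return }

        ctx.drawShading(shading)
    }
}
```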
Reply to iOS UserDefaults Intermittently Returns Nil After getPreferences() Call - Race Condition?
Looking at the code you posted, the only way [1] that preferences can be nil at line 15 is if your object-decoding code fails. You didn’t show that code, so it’s hard to tell what’s going on there. It sounds like you can reproduce this reasonably reliably. Given that, you can debug this using logging. I have general advice on that topic in Testing and Debugging Code Running in the Background. IMPORTANT It’s time to stop using NSLog habitually. Rather, use print(…) for transient debugging and Logger for debugging that you want to keep in your codebase (or for cases like this, where you need to debug without Xcode being attached). Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple let myEmail = eskimo + 1 + @ + apple.com [1] Well, not the only way. You could be dealing with an issue that’s outside of the scope of the Objective-C / Swift execution models, like a memory corruption problem. However, let’s rule out the easy things first (-:
Topic: App & System Services SubTopic: General
Jan ’26
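A minimal sketch of the Logger pattern recommended above; the subsystem and category strings are placeholders:

```swift
import os

// Messages logged this way persist in the unified log and can be read
// later with Console.app or, from Terminal:
//   log show --predicate 'subsystem == "com.example.myapp"'
let prefsLog = Logger(subsystem: "com.example.myapp", category: "preferences")

func recordDecodeFailure(_ error: Error) {
    prefsLog.error("preferences decode failed: \(error.localizedDescription)")
}
```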
iOS UserDefaults Intermittently Returns Nil After getPreferences() Call - Race Condition?
iOS Intermittent Bug: UserDefaults Preferences Loading Issue. Problem Summary: We're experiencing an intermittent issue where UserPreferences.shared.preferences returns inconsistent values even after calling getPreferences(). The behavior is unpredictable and affects critical functionality. Environment: iOS 15+; Objective-C with Swift interop; storage is UserDefaults with an App Group (group.com.jci.tyco.glss); singleton pattern for UserPreferences (a Swift class). The Issue: When a push notification arrives and triggers the showEvent: method, user preferences are sometimes loaded correctly and sometimes return nil or default values. Scenario A works ~60% of the time; Scenario B fails ~40% of the time. Observed Pattern from extensive logging over multiple test runs. Key Observation: at app launch, preferences often load successfully; seconds later, when a push arrives, the same preferences become unavailable; when the user navigates to another screen and back, preferences suddenly work again. Code Structure Ob…
Replies: 1 · Boosts: 0 · Views: 116 · Jan ’26
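For context, one way to narrow down a failure like this is to log each stage of the load separately, so a missing key is distinguishable from a failed decode. A sketch, with a hypothetical Preferences type and key (the suite name is the one from the post):

```swift
import Foundation
import os

// Hypothetical shape of the stored preferences.
struct Preferences: Codable { var soundEnabled: Bool }

func loadPreferences() -> Preferences? {
    let log = Logger(subsystem: "com.example.app", category: "prefs")
    guard let suite = UserDefaults(suiteName: "group.com.jci.tyco.glss") else {
        log.error("App Group suite unavailable; check entitlements")
        return nil
    }
    guard let data = suite.data(forKey: "preferences") else {
        log.error("preferences key missing from suite")   // failure mode 1
        return nil
    }
    do {
        return try JSONDecoder().decode(Preferences.self, from: data)
    } catch {
        log.error("decode failed: \(error.localizedDescription)") // failure mode 2
        return nil
    }
}
```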
iPhone 14 Pro: External USB mic not available in AVAudioSession for call apps, but works in Voice Memos & Instagram Live
I’m facing a strange audio routing issue that seems specific to iPhone 14 Pro / Pro Max. I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage). 🔍 What’s happening: I’m using an external USB microphone. On these devices: iPhone 11 → ✅ USB mic works; iPhone 13 → ✅ USB mic works; iPhone 17 Pro → ✅ USB mic works; iPhone 14 Pro Max → ❌ USB mic does NOT work. On iPhone 14 Pro Max, the same USB mic: ✅ works in Voice Memos; ✅ works in Instagram Live; ❌ does NOT appear as an input option in my app; ❌ does NOT work in WhatsApp / Instagram calls. Also: in my app on iPhone 14 Pro Max, iOS does not show the audio input selector UI; on iPhone 17 Pro, the same app and same build does show the selector and the USB mic works. ⚙️ My audio session config (LiveKit): await AudioSession.setAppleAudioConfiguration({ audioCategory: 'playAndRecord', audioMode: 'default', audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'…
Replies: 1 · Boosts: 0 · Views: 132 · Jan ’26
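For reference, a useful native-layer check here is to enumerate AVAudioSession.availableInputs on each device and see whether the USB port is offered to the app at all; a minimal Swift sketch (the category and mode chosen are assumptions):

```swift
import AVFoundation

// Log every input route the session offers and prefer the USB mic if it
// shows up; on an affected device, compare this list with what Voice
// Memos sees.
func selectUSBMicIfAvailable() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .videoChat,
                            options: [.allowBluetooth])
    try session.setActive(true)
    for input in session.availableInputs ?? [] {
        print("input:", input.portType.rawValue, input.portName)
        if input.portType == .usbAudio {
            try session.setPreferredInput(input)
        }
    }
}
```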
Reply to DesktopServicesHelper appears to delete or unlink the source file before the ESF auth event deadline is reached, rather than waiting for the full deadline window.
So, in my earlier post, I said: Misunderstanding what the system was actually doing. Which now leads to: To add more context, suppose the user is copying a file from one location to a network location. Am I correct that you've only actually seen this on network copies? And (possibly) only some network copies? As a side test, I'd be curious what happens if you tested with an AFP server instead of SMB. I ask because of this: If it took longer than 5 seconds to complete the file inspection (which is well below the deadline for the auth event), then DesktopServicesHelper just deletes this file created at the destination. This we are observing in macOS Tahoe only. Strictly speaking, 5s is a somewhat odd amount of time. That may not sound like a long time, but it's an eternity at the time scale the kernel operates at, particularly for any kind of local file system operation. You haven't actually said this, but I suspect you've found that the timing here is fairly precise; that is, it works fine with a delay…
Topic: App & System Services SubTopic: Core OS
Jan ’26
Reply to Zsh kills Python process with plenty of available VM
I see, thank you for pointing this out. So it is not a percentage, but an actual number of pages. Could you expand a little on how to interpret this in your previous answer? So, stepping back for a moment, the basic issue here is deciding when the kernel should stop just blindly backing memory. It COULD (and, historically, did) just limit that to total available storage; however, in practice, that just means the machine grinds itself into a useless state without actually failing. So, what macOS does is artificially limit the VM system to ensure that the machine always remains in a functional state. The next question then becomes how to implement that limit. There are lots of places you COULD limit the VM system, but the problem is that the VM system is complicated enough that many obvious metrics don't really work. For example, purgeable memory[1] means that simply counting dirty pages doesn't necessarily work: a process could have a very large number of dirty pages, but if they're all purgeable, they shouldn't really count…
Jan ’26
Background Audio Recording
I have an app that uses background audio recording. Following advice from others, I enabled the audio background mode to keep the audio session active, and this worked. But when I submitted the app to the App Store, it was rejected because the audio background mode is only supposed to be used for audio playback. How do I implement background recording while following Apple's guidelines?
Replies: 4 · Boosts: 0 · Views: 305 · Jan ’26
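For context, the mechanics are straightforward once the audio background mode is enabled; the App Review question is whether recording is a user-visible core feature of the app. A minimal sketch of a background-capable recording setup:

```swift
import AVFoundation

// With the "audio" background mode enabled in Signing & Capabilities,
// an active recording session keeps the app running in the background
// (the orange microphone indicator stays visible).
func startBackgroundCapableRecording(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    if !recorder.record() {
        print("record() failed to start")
    }
    return recorder
}
```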
Reply to Background Audio Recording
I can't get the audio to continue recording longer than 30 seconds while in the background. I have 'audio' in my Background Modes set and the recording/mic continues (with the orange pill) after I put the app in the background but only for 30 seconds. Then the app is suspended. How are you supposed to make this work? My app is a real-time transcription app, and I know this works just fine with the Otter app so I know it is possible.
Jan ’26
Can backgrounded apps record audio?
I'd like to find out: can backgrounded apps record audio? In the past, as I recall, backgrounded apps were pretty restricted and couldn't do much of anything, but I'm not familiar with the current state of affairs. With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone? Thanks.
Replies: 3 · Boosts: 0 · Views: 577 · Jan ’26