Core Audio

Interact with the audio hardware of a device using Core Audio.

Posts under Core Audio tag

37 Posts

Post

Replies

Boosts

Views

Activity

block microphone and speakers due to security reasons
Hello, As part of developing a DLP system, the microphone and speakers should be blocked. My solution involves muting devices by changing the property kAudioDevicePropertyMute. However, this solution allows the user to unmute the device, and the app must implement a property listener to mute the device again. The problem is that muting takes some time and the device is temporarily unmuted. Admittedly, it takes less than a second, but nevertheless, it appears insecure. Is there an Apple-recommended approach to implement such blocking more securely? Maybe some solution which is based on IOKit. Thank you in advance, Pavel
2
0
310
Aug ’25
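A minimal sketch of the listener-based re-mute approach the post describes, assuming a valid input `deviceID`; as the poster notes, this still leaves a brief unmuted window between the change and the re-mute, so it illustrates the race rather than solving it:

```swift
import CoreAudio

// Address of the mute property on the input scope.
var muteAddress = AudioObjectPropertyAddress(
    mSelector: kAudioDevicePropertyMute,
    mScope: kAudioObjectPropertyScopeInput,
    mElement: kAudioObjectPropertyElementMain)

func mute(_ deviceID: AudioObjectID) {
    var muted: UInt32 = 1
    AudioObjectSetPropertyData(deviceID, &muteAddress, 0, nil,
                               UInt32(MemoryLayout<UInt32>.size), &muted)
}

func enforceMute(on deviceID: AudioObjectID) {
    mute(deviceID)
    // Re-assert the mute whenever anything changes the property.
    AudioObjectAddPropertyListenerBlock(deviceID, &muteAddress, .main) { _, _ in
        mute(deviceID)
    }
}
```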
Logic Pro cannot load v3 audio unit with framework compiled with Swift 6
Sequoia 15.4.1 (24E263), Xcode 16.3 (16E140), Logic Pro 11.2.1. I’ve been developing a complex audio unit for macOS that works perfectly well in its own bespoke host app and is now well into its beta testing stage. It did take some effort to get it to work well in Logic Pro, however, and all was fine and working well until the following. The AU part is an empty app extension with a framework containing its code. The framework contains Swift code for the UI and C code for the DSP parts. When the framework is compiled using the Swift 5 compiler, the AU runs in Logic with no problems. (I should also mention that the AU passes the strictest auval tests.) But when the framework is compiled with Swift 6, Logic Pro cannot load it. Logic displays a message saying the audio unit could not be loaded and to contact the developer. My own host app loads the AU perfectly well with the Swift 6 version, so I know there’s nothing wrong with the audio unit. I cannot find any differences in any of the built output files except, of course, the actual binary code in the framework. I’ve worked for hours on this and cannot find a solution other than to build the framework with Swift 5. (I worked hard to get all the async code updated and working with Swift 6, so I feel a little cheated!) What is happening? Is this a bug in Logic? Is this a bug in the Swift 6 compiler/linker? I’m at the hands-in-the-air, tearing-out-hair stage (once again)!
1
0
541
Jul ’25
How to play Vorbis/OGG files with Swift?
Does anyone have a working example on how to play OGG files with Swift? I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift. I then used it to parse an OGG file successfully. Then I was required to use Obj-C++ to fill the PCM because this method seems to only be available in C++, and that part hangs my app for a good 40 seconds to several minutes depending on the audio file; it then plays for about 2 seconds and then crashes. I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS - I want to play the files on Mac). I also tried using the Cricket Audio framework below. https://github.com/sjmerel/ck It has a Swift example and it can play their proprietary soundbank format, but it is also supposed to play OGG and it just doesn't do anything when trying to play OGG, as you can see in the posted issue https://github.com/sjmerel/ck/issues/3 Right now I believe every player that can play OGGs on Mac is written in Objective-C or C++. Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really want to stay in Swift.
2
0
4.4k
Jul ’25
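One way to avoid the hang-then-crash pattern described above is to keep the decode loop in Swift and hand decoded PCM to AVAudioEngine in chunks. A rough sketch under stated assumptions: decodeNextChunk() is a hypothetical wrapper around libvorbisfile's ov_read() returning interleaved Float32 samples, and the stream is 44.1 kHz stereo:

```swift
import AVFoundation

// Hypothetical decoder stub: would wrap ov_read() from libvorbisfile,
// returning interleaved Float32 samples, or nil at end of stream.
func decodeNextChunk() -> [Float]? { /* your ov_read() wrapper */ nil }

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
// Assumption: the decoded stream is 44.1 kHz stereo.
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try engine.start()
player.play()

while let chunk = decodeNextChunk() {
    let channels = Int(format.channelCount)
    let frames = AVAudioFrameCount(chunk.count / channels)
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
    buffer.frameLength = frames
    // De-interleave the samples into the buffer's per-channel planes.
    for ch in 0..<channels {
        for f in 0..<Int(frames) {
            buffer.floatChannelData![ch][f] = chunk[f * channels + ch]
        }
    }
    player.scheduleBuffer(buffer)  // queues asynchronously; playback continues
}
```

In practice you would throttle this loop (for example via scheduleBuffer's completion handler) rather than queuing the whole file into memory at once.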
Graceful shutdown during background audio playback.
Hello. My team and I think we have an issue where our app is asked to gracefully shut down, followed by a SIGTERM. As we’ve learned, this is normally not an issue. However, it seems to also be happening while our app (an audio streamer) is actively playing in the background. From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where the background audio needs to be killed, but should it be considered part of normal operation? We hope that’s not the case. All we see in the logs is the graceful shutdown request. We can say with high certainty that it’s happening, though, as we know that playback is running within 0.5 seconds of the crash, without any other tracked user interaction. Can you verify whether this is intended behavior, and whether there’s something we can do about it from our end? From our logs it doesn’t look to be related to either memory usage within the app, or the system as a whole. Best, John
0
1
132
Jun ’25
Is there any way to disable PHASE/CoreAudio logging?
Is there a way to permanently disable PHASE SDK logging? It seems to be a lot chattier than Apple's other SDKs. While developing a RealityKit app that uses AudioPlaybackController, I must manually hide the PHASE SDK log output several times each day so I can see my app's log messages. Thank you.
0
0
356
Jun ’25
AVAssetResourceLoaderDelegate for radio stream
Hi everyone, I’m trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. an Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time. I’ve implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, the playback either stalls or fails to continue. Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio? Any tips, examples, or advice would be appreciated. Thanks!
0
0
157
Jun ’25
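For reference, the delegate callback in question has the following shape. A minimal sketch, assuming a custom URL scheme and an MP3 Icecast source; for a live stream the content length is deliberately left unset, since there is no known end, and each loading request is kept open and fed as bytes arrive:

```swift
import AVFoundation

final class LiveStreamLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource
                        loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let info = loadingRequest.contentInformationRequest {
            info.contentType = "public.mp3"          // assumption: MP3 stream
            info.isByteRangeAccessSupported = false  // live source, no seeking
            // contentLength left unset: an infinite stream has no known size.
        }
        // As network bytes arrive (from your own URLSession/socket code):
        //   loadingRequest.dataRequest?.respond(with: data)
        // Call loadingRequest.finishLoading() only when the stream truly ends.
        return true
    }
}
```

Keeping a strong reference to each outstanding AVAssetResourceLoadingRequest, and responding to it incrementally instead of finishing it, is the usual way to keep the loader receiving data for an open-ended source.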
AudioUnit may experience silent capture issues on iPadOS 18.4.1 or 18.5.
Among the millions of users of our online product, we have identified through data metrics that the silent audio data capture rate on iPadOS 18.4.1 or 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered a similar issue? The parameters we used are as follows.

AudioSession: category AVAudioSessionCategoryPlayAndRecord, mode AVAudioSessionModeDefault, options 77, preferredSampleRate 48000.000000, preferredIOBufferDuration 0.010000.

AudioUnit:

```c
format.mFormatID         = kAudioFormatLinearPCM;
format.mSampleRate       = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel   = 16;
format.mFramesPerPacket  = 1;
format.mBytesPerFrame    = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket   = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags      = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;

component.componentType         = kAudioUnitType_Output;
component.componentSubType      = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags        = 0;
component.componentFlagsMask    = 0;
```
0
0
148
Jun ’25
SystemAudio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue Description: I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn’t be completed. (OSStatus error 1852797029.)"). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce. Questions: Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture? Are there specific conditions or system states that might trigger this error sporadically? Are there any known workarounds for handling this intermittent failure case? Any insights or guidance would be greatly appreciated.
0
0
189
May ’25
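As an aside, OSStatus values like this one are usually four-character codes, and a small helper makes that visible; 1852797029 (0x6e6f7065) decodes to "nope":

```swift
import Foundation

// Decode an OSStatus into its four-character code, if it is printable ASCII.
func fourCC(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [UInt8((n >> 24) & 0xFF), UInt8((n >> 16) & 0xFF),
                 UInt8((n >> 8) & 0xFF), UInt8(n & 0xFF)]
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }),
          let code = String(bytes: bytes, encoding: .ascii) else {
        return String(status)  // not printable; fall back to the raw number
    }
    return "'\(code)'"
}

print(fourCC(1852797029))  // prints 'nope'
```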
how to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
0
0
171
May ’25
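On the CoreAudio side, enumeration from a daemon can be sketched with the HAL properties alone. This lists every audio device known to the system; detecting built-in devices would then mean inspecting per-device properties such as the transport type:

```swift
import CoreAudio

// Sketch: enumerate all audio devices via the HAL (CoreAudio is listed as
// daemon-safe in TN2083).
func allAudioDevices() -> [AudioDeviceID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var size: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &size) == noErr else { return [] }
    var devices = [AudioDeviceID](repeating: 0,
                                  count: Int(size) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &size, &devices) == noErr else { return [] }
    return devices
}

// A device whose kAudioDevicePropertyTransportType reads
// kAudioDeviceTransportTypeBuiltIn is a built-in device.
```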
The impact of MicrophoneMode on my Mac application.
I have a 4-input, 4-output hardware device and an 8-input, 8-output virtual device, which I combine into an aggregate device. I am using the SimplyCoreAudio library to get the channel count:

```swift
aggregationDevice!.channels(scope: .input)  // => 12
aggregationDevice!.channels(scope: .output) // => 12
```

When the program's MicrophoneMode is set to standard, the channel count is correct. However, when I set the MicrophoneMode to voiceIsolation, the channel count is incorrect:

```swift
aggregationDevice!.channels(scope: .input)  // => 4
aggregationDevice!.channels(scope: .output) // => 12
```

Below is the code for creating the aggregate device:

```swift
func createAggregateDevice(mainDevice: AudioDevice,
                           secondDevice: AudioDevice?,
                           named name: String,
                           uid: String) -> AudioDevice? {
    guard let mainDeviceUID = mainDevice.uid else { return nil }
    var deviceList: [[String: Any]] = [
        [
            kAudioSubDeviceUIDKey: mainDeviceUID,
            kAudioSubDeviceDriftCompensationKey: 1
        ]
    ]
    // make sure same device isn't added twice
    if let secondDeviceUID = secondDevice?.uid, secondDeviceUID != mainDeviceUID {
        deviceList.append([
            kAudioSubDeviceUIDKey: secondDeviceUID,
            kAudioSubDeviceDriftCompensationKey: 1,
            kAudioSubDeviceInputChannelsKey: 8
        ])
    }
    let desc: [String: Any] = [
        kAudioAggregateDeviceNameKey: name,
        kAudioAggregateDeviceUIDKey: uid,
        kAudioAggregateDeviceSubDeviceListKey: deviceList,
        kAudioAggregateDeviceMainSubDeviceKey: mainDeviceUID,
        kAudioAggregateDeviceIsPrivateKey: false,
    ]
    var deviceID: AudioDeviceID = 0
    let error = AudioHardwareCreateAggregateDevice(desc as CFDictionary, &deviceID)
    guard error == noErr else { return nil }
    return AudioDevice.lookup(by: deviceID)
}
```

I hope someone can tell me the reason. Thank you!
1
0
594
May ’25
How to request permission for System Audio Recording Only?
Hi community, I'm wondering how I can request the "System Audio Recording Only" permission under Privacy & Security -> Screen & System Audio Recording via Swift? I did a bunch of searching but didn't find good documentation on it. I tried another approach, https://github.com/insidegui/AudioCap/blob/main/AudioCap/ProcessTap/AudioRecordingPermission.swift, which doesn't work very reliably.
2
0
794
May ’25
CoreAudio server plugin: updating kAudioStreamPropertyAvailablePhysicalFormats
Hi, our CoreAudio server plugin supports different clock sources. A switch might result in a change of the selectable sample rates (and other settings). On a clock source switch the plugin reconfigures the set of available kAudioStreamPropertyAvailablePhysicalFormats and announces the change via AudioServerPlugInHostInterface::PropertiesChanged(). However, at least Audio MIDI Setup seems not to update its UI: the changes are only reflected after selecting another device and re-selecting the device of interest. (Latest macOS, M4 Mac mini.) Is this a bug? Or is our CoreAudio server plugin required to indicate the change in the list of available audio formats differently? Thanks!
0
0
124
May ’25
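A diagnostic sketch for the situation above, from the client side rather than the plugin: reading a stream's available physical formats directly shows whether the change is visible to the HAL even when Audio MIDI Setup's UI lags. It assumes `streamID` is an AudioStreamID previously obtained via kAudioDevicePropertyStreams:

```swift
import CoreAudio

// Read the list of available physical formats for a given audio stream.
func availablePhysicalFormats(of streamID: AudioObjectID) -> [AudioStreamRangedDescription] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioStreamPropertyAvailablePhysicalFormats,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var size: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(streamID, &address, 0, nil, &size) == noErr else { return [] }
    var formats = [AudioStreamRangedDescription](
        repeating: AudioStreamRangedDescription(),
        count: Int(size) / MemoryLayout<AudioStreamRangedDescription>.size)
    guard AudioObjectGetPropertyData(streamID, &address, 0, nil, &size, &formats) == noErr else { return [] }
    return formats
}
```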
Cleaning up CoreAudio preferences
Hi, our virtual CoreAudio server plugin dynamically creates and removes CoreAudio devices. Each time it does so it leaves traces in /Library/Preferences/Audio (com.apple.audio.DeviceSettings.plist and com.apple.audio.SystemSettings.plist). The files on the test machine have now grown beyond 1 MB, and the system keeps recreating them. How can I manually remove/clean up these files? (This is for development only; it has already become pretty tedious to evaluate current device settings for debugging purposes.) How can the CoreAudio server plugin make sure that once a device has been removed, its entries are also removed from the .plist? (It already removes its storage, but the system still keeps other settings.) Is there some documentation about what gets stored and how the settings are organized in these preferences? (This is also for development and debugging only; we are not intending to access these settings directly.) Thanks!
5
0
307
Apr ’25
CoreAudio server plugin gaining write access with SystemConfiguration.framework functions
Hi, our CoreAudio server plugin utilizes the SystemConfiguration.framework to store and restore specific shared, system-wide settings. While our application can authenticate to gain write access to the shared configuration settings via the SystemConfiguration.framework, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate. Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this? Thanks!
2
0
187
Apr ’25
Is PushToTalk Framework Half-Duplex Only, or Does It Include Built-in Full-Duplex Audio Capabilities?
Hello everyone, Our team is currently developing an iOS application requiring real-time audio communication and evaluating the most suitable frameworks. Options include CallKit, custom solutions using AVAudioEngine/Audio Units, and the PushToTalk framework. Regarding the PushToTalk framework, we have some questions about its core design and capabilities that we'd appreciate clarification on from the community or Apple engineers. Based on the PushToTalk framework documentation, its API design (e.g., methods like requestBeginTransmission, endTransmission which imply explicitly requesting transmission rights), and its system UI integration, it strongly appears oriented towards half-duplex communication scenarios, similar to traditional walkie-talkies where only one participant transmits audio at a time. Is this understanding accurate? Is the PushToTalk framework's design strictly limited to managing half-duplex audio interactions? Or, does the framework itself also provide built-in mechanisms or APIs to manage simultaneous, bi-directional (full-duplex) audio streaming between participants? To be clear, we are asking about the inherent capabilities of the PushToTalk framework itself. We understand it's possible to use PushToTalk for signaling and UI management, and separately implement the actual full-duplex audio stream using AVAudioEngine or other audio APIs. However, we want to confirm if the framework itself is designed to support or simplify full-duplex audio communication. Have other developers investigated the specific limitations or capabilities of the PushToTalk framework regarding audio transmission modes (half-duplex vs. full-duplex)? Are there any official documentation references or WWDC sessions that explicitly clarify the framework's support (or lack thereof) for full-duplex operation? If PushToTalk is indeed limited to half-duplex, what are the generally accepted best practices for apps requiring full-duplex calls – transitioning directly to CallKit (where applicable) or building custom audio processing pipelines? Clarifying this point is crucial for us to select the correct technology stack for our application. Any relevant insights, documentation pointers, or shared development experiences would be greatly appreciated. Thank you for your help!
1
0
229
Apr ’25
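For context, the transmit-request API the post refers to looks like the following. A minimal sketch assuming a `coordinator` object that adopts both PTChannelManagerDelegate and PTChannelRestorationDelegate; the one-explicit-transmission-at-a-time shape is what suggests a half-duplex design:

```swift
import Foundation
import PushToTalk

// Sketch of the explicit transmit-request flow on iOS.
func joinAndTalk(coordinator: PTChannelManagerDelegate & PTChannelRestorationDelegate) async throws {
    let manager = try await PTChannelManager.channelManager(
        delegate: coordinator, restorationDelegate: coordinator)
    let channelUUID = UUID()
    let descriptor = PTChannelDescriptor(name: "Dispatch", image: nil)
    manager.requestJoinChannel(channelUUID: channelUUID, descriptor: descriptor)

    // While the user holds the talk button: ask the system for the floor.
    manager.requestBeginTransmitting(channelUUID: channelUUID)
    // On release: give it back.
    manager.stopTransmitting(channelUUID: channelUUID)
}
```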
Does USB Audio in macOS support 12 channels with 16bit PCM samples?
I have a custom USB Audio Class 2 (UAC2) compatible device. When I connect this custom device to a MacBook with a configuration of up to 10 channels (16-bit), everything seems to work fine. However, when I increase the channel count to 12, the MacBook does not recognize the 12 channels. It only shows the channel count as 0. TN2274 is the only source where I found some information about Apple's Audio Class Drivers, but it doesn't mention any limitations regarding channel counts. Could you let me know the current limitations of the Audio Class Drivers on the latest macOS versions? What configuration should I use to get 12 channels working? P.S. I also found that a 12-channel, 8-bit configuration is detected by the MacBook, but I want it to work with 16 bits. For more detail please check FB17098863.
6
0
360
Apr ’25
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown: Issue Description: I've installed a tap on an input device to convert audio to an optimal sample rate. There's a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls—the converter gets deallocated from memory, causing the program to crash. Symptoms: The converter node is being deallocated during video calls. The program crashes entirely when this happens. Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected. The Big Challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call. Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
0
0
121
Apr ’25
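The standard HAL mechanism for the monitoring problem described above is a property listener on the device's nominal sample rate. A minimal sketch for comparison; the post reports such listeners not firing during Zoom/FaceTime calls, so treat this as the baseline being debugged, not a confirmed fix:

```swift
import CoreAudio

// Sketch: observe nominal sample-rate changes on a device.
func observeSampleRate(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    AudioObjectAddPropertyListenerBlock(deviceID, &address, .main) { _, _ in
        // Re-read the rate when the HAL reports a change.
        var addr = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var rate: Float64 = 0
        var size = UInt32(MemoryLayout<Float64>.size)
        AudioObjectGetPropertyData(deviceID, &addr, 0, nil, &size, &rate)
        print("nominal sample rate is now \(rate) Hz")
    }
}
```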