On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run this, I get audio for a split second and then nothing, indefinitely.
Any ideas what could be going wrong?
Here's a minimal code example to demonstrate:
#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(float))
void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}
AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };
void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}
int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;
    coreAudioQueue = NULL;
    OSStatus result;
    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr)
    {
        assert(false && "AudioQueueNewOutput failed!");
        abort();
    }
    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(false && "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }
    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }
    AudioQueueStart(coreAudioQueue, NULL);
    sleep(10); // some time to hear the audio
    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
Hi everyone,
We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners.
The core idea is:
– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions with the ability to extend them;
– users can skip a match and move on to the next one.
We have explored WebRTC and Twilio, but unfortunately, they don’t fit our use case.
Question:
What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario?
Has anyone encountered a similar challenge and can recommend which technologies or tools to use?
Thanks in advance!
I have a memory leak when using AVAudioPlayer. I managed to narrow down the issue to a very simple app, whose code I paste in at the end.
The memory leak starts immediately when I start playing sound, but only in the Simulator; on a real iPhone there is no memory leak.
Here is the code:
import SwiftUI
import AVFoundation

struct ContentView_Audio: View {
    var sound: AVAudioPlayer?

    init() {
        guard let path = Bundle.main.path(forResource: "cd201", ofType: "mp3") else { return }
        let url = URL(fileURLWithPath: path)
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.mixWithOthers])
        } catch {
            return
        }
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            return
        }
        do {
            sound = try AVAudioPlayer(contentsOf: url)
        } catch {
            return
        }
    }

    var body: some View {
        HStack {
            Button {
                playSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)
                    Image(systemName: "play.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()

            Button {
                stopSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)
                    Image(systemName: "stop.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()
        }
    }

    private func playSound() {
        guard sound != nil else { return }
        sound?.volume = 1
        // sound?.numberOfLoops = -1
        sound?.play()
    }

    func stopSound() {
        sound?.stop()
    }
}
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly baking together the two individual audio tracks into one, and offering that new file as the drag and drop file, so that all audio is played in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
1. "Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession.
2. Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession.
For a quick benchmark, a 3'40'' movie, step 1 takes ~1.7 seconds, and step two adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds, because I assume it touches the video data (which is unnecessary in this case), so I think the two-step approach here is the fastest.
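For reference, step 1 boils down to something like the sketch below (simplified, not my exact code; the file URLs and error handling are placeholders):
import AVFoundation
// Step 1 sketch: lay every audio track of the recording into one composition
// and export it as a single-track .m4a. URLs are placeholders.
func bakeAudioTracks(of movieURL: URL, to outputURL: URL,
                     completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: movieURL)
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: asset.duration)
    for track in asset.tracks(withMediaType: .audio) {
        let compTrack = composition.addMutableTrack(withMediaType: .audio,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid)
        try? compTrack?.insertTimeRange(range, of: track, at: .zero)
    }
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        completion(nil)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.exportAsynchronously { completion(export.error) }
}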
So, my question is, is there a faster way?
The best idea I can come up with right now is this: when initially recording the screen with system and microphone audio as separate tracks, also record both of them into a third, muted, "hidden" track I could use later on. That would basically eliminate step 1, since I'd just rip the two single audio tracks out of the movie and keep only the video and the (then unmuted) "hidden" track, but I'd still have a ~1.5 second delay there. There's also the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias
Will the restrictions on CallKit be lifted in China in 2025? If so, how can you legally use CallKit in China?
I'm developing a photo backup app.
To detect newly added or edited photos since the app launched, I keep a local dictionary in the format [localIdentifier: modification_date].
However, PHAsset.modificationDate is not reliable.
It often changes unexpectedly, possibly due to system operations like iCloud metadata updates.
Is there a more reliable way to detect whether a photo has been modified by the user since the last app launch?
I'm thinking about using a content hash instead, but I'm not sure how heavy this operation is in terms of performance.
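For what it's worth, here's the kind of content hash I have in mind (a sketch; hashing full originals, especially videos, could be expensive, and iCloud originals may need to be downloaded first):
import Photos
import CryptoKit
// Sketch: hash the asset's primary resource data to detect real content changes.
// Assumes photo library authorization has already been granted.
func contentHash(of asset: PHAsset, completion: @escaping (String?) -> Void) {
    guard let resource = PHAssetResource.assetResources(for: asset).first else {
        completion(nil)
        return
    }
    var hasher = SHA256()
    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true // may download originals from iCloud
    PHAssetResourceManager.default().requestData(for: resource, options: options) { chunk in
        hasher.update(data: chunk)
    } completionHandler: { error in
        guard error == nil else { completion(nil); return }
        completion(hasher.finalize().map { String(format: "%02x", $0) }.joined())
    }
}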
Hello Apple Developer Community,
I am seeking clarification on the intended display behavior of HLS audio tracks within the iOS 26 (or current beta) native player, specifically concerning the NAME and LANGUAGE attributes of the EXT-X-MEDIA tag.
In our HLS manifests, we define alternative audio tracks using EXT-X-MEDIA tags, like so:
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-1",DEFAULT=YES,AUTOSELECT=YES,URI="audio_ja.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-2",URI="audio_en.m3u8"
Our observation is that when an audio track is selected and its name is displayed in the native iOS media controls (e.g., Control Center or within a full-screen video player's UI), the value specified in the NAME attribute ("AUDIO-1", "AUDIO-2") does not seem to be used. Instead, the display appears to derive from the LANGUAGE attribute ("ja", "en"), often showing the system's localized string for that language (e.g., "Japanese", "English").
We would like to understand the official or intended behavior regarding this.
Is it the expected behavior for the iOS native player to prioritize the LANGUAGE attribute (or its localized equivalent) over the NAME attribute for displaying the selected audio track's label?
If this is the intended design, what is the recommended best practice for developers who wish to present a custom, human-readable name for audio tracks (beyond the standard language name) in the native iOS UI?
Are there any specific AVPlayer properties or AVMediaSelectionOption considerations that would allow more granular control over this display, or is this entirely managed by the system based on the LANGUAGE attribute?
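For context, here is roughly how we inspect the audio options today (the URL is a placeholder); displayName already comes back as the localized language name rather than the NAME value:
import AVFoundation
// Sketch: dump what AVFoundation reports for each alternative audio rendition.
let asset = AVURLAsset(url: URL(string: "https://example.com/master.m3u8")!)
asset.loadMediaSelectionGroup(for: .audible) { group, _ in
    guard let group = group else { return }
    for option in group.options {
        // displayName appears to be derived from LANGUAGE, not the NAME attribute.
        print(option.displayName, option.extendedLanguageTag ?? "-")
    }
}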
Any insights or official guidance on this behavior in iOS 26 (and potentially previous versions) would be greatly appreciated.
Thank you for your time and assistance.
In the past, when using Lightning, many external devices had to go through MFi certification. However, since the iPhone 15 switched from Lightning to USB-C, is MFi certification still required?
Our company has developed several UVC devices, and we have confirmed that iPads can read frames from external cameras through the external device type in AVFoundation. However, this is not supported on iPhones.
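For reference, the check we used is roughly the sketch below; with the .external device type (iPadOS 17+), our UVC cameras show up on iPad but not on iPhone:
import AVFoundation
// Sketch: discover external (UVC) cameras via the .external device type.
let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                 mediaType: .video,
                                                 position: .unspecified)
for device in discovery.devices {
    print(device.localizedName, device.uniqueID)
}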
We are currently exploring feasible ways to enable UVC device support on iPhones. Is MFi certification the only option? If so, is the MFi certification process for USB-C the same as it was for Lightning? Does it still require purchasing an MFi chip and manufacturing specially designed USB-C cables?
In MusicKit Web the playback states are provided as numbers.
For example, the playbackStateDidChange event listener will return:
{oldState: 2, state: 3, item:...}
When the state changes from playing (2) to paused (3).
Those are pretty easy to guess, but I'm having a hard time with some of the others: completed, ended, loading, none, paused, playing, seeking, stalled, stopped, waiting.
I cannot find a mapping of states to numbers documented anywhere. I got the above states from an enum in a d.ts file that is often incorrect/incomplete.
Can someone help out pointing to the docs or provide a mapping?
Thanks.
When using AVSampleBufferDisplayLayer to play H.264 and H.265 video with more than 7 B-frames, frame drops occur. The more B-frames there are, the more noticeable the frame drops become, for example with 15 B-frames.
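For reference, the playback path is essentially the standard AVSampleBufferDisplayLayer enqueue pattern (a sketch; nextSampleBuffer() is a placeholder for whatever demuxes the file into compressed CMSampleBuffers in decode order):
import AVFoundation
// Sketch of the standard enqueue pattern.
let displayLayer = AVSampleBufferDisplayLayer()
let feederQueue = DispatchQueue(label: "sample.feeder")
func startFeeding(nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
    displayLayer.requestMediaDataWhenReady(on: feederQueue) {
        while displayLayer.isReadyForMoreMediaData {
            guard let buffer = nextSampleBuffer() else {
                displayLayer.stopRequestingMediaData()
                return
            }
            displayLayer.enqueue(buffer)
        }
    }
}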
To reproduce, use FFmpeg to transcode a video file with visible timestamps and frame numbers (x264 or x265):
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx264 -x264-params "bframes=15:b-adapt=0" -crf 30 -y x264_bf15.mp4
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx265 -x265-params "bframes=15:b-adapt=0" -crf 30 -y x265_bf15.mp4
Use the demo player from this repository to reproduce the issue: https://github.com/msfrms/CustomPlayer
Frame drops can be observed, and the following log can be found in the device's console:
mediaserverd <<<< IQ-CA >>>> piqca_gmstats_dump: FIQCA(0x1266f4000) recent frames: enqueued: 184, displayed: 138, dropped: 42, flushed: 0, evicted: 3, >16ms late: 2
P.S. I was using an iPhone 11 with iOS 14.6 to reproduce this issue.
May I ask why frame drops occur in this case?
Is there any configuration or API usage change that could help fix the frame drop issue?
Many thanks!
I added an RPSystemBroadcastPickerView to the app. After tapping it, no method of SampleHandler is triggered.
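The picker is added roughly like this (a sketch; the bundle identifier is a placeholder and must match the broadcast upload extension's bundle ID):
import ReplayKit
import UIKit
final class BroadcastViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Sketch: add the system broadcast picker and point it at the upload extension.
        let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
        picker.preferredExtension = "com.example.MyApp.BroadcastExtension" // placeholder
        picker.showsMicrophoneButton = false
        view.addSubview(picker)
    }
}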
Hello,
I have an existing AUv3 instrument plugin. In the plug-in, users can access files (audio files, song projects) via a UIDocumentPickerViewController.
In Logic Pro, (and some other hosts, but not all), the document picker is unable to receive touches, while a keyboard case is attached to the iPad.
Removing the case (this is an Apple brand iPad case) allows the interactions to resume and allows me to pick files in the usual way.
One of my users reports this non-responsive behavior occurs even after disconnecting their keyboard.
I have fiddled with entitlements all day, and have determined that is not the issue, since the keyboard disconnection appears to fix it every time for me.
Here is my (very boilerplate) presentation code:
guard let type = UTType("com.my.type") else {
    return
}
let fileBrowser = UIDocumentPickerViewController(forOpeningContentTypes: [type])
fileBrowser.overrideUserInterfaceStyle = .dark
fileBrowser.delegate = self
fileBrowser.directoryURL = myFileFolderURL()
self.present(fileBrowser, animated: true) { }
Hi guys,
Can I use CMIO to achieve the following feature on macOS when a USB device (Camera/Mic/Speaker) is connected:
When a third-party video conferencing app is not in a meeting, ensure the app defaults to using the USB device (Camera/Mic/Speaker).
When a third-party conferencing app is in a meeting, ensure the app automatically switches to the USB device (Camera/Mic/Speaker).
AVCaptureVideoDataOutput.preparesCellularRadioForNetworkConnection requires the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement, but I cannot acquire it. I can't find this entitlement under 'Certificates, Identifiers & Profiles'. Any solutions? The error I get is:
Provisioning profile "iOS Team Provisioning Profile: ......" doesn't include the com.apple.developer.avfoundation.video-data-output-prepares-cellular-radio-for-machine-readable-code-scanning entitlement.
On an iPhone running iOS 26 beta 5, url(for: FilePath("subdir/asset.mov")) almost always throws this error:
The URL for “subdir/asset.mov” couldn’t be retrieved: “asset.mov” couldn’t be copied to “subdir” because an item with the same name already exists.
Yet, contents(at: FilePath("subdir/asset.mov")) always returns Data for a playable AVMovie.
How can I avoid this url(for:) error?
The asset pack in question is downloaded. The error persists even after pack deletion, redownload, relaunch, and combinations of that.
// Assets repo root
subdir.aar
subdir/asset.mov
subdir/asset_thumb.heic
subdir/Manifest.json
// Manifest.json
{
    "assetPackID": "subdir",
    "downloadPolicy": {
        "onDemand": {}
    },
    "fileSelectors": [
        {
            "directory": "subdir"
        }
    ],
    "platforms": [
        "iOS",
        "visionOS"
    ]
}
xcrun ba-package subdir/Manifest.json -o subdir.aar
xcrun ba-serve --host 192.168.0.10 -p 443 subdir.aar
The documentation for the Apple Music API indicates that the genreNames field for a given artist (see https://developer.apple.com/documentation/applemusicapi/artists/attributes-data.dictionary) is an array of strings. However, it appears that only ONE SINGLE GENRE is returned per artist, regardless of how many genres might be attached to that artist's albums.
Am I missing something? Is there an artist where multiple genres may be returned, or is this a bug in the documentation?
Hello everyone,
I’m working on an iOS app that fetches videos from the "Recently Deleted" album using the Photos framework in Swift. However, I’m unable to fetch any videos, even though the "Recently Deleted" album contains 233 items (including videos), as seen in the Photos app.
Environment:
iOS Version: 18.3.1
Xcode Version: 16.2
Swift Version: Swift 5
Device: iPhone (simulator and physical device both tested)
Photo Library Permission: "All Photos" access granted
Recently Deleted Lock: Face ID/Passcode is disabled for "Recently Deleted"
The presentation "create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says " If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used".
Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual.
For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup.
HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boilerplate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier: a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation are much easier.
The dext works just fine, as far as we can ascertain, just as well as a HAL plugin.
So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
I'm developing an iOS app that requires continuous audio recording.
Currently, when a phone call comes in, the AVAudioSession is interrupted and recording stops completely during the ringing phase.
While I understand recording should stop if the call is answered, my app needs to continue recording while the phone is merely ringing.
I've observed that Apple's Voice Memos app maintains recording during incoming call rings. This indicates the hardware and iOS are capable of supporting this functionality.
Request
Please advise on any available AVAudioSession configurations or APIs that would allow my app to:
Continue recording during an incoming call ring
Only stop recording if/when the call is actually answered
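For reference, the standard interruption handling I have today (sketched below) only lets me resume after the interruption ends, which is too late for this use case; restartRecording() is a placeholder for my recorder:
import AVFoundation
// Sketch: observe interruptions and resume afterwards. The mic input is already
// interrupted by the system the moment .began arrives (i.e. when the call rings).
let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let typeValue = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
    switch type {
    case .began:
        break // recording has already been stopped here
    case .ended:
        let optionsValue = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            try? AVAudioSession.sharedInstance().setActive(true)
            // restartRecording() // placeholder
        }
    @unknown default:
        break
    }
}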
Impact
This interruption significantly impacts the user experience and core functionality of my app. Workarounds like asking users to enable airplane mode are impractical and create a poor user experience.
Questions
Is there an approved way to maintain microphone access during call rings?
If not currently possible, could this capability be considered for addition to a future iOS SDK?
Are there any interim solutions or best practices Apple recommends for this use case?
Thank you for your help.
TL;DR: How do I resolve a possible race between the EXT-X-SESSION-KEY request and the encrypted media segment requests?
I'm having trouble using a custom AVAssetResourceLoaderDelegate with a video manifest containing a Video Protection Key (VPK). My master manifest contains the rendition manifest URL and the VPK URL. When not using the custom resource loader delegate, everything works fine.
My custom resource loader delegate is implemented so that it first appends a prefix to the scheme of the master manifest URL before creating the asset. While handling the master manifest, it puts back the original scheme, makes the request, and then modifies the scheme of the rendition manifest URL in the response content by appending the same prefix again, so that the rendition manifest request also goes through the custom resource loader delegate. The same goes for the VPK request. The AES-128 key is stored in memory within the custom resource loader delegate object. So far so good.
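Roughly, the delegate looks like the sketch below (simplified, not the actual implementation; the "custom" scheme prefix, the plain URLSession fetch, and the manifest rewriting are placeholders):
import AVFoundation
// Simplified sketch of the scheme-prefix routing. Real code rewrites nested
// manifest/key URLs and caches the AES-128 key; those parts are elided here.
final class ManifestResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource request: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = request.request.url,
              var components = URLComponents(url: url, resolvingAgainstBaseURL: false),
              components.scheme?.hasPrefix("custom") == true else {
            return false // not ours; let the system handle it
        }
        // Put the original scheme back and fetch the real resource.
        components.scheme = "https"
        URLSession.shared.dataTask(with: components.url!) { data, _, error in
            guard let data = data, error == nil else {
                request.finishLoading(with: error)
                return
            }
            // Manifests: rewrite child URLs to the custom scheme before responding.
            // Keys and segments: respond with the raw data.
            request.dataRequest?.respond(with: data)
            request.finishLoading()
        }.resume()
        return true
    }
}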
The VPK is requested before the segment requests. But the problem comes when the media segment requests happen. The media segment URLs from the rendition manifest also go through the custom resource loader, and those segments are encrypted. I can see the segment requests finish first, and then the related VPK requests kick in a few seconds later. The previous VPK value is cached in memory, so it is not the network causing the delay but some mechanism that I'm not aware of.
So could anyone tell me the proper way of handling this situation? The native loader handles it well, so I just want to know how. Thanks in advance!