I’ve built a custom media player using AVSampleBufferAudioRenderer and AVSampleBufferRenderSynchronizer, and overall, it works great!
However, I’ve noticed some unusual logs popping up:
Domain: NSOSStatusErrorDomain
Error Codes: -16384, -16155, -16512
That -16512 error keeps happening repeatedly for one of our users, preventing them from playing any media at all.
I’ve searched around but can’t find any documentation explaining what these errors mean.
Has anyone run into this issue or have any suggestions? Any help would be hugely appreciated!
Thanks!
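For context, a player of this kind is typically wired roughly as follows; this is a minimal illustrative sketch, not the poster's code (the nextAudioSampleBuffer() helper is a placeholder for the app's own sample source):
import AVFoundation

// Illustrative sketch of a sample-buffer-based audio player.
let synchronizer = AVSampleBufferRenderSynchronizer()
let audioRenderer = AVSampleBufferAudioRenderer()
synchronizer.addRenderer(audioRenderer)

// Placeholder for the app's demuxer/decoder output (hypothetical helper).
func nextAudioSampleBuffer() -> CMSampleBuffer? { nil }

audioRenderer.requestMediaDataWhenReady(on: DispatchQueue(label: "audio.enqueue")) {
    while audioRenderer.isReadyForMoreMediaData {
        guard let buffer = nextAudioSampleBuffer() else { return }
        audioRenderer.enqueue(buffer)
    }
}
synchronizer.setRate(1.0, time: .zero)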
I can successfully retrieve strings, arrays, and other data through a custom AudioObjectPropertySelector, but only as fixed return values. Whenever I modify it to return dynamic data, I get an error. Below is my code.
case kPlugIn_CustomPropertyID:
{
    *((CFStringRef*)outData) = CFSTR("qin@@@123");
    *outDataSize = sizeof(CFStringRef);
}
break;

case kPlugIn_ContainDic:
{
    CFMutableDictionaryRef mutableDic1 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                                   0,
                                                                   &kCFTypeDictionaryKeyCallBacks,
                                                                   &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(mutableDic1, CFSTR("xingming"), CFSTR("qinmu"));
    *((CFDictionaryRef*)outData) = mutableDic1;
    *outDataSize = sizeof(CFPropertyListRef);
    // *((CFPropertyListRef*)outData) = mutableDic;
}
break;

case kPlugIn_ContainArray:
{
    CFMutableArrayRef mutableArray = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
    CFArrayAppendValue(mutableArray, CFSTR("Hello"));
    CFArrayAppendValue(mutableArray, CFSTR("World"));
    *((CFArrayRef*)outData) = mutableArray;
    *outDataSize = sizeof(CFArrayRef);
}
break;
These are fixed return values, and there are no issues when I retrieve the data.
When I change the return data in kPlugIn_ContainDic to the following, the first retrieval after restarting the CoreAudio service works fine. However, when I attempt to retrieve it again, it results in an error:
case kPlugIn_ContainDic:
{
    *outDataSize = sizeof(CFPropertyListRef);
    *((CFPropertyListRef*)outData) = mutableDic;
}
break;
Error log:
HALC_ShellDevice::CreateIOContextDescription: failed to get a description from the server
HAL_HardwarePlugIn_ObjectGetPropertyData: no object
HALPlugIn::ObjectGetPropertyData: got an error from the plug-in routine, Error: 560947818 (!obj)
The declaration and usage of mutableDic are as follows:
static CFMutableDictionaryRef mutableDic;

static OSStatus BlackHole_Initialize(AudioServerPlugInDriverRef inDriver, AudioServerPlugInHostRef inHost)
{
    OSStatus theAnswer = 0;
    gPlugIn_Host = inHost;
    if (mutableDic == NULL) {
        mutableDic = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                               100,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    }
    return theAnswer;
}
static OSStatus BlackHole_AddDeviceClient(AudioServerPlugInDriverRef inDriver, AudioObjectID inDeviceObjectID, const AudioServerPlugInClientInfo* inClientInfo)
{
    CFStringRef string = CFStringCreateWithFormat(kCFAllocatorDefault, NULL, CFSTR("%u"), inClientInfo->mClientID);
    CFMutableDictionaryRef dic = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                           0,
                                                           &kCFTypeDictionaryKeyCallBacks,
                                                           &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dic, CFSTR("clientID"), string);
    CFDictionarySetValue(dic, CFSTR("bundleID"), inClientInfo->mBundleID);
    CFDictionarySetValue(mutableDic, string, dic);
    return 0;
}
Can someone tell me why this happens?
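One thing worth checking here, offered as an assumption rather than a confirmed diagnosis: for CF-typed property data, the HAL releases the object the plug-in returns from GetPropertyData, so handing back the static mutableDic without retaining it leaves the static pointing at a freed object after the first query (the fixed-return cases either return constant CFSTRs or create a fresh object on every call, which is why they keep working). A minimal sketch of returning an owned copy instead:
case kPlugIn_ContainDic:
{
    // Give the HAL its own copy to release; the static mutableDic stays valid.
    *((CFDictionaryRef*)outData) = CFDictionaryCreateCopy(kCFAllocatorDefault, mutableDic);
    *outDataSize = sizeof(CFDictionaryRef);
}
break;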
I'm using an AUGraph to mix audio from different sources for a real time streaming application.
Whenever the audio device used as the graph's output device is also the Mac's default output device, the measured latency increases by about 35 milliseconds for wired devices.
Any idea why this is?
Is there a way around this besides nagging the user not to use the system output device in our app?
Topic: Media Technologies, SubTopic: Audio
I’m working on a memo app that records audio from the iPhone’s microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I’ve encountered limitations with installTap in AVAudioEngine, which doesn’t natively support configuring a target sample rate on the mic input (the default being 44.1 kHz).
To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded—just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with downsampling rather than the mic setup itself.
To make progress, I implemented a workaround that resamples each chunk delivered by installTap (every 50 ms in my case) with AVAudioConverter. While this works, it can introduce artifacts at the beginning and end of each chunk, likely because each chunk is processed separately instead of being downsampled continuously.
Here are the key issues and questions I have:
1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting up the desired sample rate initially would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate or for continuously downsampling the live input without chunk-based artifacts?
This issue seems longstanding, as noted in a 2018 forum post:
https://forums.developer.apple.com/forums/thread/111726
Any guidance on configuring or processing mic input at a lower sample rate in real-time would be greatly appreciated. Thank you!
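On question 2, one direction that may avoid the per-chunk artifacts, offered as a sketch rather than a verified fix: keep a single AVAudioConverter alive across tap callbacks so the resampler's internal state carries over between buffers (the class name and buffer sizes below are illustrative).
import AVFoundation

// Sketch: one long-lived converter shared by every tap callback.
final class MicDownsampler {
    private let engine = AVAudioEngine()
    private var converter: AVAudioConverter?
    private let targetFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                             sampleRate: 16_000,
                                             channels: 1,
                                             interleaved: false)!

    func start(onChunk: @escaping (AVAudioPCMBuffer) -> Void) throws {
        let input = engine.inputNode
        let inputFormat = input.outputFormat(forBus: 0) // typically 44.1 or 48 kHz
        converter = AVAudioConverter(from: inputFormat, to: targetFormat)

        input.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { [weak self] buffer, _ in
            guard let self, let converter = self.converter else { return }
            let ratio = self.targetFormat.sampleRate / inputFormat.sampleRate
            let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 16
            guard let out = AVAudioPCMBuffer(pcmFormat: self.targetFormat, frameCapacity: capacity) else { return }

            var delivered = false
            _ = converter.convert(to: out, error: nil) { _, outStatus in
                if delivered {
                    outStatus.pointee = .noDataNow // feed the next buffer on the next tap callback
                    return nil
                }
                delivered = true
                outStatus.pointee = .haveData
                return buffer
            }
            if out.frameLength > 0 { onChunk(out) }
        }
        try engine.start()
    }
}
On question 1, AVAudioSession's setPreferredSampleRate(_:) is only a preference; the input node still reports whatever rate the hardware actually uses, so 16 kHz may not be honored and software conversion like the above is typically still needed.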
I have the following code in my ObservableObject class, and recently Xcode started flagging purple runtime issues for it (probably since iOS 18):
Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.
Here is the code:
@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1: Performing I/O on the main thread can cause slow launches.
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus

    super.init()

    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}
The same kind of issue also appears in the route-change notification handler for AVAudioSession.routeChangeNotification:
//Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)
I wonder how checking authorisation status can give these issues? What is the fix here?
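One possible way to quiet these diagnostics, as a sketch under the assumption that the statuses do not need to be available synchronously in init: keep init lightweight and read the statuses off the main thread, publishing the results back on the main actor.
import AVFoundation
import Photos

// Sketch: read authorization statuses off the main thread, then publish on the main actor.
@MainActor
final class PermissionsModel: ObservableObject {
    @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var micAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined

    // Call from SwiftUI, e.g. .task { await model.refreshAuthorizationStatuses() }
    func refreshAuthorizationStatuses() async {
        let (camera, mic, photos) = await Task.detached(priority: .utility) {
            (AVCaptureDevice.authorizationStatus(for: .video),
             AVCaptureDevice.authorizationStatus(for: .audio),
             PHPhotoLibrary.authorizationStatus(for: .addOnly))
        }.value
        cameraAuthorization = camera
        micAuthorization = mic
        photoLibAuthorization = photos
    }
}
The CLLocationManager setup and the AVAudioSession.sharedInstance().category check in the route-change handler can be deferred off the main thread in the same way.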
Topic: Media Technologies, SubTopic: Photos & Camera
Tags: Core Location, AVFoundation, Concurrency, Xcode Sanitizers and Runtime Issues
I noticed the following behavior with CallKit when receiving a VoIP push notification:
When the app is in the foreground and a CallKit incoming call banner appears, pressing the answer button directly causes the speaker indicator in the CallKit interface to turn on. However, the audio is not actually activated (the iPhone's orange microphone indicator does not light up).
In the same foreground scenario, if I expand the CallKit banner before answering the call, the speaker indicator does not turn on, but the orange microphone indicator does light up, and audio works as expected.
When the app is in the background or not running, the incoming call banner works as expected: I can answer the call directly without expanding the banner, and the speaker does not turn on automatically. The orange microphone indicator lights up as it should.
Why is there a difference in behavior between answering directly from the banner versus expanding it first when the app is in the foreground? Is there a way to ensure consistent audio activation behavior across these scenarios?
I tried reconfiguring the audio when answering a call, but an error occurred during setActive, preventing the configuration from succeeding.
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try audioSession.setActive(true, options: [])
    } catch {
        print("Failed to activate audio session: \(error)")
    }
    action.fulfill()
}
Error Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
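For comparison, the flow CallKit generally expects, shown here as a sketch of a typical CXProviderDelegate with assumptions about the rest of the app: configure the session's category when answering, but let CallKit activate the session, and start audio I/O only in provider(_:didActivate:). Calling setActive yourself at answer time can fail because CallKit manages session activation for the call.
import AVFoundation
import CallKit

// Sketch of a typical CXProviderDelegate audio flow (assumed structure, not the poster's code).
final class CallDelegate: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) { }

    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        let session = AVAudioSession.sharedInstance()
        do {
            // Configure only; CallKit activates the session on our behalf.
            try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
        } catch {
            print("Audio session configuration failed: \(error)")
        }
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Safe point to start the audio engine / voice-processing I/O.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Stop audio I/O here.
    }
}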
I have an iPhone 14 Pro Max, and after updating to iOS 18 I have a sound problem with my AirPods: I can't hear sound properly. I'm a PUBG gamer, and since the update I can't tell right/left/front/back sounds apart. Can anyone help me, please?
Topic: Media Technologies, SubTopic: Audio
When is playbackBufferEmpty triggered? Why does the observer method see this property as YES when returning to the foreground from the background, even if the buffer is not empty?
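For reference, isPlaybackBufferEmpty is a KVO-observable AVPlayerItem property that reports whether playback has consumed all buffered media; a minimal observation sketch (illustrative only):
import AVFoundation

// Minimal sketch: observe isPlaybackBufferEmpty on an AVPlayerItem.
var bufferObservation: NSKeyValueObservation?

func observeBuffer(of item: AVPlayerItem) {
    bufferObservation = item.observe(\.isPlaybackBufferEmpty, options: [.initial, .new]) { item, _ in
        print("isPlaybackBufferEmpty = \(item.isPlaybackBufferEmpty)")
    }
}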
Topic: Media Technologies, SubTopic: Video
I'm trying to setup a listener for kAudioProcessPropertyIsRunningOutput but it's never triggered. I get calls for kAudioProcessPropertyIsRunning and kAudioProcessPropertyDevices but not for kAudioProcessPropertyIsRunningInput or kAudioProcessPropertyIsRunningOutput.
class MyDelegate: PropertyListenerDelegate {
    func propertiesChanged(properties: [AudioObjectPropertyAddress]) {
        print(properties)
    }
}

var myDelegate = MyDelegate()
var processes = try AudioHardwareSystem.shared.processes
for process in processes {
    process.delegates += [myDelegate]
    try process.addListener(forProperties: [AudioObjectPropertyAddress(mSelector: kAudioPropertyWildcardPropertyID,
                                                                       mScope: kAudioObjectPropertyScopeWildcard,
                                                                       mElement: kAudioObjectPropertyElementWildcard)])
}
Xcode 16.1
macOS 15.0.1
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping.
I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

            if (oYSize <= 672) {
                // Balance the lock before bailing out.
                CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                return;
            }

            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);

            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}
Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
After updating to the iOS 18.1 beta, I have a lot of issues with Apple CarPlay, as follows.
Audio quality is really bad (it plays through the voice channel instead of the media channel).
Sometimes it plays through the phone speakers even though CarPlay is connected via cable.
It doesn't work well with the car's steering-wheel controls, such as volume up/down and the skip button.
After receiving a phone call, it goes back to the original audio quality, but when my phone screen locks, it goes bad again.
I hope these problems get fixed ASAP, as I love to play music while I drive.
Topic: Media Technologies, SubTopic: Audio
Hi everyone,
I’m experiencing an issue where audio interruptions (e.g., phone calls) are not being intercepted while running sound classification in an app that uses the AVAudioSession. Classification works fine, but interruptions aren’t handled, even though I’ve followed Apple’s guidelines on handling audio interruptions [1_Document].
The classification was initially based on [2_Classifier], where it worked perfectly. However, when I adopted classification in a more camera-focused app using [3_Cam], the interruption behavior stopped working. The classification setup is functioning with [3_Cam], but audio interruptions are not triggered.
The listener is invoked before starting sound analysis as suggested in [2_Classifier].
startListeningForAudioSessionInterruptions()
try startAnalyzing([(request, observer)])
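For reference, startListeningForAudioSessionInterruptions() is not shown in the post; the usual pattern it presumably wraps looks like this sketch:
import AVFoundation

// Assumed shape of startListeningForAudioSessionInterruptions() (not the app's actual code).
var interruptionObserver: NSObjectProtocol?

func startListeningForAudioSessionInterruptions() {
    interruptionObserver = NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard
            let info = notification.userInfo,
            let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
            let type = AVAudioSession.InterruptionType(rawValue: rawType)
        else { return }
        switch type {
        case .began:
            // Pause or tear down the sound analysis here.
            break
        case .ended:
            // Optionally resume, checking the .shouldResume option.
            break
        @unknown default:
            break
        }
    }
}
If the camera-focused app drives audio through an AVCaptureSession, it may also be worth observing the AVCaptureSessionWasInterrupted notification, since interruptions can surface there rather than through the shared AVAudioSession.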
FYI, one change I have made for classification is the following; it works fine in all cases.
// try audioSession.setCategory(.record, mode: .default)
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
I suspect the issue might be related to the AVAudioSession configuration or how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions?
Any suggestions or guidance would be greatly appreciated!
Platform: Swift 5, Xcode 16, iOS 18.
References:
1. Document
2. Classifier
3. Cam
Best Regards
Hello Apple Engineers,
Specific Issue:
I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format using the iPhone's internal storage. Here's what I have tried so far:
I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and ProRes codec.
The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log.
However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."
This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding.
Here are my questions:
Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example: iPhone 15 Pro Max)?
There are some third-party apps (e.g., Blackmagic 👍🏻) that can save 4K60 ProRes Log video to iPhone internal storage. If internal saving is supported, what additional configuration of the AVCaptureSession, or what other technique, is needed to bypass this limitation?
If anyone has successfully saved 4K60 ProRes Log video on iPhone internal storage, your guidance would be highly appreciated.
Thank you for your help!
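One direction that may be worth evaluating, offered as an assumption rather than a confirmed workaround: capture frames with AVCaptureVideoDataOutput and write them yourself with AVAssetWriter instead of AVCaptureMovieFileOutput; whether a given device sustains 4K60 ProRes this way still has to be verified on hardware. A sketch of the writer-side settings (width/height are placeholders):
import AVFoundation

// Sketch: an AVAssetWriter video input configured for ProRes.
func makeProResWriterInput(width: Int, height: Int) -> AVAssetWriterInput {
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.proRes422,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true
    return input
}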
I tried running the latest CaptureSample, selected local/canonical HDR, added it to the screen output, then stopped the capture. It is supposed to save the file, but no file is added at all.
It works for SDR, just not for HDR. How do I get HDR working?
I use navigator.mediaDevices.getUserMedia to start the camera. When I use the constraint {video: {facingMode: {exact: 'user'}}}, an "invalid constraint" error occurs.
Hello,
I am a deaf-blind wheelchair user, and I program in Swift using a braille display.
I’m reaching out for your help on an issue I’ve been struggling to solve.
Basically, when I extract a CMSampleBuffer from an AVAsset of a video, it comes with the Audio Format ID as Linear PCM. However, when I try to pass this CMSampleBuffer to write another video using AVAssetWriter, the video ends up muted.
The audio settings of the output video are configured to MPEG-4 AAC, but the input CMSampleBuffer has the Audio Format ID as Linear PCM.
I would like to request an extension for CMSampleBuffer that converts Linear PCM audio to MPEG-4 AAC.
I’ve searched extensively and couldn’t find anything.
Looking forward to your help.
Thank you.
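One possible alternative to converting the CMSampleBuffers by hand (a sketch, assuming the Linear PCM buffers are appended to the writer input directly): AVAssetWriter can perform the compression itself if the audio input is created with AAC output settings, so the PCM buffers from the reader can be appended as-is. The sample rate, channel count, and bit rate below are illustrative, not taken from the post.
import AVFoundation

// Sketch: let AVAssetWriter compress Linear PCM to AAC via the input's output settings.
let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000
]
let audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
audioWriterInput.expectsMediaDataInRealTime = false
// Add audioWriterInput to the AVAssetWriter, then append the Linear PCM
// CMSampleBuffers from the asset reader inside requestMediaDataWhenReady(on:using:).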
Hello,
This is about the Get Catalog Top Charts Genres endpoint:
GET https://api.music.apple.com/v1/catalog/{storefront}/genres
I noticed that for some storefronts, no genres are returned. You can try with the following storefront values:
France (fr)
Poland (pl)
Kyrgyzstan (kg)
Uzbekistan (uz)
Turkmenistan (tm)
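For reference, the request being made is essentially the following (a sketch; the developer token is a placeholder):
import Foundation

// Reproduction sketch for the genres request (developerToken is a placeholder).
func fetchGenres(storefront: String, developerToken: String) async throws -> Data {
    let url = URL(string: "https://api.music.apple.com/v1/catalog/\(storefront)/genres")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    return data // the `data` array in the response comes back empty for the storefronts above
}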
Is that a bug, or is it on purpose?
Thank you.
Hi all, we are working on an iOS application that includes camera functionality. This week we received a few customer complaints regarding camera usage on the iPhone 16/16 Pro. Both customers said they have an issue with the camera preview: when the camera is open, the preview just freezes, but all other functionality and UI work as expected. Moreover, the issue happens only with the back camera; the front camera works perfectly.
We have tested on iOS 18 with the iPhone 14/15/15 Pro/15 Pro Max, and all of those devices work perfectly without any issues. So we assume the problem is not iOS 18 itself, but some breaking change in the new iPhone 16/16 Pro cameras that causes this effect. Unfortunately, we currently can't test directly on an iPhone 16/16 Pro since we don't have these devices.
We are using the SwiftUI framework; here is the implementation of the camera preview:
VideoPreviewLayer
final class CameraPreviewView: UIView {
    var previewLayer: AVCaptureVideoPreviewLayer {
        guard let layer = layer as? AVCaptureVideoPreviewLayer else {
            fatalError("Layer expected is of type VideoPreviewLayer")
        }
        return layer
    }

    var session: AVCaptureSession? {
        get { return previewLayer.session }
        set { previewLayer.session = newValue }
    }

    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
}
UIKit -> SwiftUI
struct CameraRecordingView: UIViewRepresentable {
    @ObservedObject var cameraManager: CameraManager

    func makeUIView(context: Context) -> CameraPreviewView {
        let previewView = CameraPreviewView()
        previewView.session = cameraManager.session /// AVCaptureSession
        previewView.previewLayer.videoGravity = .resizeAspectFill
        return previewView
    }

    func updateUIView(_ uiView: CameraPreviewView, context: Context) {
    }
}
Setup camera input
private func saveInput(input: AVCaptureDevice) {
    /// Where input is AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
    do {
        let cameraInput = try AVCaptureDeviceInput(device: input)
        if session.canAddInput(cameraInput) {
            session.addInput(cameraInput) /// session is AVCaptureSession
        } else {
            sendError(error: .cannotAddInput)
            status = .failed
        }
    } catch {
        if error.nsError.code == -11852 {
            sendError(error: .microphoneError)
        } else {
            sendError(error: .createCaptureInput(error))
        }
        status = .failed
    }
}
Does anybody have similar issues with the iPhone 16/16 Pro? We would appreciate any ideas on how to potentially resolve the issue.
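One diagnostic idea, not a fix and not specific to the iPhone 16: observe the capture session's runtime-error and interruption notifications so the freeze can be correlated with a concrete event.
import AVFoundation

// Sketch: log session-level events to see what coincides with the frozen preview.
var sessionObservers: [NSObjectProtocol] = []

func observeSessionEvents(for session: AVCaptureSession) {
    let center = NotificationCenter.default
    sessionObservers.append(center.addObserver(forName: .AVCaptureSessionRuntimeError,
                                               object: session, queue: .main) { note in
        print("Capture runtime error: \(String(describing: note.userInfo?[AVCaptureSessionErrorKey]))")
    })
    sessionObservers.append(center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                                               object: session, queue: .main) { note in
        print("Capture session interrupted: \(String(describing: note.userInfo))")
    })
    sessionObservers.append(center.addObserver(forName: .AVCaptureSessionInterruptionEnded,
                                               object: session, queue: .main) { _ in
        print("Capture session interruption ended")
    })
}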
How can it be that you still don't have the option to move photos into an album instead of just copying them? This is a bad joke, right? The entire Photos app is absolutely untidy and a nightmare for people who like order. I want car photos in the car album and vacation photos in the vacation album, without them also showing up in Recents. It can't be that difficult, can it?
Topic: Media Technologies, SubTopic: Photos & Camera
Title says it all.