When attempting to play a video using AVPlayer on iOS 18.0, I am encountering an error that does not occur on versions earlier than 18.0. Could you please advise what might be causing this issue?
These are the error codes:
Error Domain=AVFoundationErrorDomain Code=-11828
Error Domain=NSOSStatusErrorDomain Code=-12847 "(null)"
These are the response headers I retrieved for the URL using curl:
HTTP/1.1 200
Content-Disposition: inline;filename="sample.mp4"
Accept-Ranges: bytes
ETag: sample.mp4
Last-Modified: Tue, 20 Jan 1970 23:52:10 GMT
Expires: Mon, 07 Oct 2024 09:15:49 GMT
Content-Range: bytes 0-987561/987561
Content-Type: application/octet-stream
Content-Length: 987561
Date: Mon, 30 Sep 2024 09:15:49 GMT
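For reference, here is a minimal diagnostic sketch I can use to pull more detail out of the failure (a standard AVPlayer setup is assumed; videoURL stands for the URL curl'd above):

import AVFoundation

let item = AVPlayerItem(url: videoURL)
let player = AVPlayer(playerItem: item)
// Keep a strong reference to the observation while playing.
let observation = item.observe(\.status) { item, _ in
    if item.status == .failed {
        print("AVPlayerItem failed:", item.error ?? "unknown error")
        print("Error log events:", item.errorLog()?.events ?? [])
    }
}
player.play()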
Since the iOS 18 update, there's been a bug that always occurs when listening to music through Apple Music: while music is playing and the iPhone enters or exits standby mode, the music pauses by itself for one second. My setup is the same as before the update, when this problem didn't exist.
We are using the MediaPlayer API to provide CarPlay support. Starting in iOS 18 we are having issues updating the content list. The initial list of items will populate on a fresh instance, but soon thereafter an error shows up saying we are not entitled to "com.apple.mediaremote.external-artwork-validation". From that point onwards, no changes we make to our MPPlayableContentDataSource are reflected in CarPlay, even after restarting the device.
While the MediaPlayer API is marked as deprecated, we are still using it to provide CarPlay support going back to iOS 10.
Has anyone else run into this or have suggestions for workarounds?
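For reference, a rough sketch of the refresh path we exercise (myDataSource is a placeholder name for our MPPlayableContentDataSource implementation):

import MediaPlayer

let manager = MPPlayableContentManager.shared()
manager.dataSource = myDataSource   // assigned once at launch
// After the backing model changes:
manager.beginUpdates()
// ... mutate the model the data source reads from ...
manager.endUpdates()
manager.reloadData()                // on iOS 18, changes are no longer reflected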
When my CarPlay connects and tries to play music, it plays it through the vehicle's phone speakers. If I make a phone call and then hang up, it pushes the sound back to the vehicle's stereo speakers. Does anyone have a fix for this? It's very annoying. Sometimes it will just switch to the phone speakers and I have to repeat the process.
If a queue (ApplicationMusicPlayer.Queue) is set with both library and non-library (catalog) items, the queue will play only one kind of item (library or non-library) or will just stop playing when the next item is of a different kind.
Using both Xcode 16 beta 4 and Xcode 15.4.
The issue was present in iOS 17 and is not resolved as of iOS 18 beta 4.
FB14491999
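A minimal repro sketch (librarySong and catalogSong stand for two Song values, one fetched from the library and one from the catalog):

import MusicKit

let player = ApplicationMusicPlayer.shared
player.queue = ApplicationMusicPlayer.Queue(for: [librarySong, catalogSong])
try await player.play()
// Playback stops (or only one kind of item plays) when the next
// entry is of the other kind.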
AVAudioEngine and AVAudioSession
Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**).
Why? I want those who run into this issue to be able to find this post through search engines!
This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details.
If you're trying to figure out how to fix a crash, you may find a common fix in this post!
Is it possible to use AVAudioEngine and AVAudioSession together?
The answer is yes.
But you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do will take a lot of testing. I don't know how it is with an IDE, but with just the .app on an iPhone it will take some testing, or a lot of testing.
Something that helped me fix a crash was this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing
This example project by Apple uses both AVAudioEngine and AVAudioSession.
How can I fix AVAudioEngineImpl::Initialize(NSError**)?
I think this depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either.
I will mention common cases that I encountered though.
inputNode
https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode
You apparently need an inputNode: you need to access it, or else I think there won't be one. And if there isn't one, AVAudioEngine.start will most likely crash.
Per the documentation: "The audio engine creates a singleton on demand when first accessing this variable."
Accessing it up front has prevented this common issue for me.
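A minimal sketch of what I mean (the touch of inputNode before start is the important part):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode                // forces the engine to create the input node
let format = input.outputFormat(forBus: 0)  // e.g. needed later for installTap
engine.prepare()
try engine.start()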
.prepare deallocates and can cause a crash if you restart your AudioEngine
Another issue I faced was handling .prepare wrong. You don't strictly need .prepare, but if you use installTap or similar, I think you do.
Here is a common thing to note: if you had previously initialized inputNode, it could be gone after using .prepare.
You have to ensure you're accessing AVAudioEngine.inputNode (or whatever node you need) again before calling .start().
The voice-processing sample project does this by creating a managing controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready before .prepare and .start get called.
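A sketch of that pattern (EngineController and setupEngine() are my own names, not from the sample):

import AVFoundation

final class EngineController {
    private let engine = AVAudioEngine()

    func setupEngine() {
        // Re-access inputNode after any previous stop/prepare cycle.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.removeTap(onBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            // process the buffer
        }
    }

    func start() throws {
        setupEngine()                  // everything ready before prepare/start
        engine.prepare()
        guard !engine.isRunning else { return }
        try engine.start()
    }
}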
AVAudioSession's setCategory
You have to experiment with it. The crashes can be very weird: sometimes your app will only crash once, and then again only after you reinstall or relaunch it.
You are actually able to use .setActive and .setCategory with AVAudioEngine. Just do not call .setActive(false) before you've stopped the AudioEngine, as it will fail.
Sometimes I'd run into an issue with .setActive(true), so you really have to experiment with whether leaving that part out resolves the issue or not.
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
Experiment with it. But .multiRoute and .mixWithOthers have allowed me to use AVAudioEngine to make a test recording, and I can even switch the data sources and polar patterns without any issues.
Sometimes you can get away without calling .setActive at all. I'm not sure if AVAudioEngine does it automatically.
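Putting the ordering together, this sequence avoided failures for me (the category options may need experimenting, as said):

let session = AVAudioSession.sharedInstance()
try session.setCategory(.multiRoute, mode: .default, options: [.mixWithOthers])
try session.setActive(true)      // sometimes this line is the problem; experiment
engine.prepare()
try engine.start()
// ... record / play ...
engine.stop()
try session.setActive(false)     // only after the engine has stopped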
Short Summary
If you use .prepare and then .stop, make sure to re-access things like .inputNode before calling .prepare and .start again. (THIS CAN BE DIFFERENT for your setup.)
Only call .setActive(false) after you've used .stop. Otherwise, I believe it has no chance to stop properly.
AVAudioSession's setCategory is important. Use .multiRoute, or experiment with all the categories and modes.
If you manage to solve your crash, you'll indeed be able to change the data sources, polar patterns, and more!
Check isRunning before calling .start; this will save you from another crash. If you call .start while the engine is already running, I think try/catch won't save you here; you have to ensure you're not starting it twice.
I hope that this short breakdown will help you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes; I still need to figure out how to solve those. I have a lot of trouble getting my testing app onto my iPhone, so I am sorry if this guide didn't cover every detail.
A HUGE tip from me is to check the documentation. For example, when I read the documentation for inputNode, I learned why my app crashed: it's because I never accessed and initialized one.
The Developer Documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you access if you believe it causes issues. I also recommend finding example projects like the voice-processing one, as there aren't any code examples in the documentation.
On Xcode 16 (16A242) app execution and UI will stall / lag as soon as an AVAudioPlayer is initialized.
let audioPlayer = try AVAudioPlayer(contentsOf: url)   // url: the sound file's URL
audioPlayer.volume = 1.0
audioPlayer.delegate = self
audioPlayer.prepareToPlay()
Typically you would not notice this in a music app, for example, but it is especially noticeable in games where multiple sounds are played using multiple instances of AVAudioPlayer. The entire app slows down because of it.
This is similar to this issue from last year.
I have reported it to Apple in FB15144369, as this messes up my production games where fps goes down to nothing when sounds are enabled.
Unfortunately I cannot find a solution. Anyone?
A couple of weeks ago I got help here to play one song, and the solution to my problem was that I wasn't adding the song (Track type) to the queue correctly. Now I want to be able to add a playlist's worth of songs to the queue. The problem is that when I try to add an array of the Track type, I get an error. The other part of this issue for me is: how do I access an individual song off of the queue after I add it? I see I can do ApplicationMusicPlayer.shared.queue.currentItem, but I think I'm missing/misunderstanding something here. Anyway, I'll post the code I have to show how I'm attempting to do this at the moment.
In this scenario we're getting passed in a playlist from another view.
import SwiftUI
import MusicKit

struct PlayBackView: View {
    @State var song: Track?
    @State private var songs: [Track] = []
    @State var playlist: Playlist

    private let player = ApplicationMusicPlayer.shared

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Song Title Not Found")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)
                }
            }
        }
        .padding()
        .task {
            await loadTracks()

            // It's here I thought I could do something like this
            // (this is where I get the error):
            // player.queue = tracks
            // Since I can do this with one singular track:
            // player.queue = [song]

            do {
                try await player.queue.insert(songs, position: .afterCurrentEntry)
            } catch {
                print(error.localizedDescription)
            }
        }
    }

    @MainActor
    private func loadTracks() async {
        do {
            let detailedPlaylist = try await playlist.with([.tracks])
            let tracks = detailedPlaylist.tracks ?? []
            setTracks(tracks)
        } catch {
            print(error.localizedDescription)
        }
    }

    @MainActor
    private func setTracks(_ tracks: MusicItemCollection<Track>) {
        songs = Array(tracks)
    }
}
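One pattern I've been eyeing (unverified, my assumption of how it should work) is building the queue from the whole collection via its initializer, then reading entries back off the queue:

// Hand the loaded tracks to the player in one go; `songs` is the
// [Track] array loaded in loadTracks() above.
player.queue = ApplicationMusicPlayer.Queue(for: songs)
try await player.play()

// Individual items then come back as queue entries:
let current = player.queue.currentEntry?.title ?? "nothing playing"
let allEntries = player.queue.entries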
If I have a Bluetooth speaker connected and installTap called on the input node, the callback fires for 1-2 seconds and then stops. I don't see any route change or other notification handler called in between.
engine.inputNode.removeTap(onBus: 0)
engine.inputNode.installTap(
    onBus: 0,
    bufferSize: 4096,
    format: format
) { buffer, _ in
    guard let channelData = buffer.floatChannelData else {
        return
    }
    // This callback stops firing after some time.
}
Not sure if this is expected, but some other applications seem to work fine.
If I disconnect the Bluetooth device, my input works fine.
Also, I have no issues with output on the speaker.
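For debugging, this is how I log route changes (standard notification API) to confirm whether any fire when the tap stops:

NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    let reason = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
    print("Route change, reason:", reason ?? 0)
    print("Current route:", AVAudioSession.sharedInstance().currentRoute)
}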
Is anyone experiencing an issue with initial ‘AVAudioSession.sharedInstance().outputVolume’ returning 0 after updating to iOS 18.0?
I'm observing the outputVolume of AVAudioSession, and when I adjust the device volume, the changed value is reported correctly. However, when I call AVAudioSession.sharedInstance().outputVolume to get the current volume before any change has been observed, it returns 0, even though the device volume is not actually 0.
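A minimal sketch of the observation I'm describing (the KVO change values arrive correctly; it's the initial read that returns 0 on iOS 18.0):

let session = AVAudioSession.sharedInstance()
try? session.setActive(true)
print("initial:", session.outputVolume)   // 0 on iOS 18.0, non-zero on earlier versions
// Keep a strong reference to the observation token.
let token = session.observe(\.outputVolume, options: [.new]) { _, change in
    print("changed:", change.newValue ?? 0)
}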
I create an audio unit with AudioComponentInstanceNew, with componentType = kAudioUnitType_FormatConverter and componentSubType = kAudioUnitSubType_NewTimePitch, and connect it between input and output. It works well before iOS 18, but it doesn't work on iOS 18: the call to AudioUnitRender for the mic returns -10863 on iOS 18. Is there anything I missed?
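For reference, a sketch of the setup described (the same C calls in Swift syntax; error handling omitted):

var desc = AudioComponentDescription(
    componentType: kAudioUnitType_FormatConverter,
    componentSubType: kAudioUnitSubType_NewTimePitch,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)
var timePitchUnit: AudioComponentInstance?
if let component = AudioComponentFindNext(nil, &desc) {
    let status = AudioComponentInstanceNew(component, &timePitchUnit)
    // Creation succeeds; it's the later AudioUnitRender for the mic that
    // returns -10863 (kAudioUnitErr_CannotDoInCurrentContext) on iOS 18.
}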
Dear Sirs,
I've written an audio driver based on IOUserAudioDevice. In my IOOperationHandler I can receive and send the audio samples as expected. Is there any way to configure the number of samples transferred in each call? Currently it seems to be around 512 samples per call, which corresponds to 10.7 ms at a 48 kHz sample rate. I'd like to achieve something like 48 or 96 samples per call. I did some experiments and tried calls to SetOutputLatency() etc., but so far I haven't found the right way to change the in_io_buffer_frame_size in the callback. I'd like to do this because smaller buffer sizes would allow lower latencies for the subsequent audio processing.
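For what it's worth, the only related knob I know of is on the host side, and I haven't verified that in_io_buffer_frame_size follows it; a sketch of that assumption:

// Untested: ask the HAL for a smaller IO buffer on the device from a
// client app; deviceID is the AudioObjectID of the driver's device.
var frames: UInt32 = 96
var address = AudioObjectPropertyAddress(
    mSelector: kAudioDevicePropertyBufferFrameSize,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)
let status = AudioObjectSetPropertyData(deviceID, &address, 0, nil,
                                        UInt32(MemoryLayout<UInt32>.size), &frames)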
Thanks and best regards,
Johannes
I have an app on which users learn choreography. To avoid copyright infringements we currently only have audio instructions and no music on the app.
Could we enable those who are subscribed to Apple Music to listen to the part of a song that corresponds to the choreography? Usually these are 60 seconds long.
The app is in React Native. Would it be possible to implement it so that opening a dance video automatically triggers the playback of that song from, e.g., second 32 to 95?
Since the video is looping, could it then start playing from second 32 again?
I'm also looking for devs with experience integrating MusicKit for this use case, if it turns out to be possible.
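If it helps the discussion, my rough sketch of the native side would look like this (Swift; the React Native bridging is not shown, the catalog ID and the 32-95 s window are example values, and I haven't verified the seek-and-loop part):

import MusicKit

let request = MusicCatalogResourceRequest<Song>(matching: \.id,
                                                equalTo: MusicItemID("1234567890")) // hypothetical ID
guard let song = try await request.response().items.first else { return }

let player = ApplicationMusicPlayer.shared
player.queue = ApplicationMusicPlayer.Queue(for: [song])
try await player.play()
player.playbackTime = 32                 // jump to the segment start

// Called periodically (e.g. from a timer synced with the looping video):
if player.playbackTime >= 95 {
    player.playbackTime = 32             // loop back with the video
}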
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release.
The following is also printed to the Xcode 16 console:
My app occasionally goes completely silent, but I hear it briefly when I use the buttons to adjust the volume.
Can someone help me, please?
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronizing two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded lpcm-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
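For reference, a minimal sketch of that setup (firstSamplePTS stands for the first decoded image's presentation timestamp):

let synchronizer = AVSampleBufferRenderSynchronizer()
let videoLayer = AVSampleBufferDisplayLayer()
let audioRenderer = AVSampleBufferAudioRenderer()
synchronizer.addRenderer(videoLayer)        // video renderer
synchronizer.addRenderer(audioRenderer)     // audio renderer
// ... decode and enqueue CMSampleBuffers on both renderers ...
// Once the first video frame (in presentation order) is enqueued:
synchronizer.setRate(1, time: firstSamplePTS)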
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However on some network streams, on iOS, the audio and video aren't synchronized, with a time difference that seems to increase with time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read, about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
So, any help / hints on this sync problem will be greatly appreciated! :)
After upgrading to iOS 18 CarPlay with 2023 Lexus and iPhone 15 Pro Max shows multiple issues:
• speakers reduced to Mono sound (going back to normal after some minutes and then reducing again)
• no speaker sound at all
• touching / moving phone while driving resulting in “on and off” sound
Neither a reboot nor a shutdown helps, and a cable connection doesn't work either.
@Apple: do you test your software professionally, or is this outsourced to the community? It doesn't look at all like a professional approach.
Please solve this dangerous (traffic!) and annoying issue ASAP!
Thanks - Torsten
Hello everyone,
I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device, using Audio Units. On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I'm discovering that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, not with 20 channels.
To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I'm probing, I'm getting different results. When I probe the stream format, I'm told that there is 1 channel. But when I probe the input audio unit, I'm told that there are 2 input channels.
Here's my program to demonstrate the issue:
// InputDeviceChannels.m
// Compile with:
//   clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
//   Device Name: MacBook Pro Microphone
//   Number of Channels (Stream Format): 1
//   Number of Elements (Element Count): 2

#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>

void printDeviceInfo(AudioUnit audioUnit) {
  UInt32 size;
  OSStatus err;

  AudioStreamBasicDescription streamFormat;
  size = sizeof(streamFormat);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1,
                             &streamFormat, &size);
  if (err != noErr) {
    printf("Error getting stream format\n");
    exit(1);
  }
  int numChannels = streamFormat.mChannelsPerFrame;

  UInt32 elementCount;
  size = sizeof(elementCount);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0,
                             &elementCount, &size);
  if (err != noErr) {
    printf("Error getting element count\n");
    exit(1);
  }

  printf("Number of Channels (Stream Format): %d\n", numChannels);
  printf("Number of Elements (Element Count): %d\n", elementCount);
}

void printDeviceName(AudioDeviceID deviceID) {
  UInt32 size;
  OSStatus err;
  CFStringRef deviceName = NULL;
  size = sizeof(deviceName);
  err = AudioObjectGetPropertyData(
      deviceID,
      &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                    kAudioObjectPropertyScopeGlobal,
                                    kAudioObjectPropertyElementMain},
      0, NULL, &size, &deviceName);
  if (err != noErr) {
    printf("Error getting device name\n");
    exit(1);
  }

  char deviceNameStr[256];
  if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
                          kCFStringEncodingUTF8)) {
    printf("Error converting device name to C string\n");
    exit(1);
  }
  CFRelease(deviceName);
  printf("Device Name: %s\n", deviceNameStr);
}

int main(int argc, const char *argv[]) {
  @autoreleasepool {
    OSStatus err;

    // Get the default input device ID
    AudioDeviceID input_device_id = kAudioObjectUnknown;
    {
      UInt32 property_size = sizeof(input_device_id);
      AudioObjectPropertyAddress input_device_property = {
          kAudioHardwarePropertyDefaultInputDevice,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain,
      };
      err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL,
                                       &property_size, &input_device_id);
      if (err != noErr || input_device_id == kAudioObjectUnknown) {
        printf("Error getting default input device ID\n");
        exit(1);
      }
    }

    // Print the device name using the input device ID
    printDeviceName(input_device_id);

    // Open audio unit for the input device
    AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
                                      kAudioUnitManufacturer_Apple, 0, 0};
    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit audioUnit;
    err = AudioComponentInstanceNew(component, &audioUnit);
    if (err != noErr) {
      printf("Error creating AudioUnit\n");
      exit(1);
    }

    // Enable IO for input on the AudioUnit and disable output
    UInt32 enableInput = 1;
    UInt32 disableOutput = 0;
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                               1, &enableInput, sizeof(enableInput));
    if (err != noErr) {
      printf("Error enabling input on AudioUnit\n");
      exit(1);
    }
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                               0, &disableOutput, sizeof(disableOutput));
    if (err != noErr) {
      printf("Error disabling output on AudioUnit\n");
      exit(1);
    }

    // Set the current device to the input device
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
    if (err != noErr) {
      printf("Error setting device for AudioUnit\n");
      exit(1);
    }

    // Initialize AudioUnit
    err = AudioUnitInitialize(audioUnit);
    if (err != noErr) {
      printf("Error initializing AudioUnit\n");
      exit(1);
    }

    // Print device info
    printDeviceInfo(audioUnit);

    // Clean up
    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);
  }
  return 0;
}
It prints:
Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2
I tried to set the number of channels to 1 on the input unit, but it didn't change anything: after calling setNumberOfChannels(1, audioUnit), I'm still getting the same output.
Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.
Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus.
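Regarding Note 2, my reading of TN2091 is that the input channel map is set on the output scope of element 1, with one entry per channel the unit should deliver; in Swift syntax, my attempt looked roughly like this (the scope/element choice is my assumption, and this is what returns -50 for me):

// One delivered channel, fed from device channel 0.
var channelMap: [Int32] = [0]
let status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_ChannelMap,
                                  kAudioUnitScope_Output,
                                  1,
                                  &channelMap,
                                  UInt32(MemoryLayout<Int32>.size * channelMap.count))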
How should I understand this? Is it that the audio unit is an abstraction layer that automatically converts mono input into stereo? Can I ask AUHAL to provide the same number of input channels that the audio device has?
Hello,
I'm getting an unknown, never-before-seen error at application launch, when running my iOS SpriteKit game on the iOS 18 arm64 simulator from Xcode 16.0 (16A242d) —
AudioConverterOOP.cpp:847 Failed to prepare AudioConverterService: -302
This occurs on all iOS 18 simulator devices, between application(_:didFinishLaunchingWithOptions:) and the first applicationDidBecomeActive(_:) — the SKScene object may already have been initialized by SpriteKit, but the scene's didMove(to:) method hasn't been called yet.
Also, note that the error message is being emitted from a secondary (non-main) thread, obviously not created by the app.
After the error occurs, no SKScene is able to play audio — this had never occurred on iOS versions prior to 18, neither on physical devices nor on the simulator.
Has anyone seen anything like this on a physical device running 18?
Unfortunately, at the moment I cannot test myself on an 18 device, only on the simulator...
Thank you,
D.
Hello,
I am trying to create an MP4 by obtaining the content from another source MP4.
The source MP4 would be read with AVAssetReader and the output written with AVAssetWriter.
I wanted to do partial tests: first, I placed only the video in the output MP4.
Now, I am trying to place only the audio in the output MP4.
I even managed to get the output MP4 to have the same length (in seconds) as the source MP4.
But the problem is simple: the output MP4 is simply silent.
Naturally, I want it to have audio.
Below are two excerpts from the source code.
Reading and writing.
Note: The variable videoURL is from the class where the function writeVideo() is located. Its assignment happens in another scope, already debugged.
Snippet:
let semaphore = DispatchSemaphore(value: 0)

func writeVideo() {
    var audioReaderBuffers = [CMSampleBuffer]()

    // File directory
    url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("*****/output.mp4")
    guard let url = url else { return }
    try? FileManager.default.createDirectory(at: url.deletingLastPathComponent(),
                                             withIntermediateDirectories: true)
    if FileManager.default.fileExists(atPath: url.path()) {
        try? FileManager.default.removeItem(at: url)
    }

    if let videoURL = videoURL {
        let videoAsset = AVAsset(url: videoURL)
        Task {
            let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
            let reader = try AVAssetReader(asset: videoAsset)
            let audioSettings = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2
            ] as [String: Any]
            let audioOutputTest = try await audioTrack.getAudioSettings() // custom helper, defined elsewhere
            let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack,
                                                             outputSettings: audioSettings)
            reader.add(readerAudioOutput)
            reader.startReading()

            while let sampleBuffer = readerAudioOutput.copyNextSampleBuffer() {
                audioReaderBuffers.append(sampleBuffer)
            }
            semaphore.signal()
        }
    }
    semaphore.wait()

    let audioInput = createAudioInput(sampleBuffer: audioReaderBuffers[0])
    guard let assetWriter = try? AVAssetWriter(outputURL: url, fileType: .mp4) else { return }
    assetWriter.add(audioInput)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: .zero)

    for buffer in audioReaderBuffers {
        while !audioInput.isReadyForMoreMediaData {
            usleep(1000)
        }
        audioInput.append(buffer)
    }
    audioInput.markAsFinished() // finish the input before finishing the writer

    assetWriter.finishWriting {
        switch assetWriter.status {
        case .completed:
            print("Operation completed successfully: \(url.absoluteString)")
        case .failed:
            if let error = assetWriter.error {
                print("Error description: \(error.localizedDescription)")
            } else {
                print("Error not found.")
            }
        default:
            print("Error not found.")
        }
    }
}
Below is the createAudioInput method:
func createAudioInput(sampleBuffer: CMSampleBuffer) -> AVAssetWriterInput {
    let audioSettings = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 48000,
        AVEncoderBitRatePerChannelKey: 64000,
        AVNumberOfChannelsKey: 1
    ] as [String: Any]
    let audioInput = AVAssetWriterInput(mediaType: .audio,
                                        outputSettings: audioSettings,
                                        sourceFormatHint: sampleBuffer.formatDescription)
    audioInput.expectsMediaDataInRealTime = false
    return audioInput
}
I await your help. Thank you!