AVAudioRecorder leaves a completely useless chunk of file if a crash happens while recording.
I need to be able to recover. I'm thinking of streaming the recording to disk. I know that is possible with AVAudioEngine, but I also know that API is a headache that will lead to unexpected crashes unless you're lucky or you're the person who built it.
Does Apple have a recommended strategy for failsafe audio recordings? I'm thinking of chunking recordings using many instances of AVAudioRecorder and then stitching those chunks together.
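For reference, here is the rough shape of the streaming approach I'm weighing. It is an untested sketch; StreamingRecorder is just an illustrative name, and I'm assuming (but have not verified) that a CAF file written incrementally stays readable if the process dies mid-write.

import AVFoundation

// Sketch: stream the mic to disk with AVAudioEngine so that data written so far
// survives a crash. Audio session setup and error handling are mostly omitted.
final class StreamingRecorder {
    private let engine = AVAudioEngine()
    private var file: AVAudioFile?

    func start(to url: URL) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        // Writing LPCM into a .caf container; assumption: partial files remain playable.
        file = try AVAudioFile(forWriting: url, settings: format.settings)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            do {
                try self?.file?.write(from: buffer)
            } catch {
                print("write failed: \(error)")
            }
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        file = nil
    }
}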
I'm trying to get the item assigned to currentEntry while a song is playing, but it keeps coming up nil even though playback is in progress. Note that currentEntry returns:
MusicPlayer.Queue.Entry(id: "evn7UntpH::ncK1NN3HS", title: "SONG TITLE")
I'm a bit stumped on the API usage. If the song is playing, how could the queue item be nil?
if queueObserver == nil {
    queueObserver = ApplicationMusicPlayer.shared.queue.objectWillChange
        .sink { [weak self] in
            self?.handleNowPlayingChange()
        }
}

private func handleNowPlayingChange() {
    if let currentItem = ApplicationMusicPlayer.shared.queue.currentEntry {
        if let song = currentItem.item {
            self.currentlyPlayingID = song.id
            self.objectWillChange.send()
            print("Song ID: \(song.id)")
        } else {
            print("NO ITEM: \(currentItem)")
        }
    } else {
        print("No Entries: \(ApplicationMusicPlayer.shared.queue.currentEntry)")
    }
}
The new iPhone 16 supports spatial audio recordings in the camera app when recording videos. Is it possible to also record spatial audio without video, and is it possible for 3rd party developers to do so? If so, how do I need to configure AVAudioSession and/or AVAudioEngine to record spatial audio in my audio recording app on iPhone 16?
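To make the question concrete, the only configuration I can think of is asking the audio session for more input channels. This is only a sketch, and I don't know whether it produces true spatial audio on iPhone 16 or just a plain multichannel capture.

import AVFoundation

// Sketch only: request a multichannel microphone configuration from the session.
// I don't know whether this yields spatial audio capture on iPhone 16.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true)

// Ask for as many input channels as the current route allows.
let maxChannels = session.maximumInputNumberOfChannels
try session.setPreferredInputNumberOfChannels(min(4, maxChannels))
print("Input channels granted: \(session.inputNumberOfChannels)")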
Hello,
I used kAudioDevicePropertyDeviceIsRunningSomewhere to check if an internal or external microphone is being used.
My code works well for the internal microphone, and for microphones which are connected using a cable.
External microphones which are connected using Bluetooth are not reporting their status.
The status is always requested successfully, but it is always reported as inactive.
The main relevant parts of my code:
static inline AudioObjectPropertyAddress
makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress address = {
        selector,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID,
                            AudioObjectPropertySelector selector)
{
    AudioObjectPropertyAddress const address =
        makeGlobalPropertyAddress(selector);
    UInt32 prop;
    UInt32 propSize = sizeof(prop);
    OSStatus const status =
        AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
    if (status != noErr) {
        return 0; // This line never gets executed in my tests. The call above always succeeds, but it always gives back a "false" status.
    }
    return static_cast<BOOL>(prop == 1);
}
...
__block BOOL microphoneActive = NO;

iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
    if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
        microphoneActive = YES;
        *stop = YES;
    }
});
What could cause this and how could it be fixed?
Thank you for your help in advance!
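In case it helps to reproduce the issue, here is a Swift sketch of the same check driven by a property listener instead of polling; observeIsRunningSomewhere is just an illustrative helper name.

import CoreAudio
import Foundation

// Sketch: watch kAudioDevicePropertyDeviceIsRunningSomewhere with a listener block.
func observeIsRunningSomewhere(deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var running: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        var readAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        if AudioObjectGetPropertyData(deviceID, &readAddress, 0, nil, &size, &running) == noErr {
            print("Device \(deviceID) runningSomewhere = \(running)")
        }
    }
    print("Listener registration status: \(status)")
}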
I'm working on a little light and sound controller in Swift, driving DMX lights and audio.
For the audio portion, I need to play a bunch of looping sounds (long-duration MP3s), and occasionally play sound effects (short-duration sounds, varying formats). I want all of this mixed into selected channels on specific devices. That is, I might have one audio stream going to the left channel, and a completely different one going to the right channel.
What's the right API to do this from Swift? Core Audio? AVPlayer stuff?
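A minimal sketch of what I mean, using AVAudioEngine with one player node per stream, hard-panned left and right. It does not handle picking a specific output device or a real channel map, and the file paths are placeholders.

import AVFoundation

// Sketch: two independent streams, one panned hard left and one hard right,
// mixed by AVAudioEngine's main mixer. Looping would reschedule files in a
// completion handler; that part is omitted here.
let engine = AVAudioEngine()
let leftPlayer = AVAudioPlayerNode()
let rightPlayer = AVAudioPlayerNode()
engine.attach(leftPlayer)
engine.attach(rightPlayer)
engine.connect(leftPlayer, to: engine.mainMixerNode, format: nil)
engine.connect(rightPlayer, to: engine.mainMixerNode, format: nil)
leftPlayer.pan = -1.0   // left channel only
rightPlayer.pan = 1.0   // right channel only

let loopA = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/loopA.mp3"))
let loopB = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/loopB.mp3"))

try engine.start()
leftPlayer.scheduleFile(loopA, at: nil, completionHandler: nil)
rightPlayer.scheduleFile(loopB, at: nil, completionHandler: nil)
leftPlayer.play()
rightPlayer.play()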
In the Apple Developer documentation for AVCaptureDevice.DiscoverySession (https://developer.apple.com/documentation/avfoundation/avcapturedevice/discoverysession), you can find the sentence:
You can also key-value observe this property to monitor changes to the list of available devices.
But how to use it?
I tried it with the code below and tested on my MacBook with EarPods.
When I disconnect the EarPods, nothing happens.
MacBook Air M2
macOS Sequoia 15.0.1
Xcode 16.0
import Foundation
import AVFoundation
let discovery_session = AVCaptureDevice.DiscoverySession.init(deviceTypes: [.microphone], mediaType: .audio, position: .unspecified)
let devices = discovery_session.devices
for device in devices {
    print(device.localizedName)
}
let device = devices[0]
let observer = Observer()
discovery_session.addObserver(observer, forKeyPath: "devices", options: [.new, .old], context: nil)
let input = try! AVCaptureDeviceInput(device: device)
let queue = DispatchQueue(label: "queue")
var output = AVCaptureAudioDataOutput()
let delegate = OutputDelegate()
output.setSampleBufferDelegate(delegate, queue: queue)
var session = AVCaptureSession()
session.beginConfiguration()
session.addInput(input)
session.addOutput(output)
session.commitConfiguration()
session.startRunning()
let group = DispatchGroup()
let q = DispatchQueue(label: "", attributes: .concurrent)
q.async(group: group, execute: DispatchWorkItem() {
    sleep(10)
    session.stopRunning()
})
_ = group.wait(timeout: .distantFuture)
class Observer: NSObject {
    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
        print("Change")
        if keyPath == "devices" {
            if let newDevices = change?[.newKey] as? [AVCaptureDevice] {
                print("New devices: \(newDevices.map { $0.localizedName })")
            }
            if let oldDevices = change?[.oldKey] as? [AVCaptureDevice] {
                print("Old devices: \(oldDevices.map { $0.localizedName })")
            }
        }
    }
}
class OutputDelegate: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("Output")
    }
}
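For completeness, a block-based KVO variant of the same observation; discovery_session is the session created above, and the returned token must stay alive for notifications to keep arriving. I have not confirmed it behaves any differently.

// Sketch: block-based KVO on the discovery session's devices property.
let kvoToken = discovery_session.observe(\.devices, options: [.new, .old]) { session, _ in
    print("Devices changed: \(session.devices.map { $0.localizedName })")
}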
Good afternoon
Since I installed iOS 18 on my iPhone 15 Pro, I have had problems using Apple CarPlay with my Ford Puma with SYNC 3. More specifically, problems with audio commands, selecting audio tracks, Bluetooth, etc.
Are you aware of this?
Thanks a lot
Regards
Alberto
I have a recent post outlining a similar question here. This time, I'm confident that inserting an array of Track into ApplicationMusicPlayer.shared.queue works, but I'm not sure how to initialize the queue so it displays, for example, the song title and artwork. I'm also not sure how to get the current queue item's artist and album information, which feels like it should be easy, so maybe I'm missing something obvious. I hope this paints a picture of what I'm trying to do; I'm posting the necessary code here to help debug and figure out this problem.
import SwiftUI
import MusicKit

struct PlayBackView: View {
    @Environment(\.scenePhase) var scenePhase
    @Environment(\.openURL) private var openURL

    // Adding Enum Here for Question Sake
    enum PlayState {
        case play
        case pause
    }

    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false
    @State private var playState: PlayState = .pause
    @State private var songTimer: Int = Int.random(in: 5...30)
    @State private var roundTimer: Int = 5
    @State private var isTimerActive = false
    // @State private var volumeValue = VolumeObserver()
    @State private var isFirstPlay = true
    @State private var isDancing = false
    @State private var player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

    var playPauseImage: String {
        switch playState {
        case .play:
            "pause.fill"
        case .pause:
            "play.fill"
        }
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    /*
                     This is where I want to display song title, album title, and artist name for example
                     */

                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Unable to Find Song Title")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // Album Title
                    // Text(player.queue.currentEntry ?? "Album Title Not Found")
                    //     .font(.caption)
                    //     .fixedSize(horizontal: false, vertical: true)

                    // I don't know what the subtitle actually grabs
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        // Maybe I should use the `.task` modifier here?
        .onAppear {
            // I'm sure this code could be improved but don't think it'll help answer the question at the moment.
            Task {
                if let songs = songs {
                    do {
                        if isShuffled {
                            let shuffledSongs = songs.shuffled()
                            try await player.queue.insert(shuffledSongs, position: .tail)
                            handlePlayButton()
                        } else {
                            try await player.queue.insert(songs, position: .tail)
                        }
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            }
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
                isTimerActive = false
            } else {
                playState = .play
                await playTrack()
                isTimerActive = true
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }
}

//#Preview {
//    PlayBackView()
//}
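What I have in mind for the artist and album text is to switch over currentEntry?.item and read the underlying catalog item. This is an untested sketch, and currentSongInfo is a name I made up.

// Sketch: pull title/artist/album out of the entry's underlying item when present,
// falling back to the entry's own strings otherwise.
private var currentSongInfo: (title: String, artist: String, album: String) {
    guard let entry = player.queue.currentEntry, let item = entry.item else {
        return (player.queue.currentEntry?.title ?? "Unknown Title",
                player.queue.currentEntry?.subtitle ?? "Unknown Artist",
                "Unknown Album")
    }
    switch item {
    case .song(let song):
        return (song.title, song.artistName, song.albumTitle ?? "Unknown Album")
    case .musicVideo(let video):
        return (video.title, video.artistName, "Unknown Album")
    default:
        return (entry.title, entry.subtitle ?? "Unknown Artist", "Unknown Album")
    }
}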
Hi,
In my app I am using MusicLibraryRequest<Artist> to fetch all of the artists in someone's library collection. With this response, I then fetch each artist's albums: artist.with([.albums]).
The response from this only gives albums in the user's library collection. I would like to augment it with all of the albums for an artist from the full catalogue.
I'm using MusicKit and targeting iOS18 and visionOS 2.
Could someone please point me towards the best way to approach this?
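The approach I'm considering, in case it clarifies the question: search the catalog for the library artist by name and then load that catalog artist's albums. This is only a sketch; matching by name is an assumption, since I don't know of a direct library-to-catalog artist mapping.

import MusicKit

// Sketch: map a library artist to a catalog artist by name, then load its albums.
func catalogAlbums(forLibraryArtist artist: Artist) async throws -> MusicItemCollection<Album>? {
    var search = MusicCatalogSearchRequest(term: artist.name, types: [Artist.self])
    search.limit = 1
    let response = try await search.response()
    guard let catalogArtist = response.artists.first else { return nil }
    let detailed = try await catalogArtist.with([.albums])
    return detailed.albums
}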
After investing more than a week into converting a bunch of Audio Unit projects into app + appex + framework, they are all now correctly loaded in-process in the demo host app that is part of Xcode's template.
However, Logic Pro adamantly refuses to load them in-process.
Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic?
The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
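For reference, a sketch of how a host can request in-process loading. My possibly wrong understanding is that the host decides this, so Logic would have to pass the same option; the component type, subtype, and manufacturer below are placeholders.

import AVFoundation

// Sketch: host-side instantiation of an Audio Unit with in-process loading requested.
let desc = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0x64656d6f,          // placeholder subtype
    componentManufacturer: 0x64656d6f,     // placeholder manufacturer
    componentFlags: 0,
    componentFlagsMask: 0)

AVAudioUnit.instantiate(with: desc, options: [.loadInProcess]) { audioUnit, error in
    print("in-process instance:", audioUnit as Any, error as Any)
}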
Is anyone developing a way for users to wirelessly control an iOS or iPadOS device that is playing Apple Music through a USB DAC and amp, from another iOS or iPadOS device? Specifically, full control: not Accessibility, not Apple TV, not HomePods, not firmware-downgraded AirPort Expresses feeding a DAC, or the other hacks mentioned over the past decade that this "connect"-like feature has been desired by audiophiles. Exclusive mode on the playback device is the key feature iOS and iPadOS offer here; what's wanted is that plus full (or nearly full) Apple Music control while sitting on a couch or in a wheelchair across the room.
Hi community,
I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get an error:
AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"
If I understand error code -10868 correctly, the format is not correct. But how can I use the PCM Int16 format? Here is my method:
- (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
    if (self.isRunning) {
        return;
    }

    self.audioEngine = [[AVAudioEngine alloc] init];
    self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:self.audioPlayerNode];

    AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;
    self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                        sampleRate:sampleRate
                                                          channels:channelCount
                                                       interleaved:YES];

    NSLog(@"Audio Format: %@", self.audioFormat);
    NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
    NSLog(@"Audio Engine: %@", self.audioEngine);

    // Error on this line
    [self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];

    /**NSError *error = nil;
    if (![self.audioEngine startAndReturnError:&error]) {
        NSLog(@"Error initializing the audio engine: %@", error);
        return;
    }
    [self.audioPlayerNode play];
    self.isRunning = YES;*/
}
Also, I see that the audioEngine does not seem to be running:
Audio Engine:
________ GraphDescription ________
AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1
Has anyone already used this format with AVAudioFormat?
Thank you!
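One workaround I'm considering (untested): keep the Int16 interleaved format for the incoming data, connect the player node using the mixer's own Float32 format, and convert each buffer with AVAudioConverter. This is a Swift sketch; the sample rate and channel count are assumptions.

import AVFoundation

// Sketch: incoming data is 16-bit interleaved PCM; the mixer wants Float32,
// so convert before scheduling buffers on the player node.
let sampleRate = 48_000.0
let channels: AVAudioChannelCount = 2

let int16Format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                sampleRate: sampleRate,
                                channels: channels,
                                interleaved: true)!
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

// Connect with the mixer's native (Float32) format, not the Int16 format.
let floatFormat = engine.mainMixerNode.outputFormat(forBus: 0)
engine.connect(player, to: engine.mainMixerNode, format: floatFormat)

let converter = AVAudioConverter(from: int16Format, to: floatFormat)!

// Same sample rate on both sides, so the simple buffer-to-buffer conversion applies.
func floatBuffer(from int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let out = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                     frameCapacity: int16Buffer.frameLength) else { return nil }
    do {
        try converter.convert(to: out, from: int16Buffer)
        return out
    } catch {
        print("conversion failed: \(error)")
        return nil
    }
}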
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines.
First, after the AVAssetWriterInput is created:
frameInput.performsMultiPassEncodingIfSupported = true
Second, after the call to multiviewWriter.startWriting():
print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")
Which prints true.
This leads me to believe that the first encoding pass should proceed as normal (even though I haven't handled the logic for the completion of the first pass, etc.).
However, I receive this error when the code attempts to appendTaggedBuffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor:
Fatal error: Failed to append tagged buffers to multiview output
Am I missing a step? Or is the multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
With iOS 18.1 having call recording out of the box, is it now possible to build apps that can record calls?
I could not find anything in the Swift iOS docs yet.
I'm seeking information about the original file schema for an m4a file recorded directly on an iPhone (iPhone 5 running iOS 9.2.0).
I currently have two files from which I extracted metadata using ExifTool.
The first file was provided to me by someone who claims it was recorded on an iPhone 5 with iOS 9.2.0. I would like to verify whether this file has been edited.
File Permissions: -rwx------
Content Create Date: 2016:03:01 14:21:08+07:00
The second file was recorded by me on the same device model and iOS version.
File Permissions: -rw-r--r--
Date/Time Original: 2024:10:03 11:44:16+07:00
As you can see, the file permissions differ, and the key for the recording date also differs: one uses "Content Create Date" while the other uses "Date/Time Original." I would like to determine if the first file was edited, but I haven't been able to find any official documentation on the m4a schema or metadata structure from audio recorder apps. I reached out to support, and they directed me to this forum. Any insights or help would be appreciated.
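For anyone who wants to compare the two files with AVFoundation instead of ExifTool, here is a sketch that dumps whatever metadata it can read; the URL is a placeholder, and I don't know whether it surfaces the same keys ExifTool shows.

import AVFoundation

// Sketch: print every metadata item AVFoundation exposes for a file,
// so the keys (e.g. creation date) can be compared side by side.
func dumpMetadata(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    let metadata = try await asset.load(.metadata)
    for item in metadata {
        let key = item.commonKey?.rawValue ?? item.identifier?.rawValue ?? "unknown"
        let value = try await item.load(.value)
        print("\(key): \(String(describing: value))")
    }
}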
We have tested the iOS AAC-LC encoder using the AudioToolbox framework. No matter whether we set mManufacturer to kAppleHardwareAudioCodecManufacturer or kAppleSoftwareAudioCodecManufacturer, it always runs on the CPU.
We are working on an app for the Vision Pro which has a high polygon count and lots of high-resolution textures. Everything looks smooth and, in general, very good. The issue is that the moment we turn on Voice Control, even if it is not being used, the visuals at the center start to stutter left to right. Has anyone seen this? It must be a bug. Is there any workaround?
Thanks,
Guillermo
When we tested the audio quality of our VoIP app, we found that when an iOS 18.0 device played audio through AirPods Pro 2, we could hear noises similar to peak clipping and distortion, especially when the source audio was loud and high-pitched. Here is the device information we tested:
Model: iPhone 16 Pro Max, iPhone 15 Pro
System version: iOS 18.0 (22A3354)
Bluetooth headset model: AirPods Pro 2
Bluetooth firmware version: 6F8
We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, Tencent Meeting), and they all had the above noise problem.
We also found two phenomena:
If we use the same iOS 18 device to connect HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem;
If we use an iOS 17 device to connect to the same AirPods Pro 2 for testing, there is no such noise problem;
Therefore, we suspect that it is caused by a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, which was released on June 26, and iOS 18.0 was released on September 16; maybe they are not fully compatible. I hope that a subsequent firmware update can fix this problem.
I’m working on an iOS app for a client, and I have a question regarding a specific feature we're looking to implement.
We want the app to respond to a user pressing the volume button three times while the app is in the background. The goal is to allow users to discreetly trigger a safety feature without drawing attention, particularly in situations where they may be in danger or at risk.
This feature is critical for the app and would be a valuable addition, as it could potentially help protect users in emergency situations. However, I haven’t found much information on whether iOS allows background listening for volume button presses. Therefore, I would greatly appreciate your insights on the following:
Is it possible to listen for volume button presses when the app is in the background, or are there system-level restrictions that prevent this?
If it's not directly possible, are there any special provisions, APIs, or entitlements that can be requested from Apple to enable this functionality?
In case this feature is not supported, are there alternative approaches to achieve a similar discreet activation mechanism?
If this is something that requires special permission or a process, could you please guide me on how to proceed?
I understand that maintaining user privacy and security is a priority for iOS, and I want to ensure that any implementation fully complies with Apple's guidelines.
Thanks in advance for your help!
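For context, the only mechanism I know of is KVO on AVAudioSession's outputVolume, sketched below. As far as I can tell it only delivers changes while the app is in the foreground (or has an active audio background mode), so it may not meet the background requirement at all; VolumeTripleTapDetector is just an illustrative name.

import AVFoundation

// Sketch: detect three volume changes within a short window via KVO on outputVolume.
final class VolumeTripleTapDetector {
    private var observation: NSKeyValueObservation?
    private var pressTimestamps: [Date] = []

    func start(onTriplePress: @escaping () -> Void) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(true)
        observation = session.observe(\.outputVolume, options: [.new]) { [weak self] _, _ in
            guard let self else { return }
            let now = Date()
            // Keep only presses from the last 2 seconds, then add this one.
            self.pressTimestamps = self.pressTimestamps.filter { now.timeIntervalSince($0) < 2 } + [now]
            if self.pressTimestamps.count >= 3 {
                self.pressTimestamps.removeAll()
                onTriplePress()
            }
        }
    }
}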
Hi,
For the implementation of an audio player with signed URLs, I need to be able to set an authorization header on the requests for an AVURLAsset.
This works, but not over AirPlay when trying to stream multiple songs in a queue.
For each item I do:
let headerFields: [String: String] = ["Authorization": getIdToken()!]
super.init(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headerFields])
But only the first 2 songs in the queue actually get this authorization header sent along, somehow it is removed for subsequent songs.
Any ideas on how I can fix this?
thanks,
Thomas