Hi,
I'm working on an audio mixing app that comes with bundled audio units that provide some of the app's core functionality.
For the next release of that app, we are planning to make two changes:
make the app sandboxed
package the bundled audio units as .appex bundles instead of .component bundles, so we don't need to take care of installing them at the correct spot in the file system
When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit with the exception:
AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863
Our audio unit has the sandboxSafe flag enabled, and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle id/code signing requirements for the .appex correct.
It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there is something wrong with the Info.plist generated by Juice?
"BuildMachineOSBuild" => "23H222"
"CFBundleDisplayName" => "elgato_sample_recorder"
"CFBundleExecutable" => "ElgatoSampleRecorder"
"CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
"CFBundleName" => "elgato_sample_recorder"
"CFBundlePackageType" => "XPC!"
"CFBundleShortVersionString" => "1.0.0.0"
"CFBundleSignature" => "????"
"CFBundleSupportedPlatforms" => [
0 => "MacOSX"
]
"CFBundleVersion" => "1.0.0.0"
"DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
"DTPlatformBuild" => "24C94"
"DTPlatformName" => "macosx"
"DTPlatformVersion" => "15.2"
"DTSDKBuild" => "24C94"
"DTSDKName" => "macosx15.2"
"DTXcode" => "1620"
"DTXcodeBuild" => "16C5032a"
"LSMinimumSystemVersion" => "10.13"
"NSExtension" => {
"NSExtensionAttributes" => {
"AudioComponents" => [
0 => {
"description" => "Elgato Sample Recorder"
"factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
"manufacturer" => "Manu"
"name" => "Elgato: Elgato Sample Recorder"
"sandboxSafe" => 1
"subtype" => "Znyk"
"tags" => [
0 => "Effects"
]
"type" => "aufx"
"version" => 65536
}
]
}
"NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
"NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
}
"NSHighResolutionCapable" => 1
}
Any ideas what I am missing?
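One thing that can help narrow this down is checking whether the host can see the component at all via AVAudioUnitComponentManager. A minimal sketch, using the type/subtype/manufacturer codes from the plist above (fourCC is just a small local helper, not a system API):

import AVFoundation

// Hypothetical helper that packs a 4-character string into a FourCharCode.
func fourCC(_ s: String) -> FourCharCode {
    var code: FourCharCode = 0
    for byte in s.utf8 { code = (code << 8) | FourCharCode(byte) }
    return code
}

// Matches the AudioComponents entry from the Info.plist above.
let desc = AudioComponentDescription(
    componentType: fourCC("aufx"),
    componentSubType: fourCC("Znyk"),
    componentManufacturer: fourCC("Manu"),
    componentFlags: 0,
    componentFlagsMask: 0
)

// An empty result means the system never registered the extension,
// which would point at packaging/metadata rather than the code inside it.
let matches = AVAudioUnitComponentManager.shared().components(matching: desc)
print("Registered matches: \(matches.map { $0.name })")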
Please update Android MusicKit: version 1.1.2 fails to compile. The error message: SDKUriHandlerActivity: Apps targeting Android 12 and higher are required to specify an explicit value for android:exported when the corres
In MusicKit Web the playback states are provided as numbers.
For example the playbackStateDidChange event listener will return:
{oldState: 2, state: 3, item:...}
When the state changes from playing (2) to paused (3).
Those are pretty easy to guess, but I'm having a hard time with some of the others: completed, ended, loading, none, paused, playing, seeking, stalled, stopped, waiting.
I cannot find a mapping of states to numbers documented anywhere. I got the above states from an enum in a d.ts file that is often incorrect/incomplete.
Can someone help out pointing to the docs or provide a mapping?
Thanks.
I am developing an app that uses MusicKit to play music, and I then need to have spoken words played to the user while ducking the audio coming from MusicKit (the application music player).
The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then play it back using AVAudioSession.
Sample code below
the problem I am having is that .duckOthers is not ducking the Application Music Player output
Is this a bug or am I doing this wrong?
// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
// Use a small I/O buffer duration for lower latency
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)
// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()
// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
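For reference, the teardown I would pair with this setup deactivates the session once the speech clip finishes, so that other audio can come back up to full volume. A minimal sketch, assuming the class that owns audioPlayer adopts AVAudioPlayerDelegate:

// Called by AVAudioPlayer when the speech clip finishes.
func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
    do {
        // .notifyOthersOnDeactivation asks the system to restore other audio.
        try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
    } catch {
        print("Failed to deactivate audio session: \(error)")
    }
}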
Hi, in my project I am using AVFoundation for recording audio. We are using the AVAudioMixerNode method below to record the audio packets.
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
It works perfectly fine.
But in production, for a small percentage of users, we are facing an issue where, after recording a few packets, recording stops automatically without the audio engine being stopped. Can anyone help with why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when this issue happens I don't see any occurrence of that log. Also, is there any callback for when the engine stops?
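For context, a minimal version of this kind of setup (a sketch, not our production code) taps the engine's input node and also listens for AVAudioEngineConfigurationChange, which is posted when the system reconfigures audio and the engine stops:

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Record incoming audio packets from the input node.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, time in
    // Hand the buffer off to the recorder / file writer here.
}

// Keep the token; the notification is posted when the engine's configuration
// changes (e.g. on a route change) and the engine has stopped as a result.
let configToken = NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: engine,
    queue: .main
) { _ in
    // Minimal response; real code may need to rebuild connections and taps.
    if !engine.isRunning {
        try? engine.start()
    }
}

do {
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}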
Overview
We are producing audio in real time from an editing application and are trying to put that on an HLS stream. We attempt to submit PCM samples through an audio writer but are getting a crash after a select number of samples have been appended.
Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash but it always has the same traceback (see below).
Code
The setup is rather simple. We took inspiration from a few sources around the web.
NSMutableDictionary *audio = [[NSMutableDictionary alloc] init];
[audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey];
[audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000
forKey:AVSampleRateKey];
[audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2
forKey:AVNumberOfChannelsKey];
[audio setObject:@160000 forKey:AVEncoderBitRateKey];
m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio];
m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
outputSettings:m_audioConfig];
AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount;
AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat
frameCapacity:audioFrames];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;
AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
CMFormatDescriptionRef format;
OSStatus stats = CMAudioFormatDescriptionCreate(
kCFAllocatorDefault,
pcmBuffer.format.streamDescription,
sizeof(layout),
&layout,
0,
nil,
nil,
&format
);
for (int i = 0; i < bCount; i++)
{
AudioPCM pcm;
audioCallback->callback(pcm);
memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize);
}
size_t samplesConsumed = BUFFER_SAMPLES * bCount;
CMSampleBufferRef sampleBuffer;
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, config.audioSampleRate);
timing.presentationTimeStamp = presentationTime;
timing.decodeTimeStamp = kCMTimeInvalid;
OSStatus ostatus = CMSampleBufferCreate(
kCFAllocatorDefault,
nil,
false,
nil,
nil,
format,
(CMItemCount)pcmBuffer.frameLength,
1,
&timing,
0,
nil,
&sampleBuffer
);
////
ostatus = CMSampleBufferSetDataBufferFromAudioBufferList(
sampleBuffer,
kCFAllocatorDefault,
kCFAllocatorDefault,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
pcmBuffer.audioBufferList
);
if (ostatus != noErr)
{
NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus));
return;
}
ostatus = CMSampleBufferSetDataReady(sampleBuffer);
if (ostatus != noErr)
{
NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus));
return;
}
// Finally we can attach it, then shove the presentation time forward
[m_audio appendSampleBuffer:sampleBuffer];
The Crash
The crash points towards some sort of deallocation when the conversion tooling is done, or when it has enough samples to process an output packet? It's hard to say.
0 caulk 0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636
1 AudioToolboxCore 0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112
2 AudioToolboxCore 0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68
3 AudioToolboxCore 0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196
4 AudioToolboxCore 0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16
5 AudioToolboxCore 0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84
6 AudioToolboxCore 0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116
7 AudioToolboxCore 0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808
8 AudioToolboxCore 0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84
9 AudioToolboxCore 0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60
10 AudioToolboxCore 0x1993281e4 AudioConverterSetProperty + 72
11 MediaToolbox 0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296
12 MediaToolbox 0x1a754db08 0x1a70b5000 + 4819720
13 MediaToolbox 0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100
14 MediaToolbox 0x1a77ebb98 0x1a70b5000 + 7564184
15 MediaToolbox 0x1a7804158 0x1a70b5000 + 7663960
16 MediaToolbox 0x1a7801da0 0x1a70b5000 + 7654816
17 AVFCore 0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192
18 AVFCore 0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500
19 AVFCore 0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472
20 AVFCore 0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128
21 AVFCore 0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168
22 lib_devapple_hls.dylib 0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052
23 lib_devapple_hls.dylib 0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72
24 libsystem_pthread.dylib 0x196e5b2e4 _pthread_start + 136
Any insight would be welcome!
Dear Sirs,
I'd like to add an icon to my audio driver based on AudioDriverKit. This icon should show up left of my audio device in the audio devices dialog. For an Audio Server Plugin I managed to do this using the property kAudioDevicePropertyIcon and CFBundleCopyResourceURL(...) but how would you do this with AudioDriverKit? Should I use IOUserAudioCustomProperty or IOUserAudioControl and how would I refer to the Bundle? Is there an example available somewhere?
Thanks and best regards,
Johannes
Hi all,
I have been quite stumped on this behavior for a little bit now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, giving it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn causes all players attached to this engine to be stopped.
Once an AVAudioPlayerNode gets stopped due to this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler checking whether the player isPlaying or not. If it's playing, I assume the sound actually was played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them.
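To make that concrete, here is a stripped-down sketch of the queue-and-reschedule idea (pendingBuffers and the other names are placeholders, not the real code):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

// Buffers handed to the player but not yet confirmed as actually played.
var pendingBuffers: [AVAudioPCMBuffer] = []

func schedule(_ buffer: AVAudioPCMBuffer) {
    pendingBuffers.append(buffer)
    player.scheduleBuffer(buffer) {
        // The completion handler also fires when a buffer was purged by a reset,
        // so only count it as played while the player is still running.
        if player.isPlaying, !pendingBuffers.isEmpty {
            pendingBuffers.removeFirst()
        }
    }
}

// When the engine resets itself (e.g. on a route change), restart it and
// reschedule whatever is still queued. Keep the observer token around.
let configToken = NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: engine,
    queue: .main
) { _ in
    try? engine.start()
    player.play()
    for buffer in pendingBuffers {
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}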
Thanks for any feedback!
Does anyone know how to play the sound of a specific instrument when tapping a button on the screen of an iPhone or iPad? I'm in the middle of creating a music learning app, and I'm thinking of assigning single tones or chords to the button-like frames on the keyboard and fingerboard on the screen. Can it be achieved with SwiftUI alone? I remember that MIDI Level 1 had a function for playing instrument sounds, but I don't know how to implement the same function on the current OS. Please lend me your wisdom.
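For reference, a minimal sketch of triggering notes from a button action with AVAudioUnitSampler attached to an AVAudioEngine (without loading an instrument the sampler plays a plain default tone; loadSoundBankInstrument(at:program:bankMSB:bankLSB:) can load a SoundFont for real instrument sounds):

import AVFoundation

final class NotePlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        try? engine.start()
    }

    // Call from a SwiftUI Button action; 60 is middle C.
    func play(note: UInt8) {
        sampler.startNote(note, withVelocity: 80, onChannel: 0)
    }

    func stop(note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }

    // A chord is just several notes started together.
    func play(chord: [UInt8]) {
        for note in chord { play(note: note) }
    }
}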
Hello,
I'm trying to receive parquet files using the example provided in the documentation. I've done all the required steps but constantly receive error 500 with "Upstream Service Error". Looking at the issues list, it seems this error has existed for months. Is it possible to get it working?
Mobile app - Ellie's Gift
https://apps.apple.com/gb/app/ellies-gift/id1617597875
Using AVFoundation to play audio tracks within the app.
It has always been working fine across Apple and Android, but iPhone 14 and newer devices are unable to play audio.
Any ideas or suggestions?
Hello,
I can successfully match music using ShazamKit on Apple platforms with SwiftUI - a simple app that lets the user load an audio file and extracts the corresponding match - while I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Android Kotlin code is in this method:
suspend fun processAudioFileInBackground(
filePath: String,
developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
val bufferSize = 1024 * 1024
val audioFile = FileInputStream(filePath)
val byteBuffer = ByteBuffer.allocate(bufferSize)
byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
var bytesRead: Int
while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
val signature = signatureGenerator.generateSignature()
println("Signature: ${signature.durationInMs}")
val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
val matchResult = session.match(signature)
println("MatchResult : $matchResult")
setMatchResult(matchResult)
byteBuffer.clear()
}
audioFile.close()
}
I noticed that changing the Locale in catalog creation gives a different result, as I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
I'm facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough or it didn't work at all.
What mostly troubles me is that the AirPods I have don't recognize my app as one having spatial audio (in audio settings it shows "Spatial Audio Not Playing"), so I guess my app doesn't use its spatial audio potential.
The first approach uses AVAudioEnvironmentNode with AVAudioEngine. Changing the position of the player, as well as changing the listener's, doesn't seem to change anything in how the audio plays.
Here's how I initialize AVAudioEngine:
import Foundation
import AVFoundation
class AudioManager: ObservableObject {
// important class variables
var audioEngine: AVAudioEngine!
var environmentNode: AVAudioEnvironmentNode!
var playerNode: AVAudioPlayerNode!
var audioFile: AVAudioFile?
...
//Sound set up
func setupAudio() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)
} catch {
print("Failed to configure AVAudioSession: \(error.localizedDescription)")
}
audioEngine = AVAudioEngine()
environmentNode = AVAudioEnvironmentNode()
playerNode = AVAudioPlayerNode()
audioEngine.attach(environmentNode)
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: environmentNode, format: nil)
audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)
environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
environmentNode.distanceAttenuationParameters.referenceDistance = 1.0
environmentNode.distanceAttenuationParameters.maximumDistance = 100.0
environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0
// example.mp3 is mono sound
guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else {
print("Audio file not found")
return
}
do {
audioFile = try AVAudioFile(forReading: audioURL)
} catch {
print("Failed to load audio file: \(error)")
}
}
...
//Playing sound
func playSpatialAudio(pan: Float ) {
guard let audioFile = audioFile else { return }
// left side
playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0)
playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
do {
try audioEngine.start()
playerNode.play()
} catch {
print("Failed to start audio engine: \(error)")
}
...
}
The second, more complex approach using PHASE did better. I've made an example app that allows the user to move the audio source in 3D space. I added reverb, and sliders changing the audio position up to 10 meters in each direction from the listener, but the audio only really seems to change left to right (x axis) - again, I think it might be because the app is not recognized as spatial.
//Crucial class Variables:
class PHASEAudioController: ObservableObject{
private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4
private var audioAsset: PHASESoundAsset!
private let phaseEngine: PHASEEngine
private let params = PHASEMixerParameters()
private var soundSource: PHASESource
private var phaseListener: PHASEListener!
private var soundEventAsset: PHASESoundEventNodeAsset?
// Initialization of PHASE
init() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)
} catch {
print("Failed to configure AVAudioSession: \(error.localizedDescription)")
}
// Init PHASE Engine
phaseEngine = PHASEEngine(updateMode: .automatic)
phaseEngine.defaultReverbPreset = .mediumHall
phaseEngine.outputSpatializationMode = .automatic //nothing helps
// Set listener position to (0,0,0) in World space
let origin: simd_float4x4 = matrix_identity_float4x4
phaseListener = PHASEListener(engine: phaseEngine)
phaseListener.transform = origin
phaseListener.automaticHeadTrackingFlags = .orientation
try! self.phaseEngine.rootObject.addChild(self.phaseListener)
do{
try self.phaseEngine.start();
}
catch {
print("Could not start PHASE engine")
}
audioAsset = loadAudioAsset()
// Create sound Source
// Sphere
soundSourcePosition.translate(z:3.0)
let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1,0.1,0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil)
let shape = PHASEShape(engine: phaseEngine, mesh: sphere)
soundSource = PHASESource(engine: phaseEngine, shapes: [shape])
soundSource.transform = soundSourcePosition
print(soundSourcePosition)
do {
try phaseEngine.rootObject.addChild(soundSource)
}
catch {
print ("Failed to add a child object to the scene.")
}
let simpleModel = PHASEGeometricSpreadingDistanceModelParameters()
simpleModel.rolloffFactor = rolloffFactor
soundPipeline.distanceModelParameters = simpleModel
let samplerNode = PHASESamplerNodeDefinition(
soundAssetIdentifier: audioAsset.identifier,
mixerDefinition: soundPipeline,
identifier: audioAsset.identifier + "_SamplerNode")
samplerNode.playbackMode = .looping
do {soundEventAsset = try
phaseEngine.assetRegistry.registerSoundEventAsset(
rootNode: samplerNode,
identifier: audioAsset.identifier + "_SoundEventAsset")
} catch {
print("Failed to register a sound event asset.")
soundEventAsset = nil
}
}
//Playing sound
func playSound(){
// Fire new sound event with currently set properties
guard let soundEventAsset else { return }
params.addSpatialMixerParameters(
identifier: soundPipeline.identifier,
source: soundSource,
listener: phaseListener)
let soundEvent = try! PHASESoundEvent(engine: phaseEngine,
assetIdentifier: soundEventAsset.identifier,
mixerParameters: params)
soundEvent.start(completion: nil)
}
...
}
Also worth mentioning might be that I only own a personal team account.
Hi,
macOS (latest macOS, latest HW, but doesn't matter) seems to prevent CoreMIDI driver logging with standard logging procedures (syslog, unified logging).
The only chance to log something is writing to a file at one of the rare write-accessible locations for CoreMIDI.
How is this supposed to work? Any hint is highly appreciated. Thanks!
Hello!
I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.
Desired behavior:
Play audio through Bluetooth headset (AirPods)
Record unprocessed environmental audio from the iPhone's built-in microphone
Actual behavior:
When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
However, the actual audio data received is clearly still coming from the AirPods microphone
The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds
Environment Details
Device: iPhone 12 Pro Max
iOS Version: 18.4.1
Hardware: AirPods
Audio Framework: AVAudioEngine (also tried AudioQueue)
Code Attempted
I've tried multiple approaches to force the correct routing:
func configureAudioSession() {
let session = AVAudioSession.sharedInstance()
// Configure to allow Bluetooth output but use built-in mic
try? session.setCategory(.playAndRecord,
options: [.allowBluetoothA2DP, .defaultToSpeaker])
try? session.setActive(true)
// Explicitly select built-in microphone
if let inputs = session.availableInputs,
let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
try? session.setPreferredInput(builtInMic)
print("Selected input: \(builtInMic.portName)")
}
// Log the current route
let route = session.currentRoute
print("Current input: \(route.inputs.first?.portName ?? "None")")
// Configure audio engine with native format
let inputNode = audioEngine.inputNode
let nativeFormat = inputNode.inputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
// Process audio buffer
// Despite showing "Built-in Microphone" in route, audio appears to be
// coming from AirPods with voice isolation applied - welp!
}
try? audioEngine.start()
}
I've also tried various combinations of:
Different audio session modes (.default, .measurement, .voiceChat)
Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
Setting session.setPreferredInput() both before and after activation
Diagnostic Observations
When AirPods are connected:
AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
The actual audio data received shows clear signs of AirPods' voice isolation processing
Background/environmental sounds are actively filtered out...
When recording a test audio clip played near the phone (not through the app), the recording is nearly silent; only the headset voice comes through.
Questions
Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
Are there any lower-level configurations that might resolve this issue?
Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
I've been trying to use AVMIDIControlChangeEvent with a bankSelect message type to change the instrument the sequencer uses on a AVMusicTrack with no luck.
I started with the Apple AVAEMixerSample, converting the initial setup/loading and portions dealing with the sequencer to Swift. I got that working and playing the "bluesyRiff" and then modified it to play individual notes. So my createAndSetupSequencer looked like
func createAndSetupSequencer() {
sequencer = AVAudioSequencer(audioEngine: engine)
// guard let midiFileURL = Bundle.main.url(forResource: "bluesyRiff", withExtension: "mid") else {
// print (" failed guard trying to get URL for bluesyRiff")
// return
// }
let track = sequencer.createAndAppendTrack()
var currTime = 1.0
for i: UInt32 in 0...8 {
let newNoteEvent = AVMIDINoteEvent(channel: 0, key: 60+i, velocity: 64, duration: 2.0)
track.addEvent(newNoteEvent, at: AVMusicTimeStamp(currTime))
currTime += 2.0
}
The notes played, so then I also replaced the gs_instruments sound bank with GeneralUser GS MuseScore v1.442 first by trying
guard let soundBankURL = Bundle.main.url(forResource: "GeneralUser GS MuseScore v1.442", withExtension: "sf2") else {
return}
do {
try sampler.loadSoundBankInstrument(at: soundBankURL, program: 0x001C, bankMSB: 0x79, bankLSB: 0x08)
} catch{....
}
This appears to work, the instrument (8 which is "Funk Guitar") plays. If I change to bankLSB: 0x00 I get the "Palm Muted guitar". So I know that the soundfont has these instruments
Stuff goes off the rails when I try to change the instruments in createAndSetupSequencer. Putting
let programChange = AVMIDIProgramChangeEvent(channel: 0, programNumber: 0x001C)
let bankChange = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.bankSelect, value: 0x00)
track.addEvent(programChange, at: AVMusicTimeStamp(1.0))
track.addEvent(bankChange, at: AVMusicTimeStamp(1.0))
just before my add note loop doesn't produce any change. Loading bankLSB 8 (Funk) in sampler.loadSoundBankInstrument and trying to change with bankSelect 0 (Palm muted) in createAndSetupSequencer results in instrument 8 (Funk) playing not Palm Muted.
Loading bankLSB 0 (Palm muted) and trying to change with bankSelect 8 (Funk) doesn't work, 0 (Palm muted) plays
I also tried sampler.loadInstrument(at: soundBankURL) and then I always get the first instrument in the sound font file (piano) no matter what values I put in my programChange/bankChange
I've also changed the time in the track.addEvent to be 0, 1.0, 3.0 etc to no success
The sampler.loadSoundBankInstrument specifies two UInt8 parameters, bankMSB and BankLSB while the AVMIDIControlChangeEvent bankSelect value is UInt32 suggesting it might be some combination of bankMSB and BankLSB. But the documentation makes no mention of what this should look like. I tried various combinations of 0x7908, 0X0879 etc to no avail
I will also point out that I am able to successfully execute other control change events
For example adding
if i == 1 {
let portamentoOnEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamento, value: 0xFF)
track.addEvent(portamentoOnEvent, at: AVMusicTimeStamp(currTime))
let portamentoRateEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamentoTime, value: 64)
track.addEvent(portamentoRateEvent, at: AVMusicTimeStamp(currTime))
}
does produce a change in the sound. (As an aside, a definition of what portamento time is, other than "the rate of portamento", would be welcome. Is it notes/second? freq/minute? beats/hour?)
I was able to get the instrument to change in a different program using MusicPlayer and a series of MusicTrackNewMIDIChannelEvent on a track but these operate on a MusicTrack not the AVMusicTrack which the sequencer uses.
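For reference, a rough sketch of that MusicSequence route (not the original code; it assumes the same bank MSB 0x79 / bank LSB 0x08 and program 0x1C as the SoundFont example above, sent as raw MIDI channel messages):

import AudioToolbox

var sequence: MusicSequence?
NewMusicSequence(&sequence)

var track: MusicTrack?
MusicSequenceNewTrack(sequence!, &track)

// Bank select is CC 0 (MSB) and CC 32 (LSB), followed by a program change,
// all on the same channel and at the same timestamp.
var bankMSB = MIDIChannelMessage(status: 0xB0, data1: 0, data2: 0x79, reserved: 0)
var bankLSB = MIDIChannelMessage(status: 0xB0, data1: 32, data2: 0x08, reserved: 0)
var program = MIDIChannelMessage(status: 0xC0, data1: 0x1C, data2: 0, reserved: 0)
MusicTrackNewMIDIChannelEvent(track!, 0, &bankMSB)
MusicTrackNewMIDIChannelEvent(track!, 0, &bankLSB)
MusicTrackNewMIDIChannelEvent(track!, 0, &program)

// A test note after the instrument change.
var note = MIDINoteMessage(channel: 0, note: 60, velocity: 64, releaseVelocity: 0, duration: 2.0)
MusicTrackNewMIDINoteEvent(track!, 1.0, &note)

// Played here through the default synth; wiring it to the sampler is a separate step.
var musicPlayer: MusicPlayer?
NewMusicPlayer(&musicPlayer)
MusicPlayerSetSequence(musicPlayer!, sequence)
MusicPlayerStart(musicPlayer!)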
Has anyone been successful in switching instruments through an AVMIDIControlChangeEvent or have any feedback on how to do this?
I'm using a 4 channel USB Audio interface, with 4 microphones, and want to process them through 4 independent effect chains. However the output from AVAudioInputNode is a single 4 channel bus. How can I split this into 4 mono busses?
The following code splits the input into 4 copies, and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from the bus? I tried using channelMap on the mixer node but that had no effect.
I'm currently using this code primarily on iOS but it should be portable between iOS and macOS. It would be possible to do this through a Matrix Mixer Node, but that seems completely overkill for such a basic operation. I'm already using a Matrix Mixer to combine the inputs, and it's not well supported in AVAudioEngine.
AVAudioInputNode *inputNode=[engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];
NSMutableArray *micDestinations=[NSMutableArray arrayWithCapacity:trackCount];
for(i=0;i<trackCount;i++)
{
fixMicFormat[i]=[AVAudioMixerNode new];
[engine attachNode:fixMicFormat[i]];
// And create reverb/compressor and eq the same way...
[engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
[engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
[engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
[engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];
[micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0] ];
}
AVAudioFormat *inputFormat = [inputNode outputFormatForBus: 1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
Hello,
Has anyone else experienced variations in the accuracy of the playbackTime value? After a few seconds of playback, the reported time adjusts by a fraction of a second, making it difficult to calculate the actual playbackTime of the audio.
This can be recreated by playing a song in MusicKit, recording the start time of the audio, playing for at least 10-20 seconds, and then comparing the playbackTime value to one calculated using the start time of the audio. In my experience this jump occurs after about 10 seconds of playback.
Any help would be appreciated.
Thanks!
I have used AVQueuePlayer in my music app to play a sequence of audio tracks from a remote server. This is how I have defined my player in my ViewModel:
Variables
private var cancellables = Set<AnyCancellable>()
private let audioSession = AVAudioSession.sharedInstance()
private var avQueuePlayer: AVQueuePlayer?
@Published var playbackSpeed: Float = 1.0
Before starting playback, I am making sure that the audio session is set up properly; the code snippet used for that is:
do {
try audioSession.setCategory(.playback, mode: .default, options: [])
try audioSession.setActive(true, options: [])
} catch {
return
}
and this is the function I am using to update playback speed
func updatePlaybackSpeed(_ newSpeed: Float){
if newSpeed > 0.0, newSpeed <= 2.0{
playbackSpeed = newSpeed
avQueuePlayer?.rate = newSpeed
print("requested speed is (newSpeed) and actual speed is (String(describing: avQueuePlayer?.rate))")
}
}
Sometimes the player plays at whatever speed was set:
e.g. once I got "requested speed is 1.5 and actual speed is 1.5", and the player also seemed to play at a speed of 1.5,
but another time I got "requested speed is 2.0 and actual speed is 2.0", yet the player still seemed to play at a speed of 1.0.
to observe changes in rate, I used this
private func observeRateChanges() {
guard let avQueuePlayer = self.avQueuePlayer else { return }
NotificationCenter.default.publisher(for: AVQueuePlayer.rateDidChangeNotification, object: avQueuePlayer)
.compactMap { $0.userInfo?[AVPlayer.rateDidChangeReasonKey] as? AVPlayer.RateDidChangeReason }
.sink { reason in
switch reason {
case .appBackgrounded:
print("The app transitioned to the background.")
case .audioSessionInterrupted:
print("The system interrupts the app’s audio session.")
case .setRateCalled:
print("The app set the player’s rate.")
case .setRateFailed:
print("An attempt to change the player’s rate failed.")
default:
break
}
}
.store(in: &cancellables)
}
When the rate was set properly, I got "The app set the player's rate." from the above function, but when it wasn't, I got "An attempt to change the player's rate failed."
Now I am not able to understand why the rate is not being set, and if updatePlaybackSpeed printed "requested speed is 2.0 and actual speed is 2.0", why does the player still seem to play at a speed of 1.0?
Is it possible to play WebM audio on iOS? Either with AVPlayer, AVAudioEngine, or some other API?
Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen).
Our use case is making it possible for iOS users to play back audio recorded in our web app (desktop versions of Chrome & Firefox only support WebM as a destination format for MediaRecorder).