Hi,
I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU.
Is there an LPX (Logic Pro X) resource available for developers anywhere?
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;

    if (playHead->getCurrentPosition (posInfo))
    {
        const bool isCurrentlyPlaying = posInfo.isPlaying;

        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            wasTransportPlaying = isCurrentlyPlaying;

            if (isCurrentlyPlaying)
                startAllTimers();
            else
                stopAllTimers();
        }
    }
}
thanks :)
There appears to be no method of going forward or backward in Get Info in the Music application.
When using the Apple Devices app to sync Apple Music to an iPhone (Apple Devices -> Music -> Sync), where is the Apple Devices backup being written to?
I'm not trying to back up the iPhone via the Apple Devices app.
Let's consider the following code.
I've created an actor that loads a list of .mp3 files from a Bundle and then makes them available for audio playback.
Unfortunately, I'm experiencing a memory leak at the play method:
player.play()
From Instruments I get:
_malloc_type_malloc_outlined (libsystem_malloc.dylib)
start_wqthread (libsystem_pthread.dylib)
import AVFoundation

// `AudioPlayerClient.Sound` and `UncheckedSendable` come from the surrounding
// project; they are not defined in this snippet.
private actor AudioActor {
    typealias Sound = AudioPlayerClient.Sound

    enum Failure: Error {
        case soundsNotLoaded([AudioPlayerClient.Sound: Error])
    }

    enum Player {
        case music(AVAudioPlayer)
    }

    var players: [Sound: Player] = [:]
    let bundles: [Bundle]

    init(bundles: UncheckedSendable<[Bundle]>) {
        self.bundles = bundles.wrappedValue
    }

    func load(sounds: [Sound]) throws {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
        var errors: [Sound: Error] = [:]
        for sound in sounds {
            // Search every bundle for the file; the original snippet
            // referenced an undefined `bundle`.
            guard let url = bundles.lazy
                .compactMap({ $0.url(forResource: sound.name, withExtension: "mp3") })
                .first
            else { continue }
            do {
                self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
            } catch {
                errors[sound] = error
            }
        }
        guard errors.isEmpty
        else { throw Failure.soundsNotLoaded(errors) }
    }

    func play(sound: Sound, loops: Int?) throws {
        guard let player = self.players[sound]
        else { return }
        switch player {
        case let .music(player):
            player.numberOfLoops = loops ?? -1
            player.play()
        }
    }

    func stop(sound: Sound) throws {
        guard let player = self.players[sound]
        else { throw Failure.soundsNotLoaded([:]) }
        switch player {
        case let .music(player):
            player.stop()
        }
    }
}
I have an app that displays artwork via MPMediaItem.artwork, requesting an image with a specific size. How do I get a media item's MPMediaItemAnimatedArtwork, and how do I get the preview image and video to display to the user?
Hello there!
Is there any list of voices that are always available on iOS/iPadOS devices?
It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices.
I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true?
I also noticed that on the same iPad where I was using those two voices (Nicky and Aaron), the voices were no longer available after I updated to the iPadOS 26 beta.
Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
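In case a static list doesn't exist, here is a minimal runtime check (the identifiers are the ones from above; AVSpeechSynthesisVoice.speechVoices() returns only the voices actually installed on the current device and OS version):

import AVFoundation

// Check at runtime which of the expected voice identifiers are installed.
let wantedIdentifiers = [
    "com.apple.voice.compact.en-US.Samantha",
    "com.apple.ttsbundle.siri_Nicky_en-US_compact",
    "com.apple.ttsbundle.siri_Aaron_en-US_compact",
]
let installed = Set(AVSpeechSynthesisVoice.speechVoices().map(\.identifier))
for identifier in wantedIdentifiers {
    print(identifier, installed.contains(identifier) ? "available" : "not installed")
}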
Hi. I work on an audio app for iOS which is successfully using the MPRemoteCommandCenter for commands like next, back, skip forward, skip backward etc.
I am trying to implement playback rate controls in my app (so that users can change the playback speed of audio to 0.5x or 2x for example).
While the above commands work, the changePlaybackRateCommand does not seem to. I have enabled the command, given it a target/handler, and set supported rates. With the other commands, this caused the UI to change on the lock screen, in Command Center, etc., by adding a control for the command (e.g. a next button for the next command). However, it does not seem to do anything for the playback rate command.
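For reference, the setup described above looks roughly like this (a minimal sketch; `player` and the rate list are illustrative, not from the original post):

import AVFoundation
import MediaPlayer

// Sketch: enable the change-playback-rate command and keep
// the now-playing info in sync with the applied rate.
func configurePlaybackRateCommand(for player: AVPlayer) {
    let command = MPRemoteCommandCenter.shared().changePlaybackRateCommand
    command.isEnabled = true
    command.supportedPlaybackRates = [0.5, 1.0, 1.5, 2.0]
    command.addTarget { event in
        guard let event = event as? MPChangePlaybackRateCommandEvent else { return .commandFailed }
        player.rate = event.playbackRate
        var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
        info[MPNowPlayingInfoPropertyPlaybackRate] = event.playbackRate
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
        return .success
    }
}

Even with this wiring in place, the system UIs may or may not surface a rate control, which is exactly the behavior in question here.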
I can implement my own "rate button" UI and rate change handling, but I'm wondering if this is a known bug within Apple? Looking online, it seems other people face the same issue and haven't been able to get this command to work. Why is this API provided if it doesn't seem to do anything? Is there something I'm missing?
Kind regards.
Among the millions of users of our online product, our metrics show that the rate of silent (all-zero) audio captures on iPadOS 18.4.1 and 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered anything similar? The parameters we use are as follows:
AudioSession:
category:AVAudioSessionCategoryPlayAndRecord
mode:AVAudioSessionModeDefault
option:77
preferredSampleRate:48000.000000
preferredIOBufferDuration:0.010000
AudioUnit
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
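Since we cannot reproduce the issue locally, one way to narrow it down is to instrument the capture path. Below is a sketch (in Swift; the function name is ours) of a cheap all-zero check that could run on captured buffers in the RemoteIO input callback, matching the packed 16-bit signed-integer PCM format above:

import AudioToolbox

// Returns true when every 16-bit sample in the AudioBufferList is zero.
// Assumes the packed 16-bit signed-integer PCM format configured above.
func bufferIsSilent(_ ablPointer: UnsafeMutablePointer<AudioBufferList>) -> Bool {
    for buffer in UnsafeMutableAudioBufferListPointer(ablPointer) {
        guard let data = buffer.mData else { continue }
        let sampleCount = Int(buffer.mDataByteSize) / MemoryLayout<Int16>.size
        let samples = data.bindMemory(to: Int16.self, capacity: sampleCount)
        for i in 0..<sampleCount where samples[i] != 0 {
            return false
        }
    }
    return true
}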
Hi all,
I've developed an audio DSP application in C++ using AudioToolbox and CoreAudio on macOS 14.4.1 with Xcode 15.
I use an AudioQueue for input and another for output. This works great.
I'm now adding real-time audio analysis, e.g. spectral analysis. I want this to run independently of my audio processing so it cannot interfere with audio playback. Taps on AudioQueues seem to be a good way of doing this...
Since the analytics won't modify the audio data, I am using a Siphon Tap by setting the AudioQueueProcessingTapFlags to
kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon;
This works fine on my output queue. However, on my input queue the Tap callback is called once and then an EXC_BAD_ACCESS occurs - screenshot below.
NB: I believe that a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a Siphon, so I don't call it.
Relevant code:
// (Function head reconstructed; the first line of the snippet was cut off.)
void make_tap(AudioQueueRef queue_ref,
              AudioQueueProcessingTapCallback tap_callback) {
    // Makes an audio tap for a queue
    void * tap_data_ptr = NULL;
    AudioQueueProcessingTapFlags tap_flags =
        kAudioQueueProcessingTap_PostEffects
        | kAudioQueueProcessingTap_Siphon;
    uint32_t max_frames = 0;
    AudioStreamBasicDescription asbd;
    AudioQueueProcessingTapRef tap_ref;
    OSStatus status = AudioQueueProcessingTapNew(queue_ref,
                                                 tap_callback,
                                                 tap_data_ptr,
                                                 tap_flags,
                                                 &max_frames,
                                                 &asbd,
                                                 &tap_ref);
    if (status != noErr) printf("Error while making Tap\n");
    else printf("Successfully made tap\n");
}
void tapper(void * tap_data,
            AudioQueueProcessingTapRef tap_ref,
            uint32_t number_of_frames_in,
            AudioTimeStamp * ts_ptr,
            AudioQueueProcessingTapFlags * tap_flags_ptr,
            uint32_t * number_of_frames_out_ptr,
            AudioBufferList * buf_list) {
    // Callback function for audio queue tap
    printf("Tap callback");
}
Image of the exception stack provided by Xcode: (screenshot not reproduced here)
What have I missed?
Appreciate any help you learned folks may be able to provide.
Best,
Geoff.
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
The device is connected to a 2.4GHz Wi-Fi network.
A Bluetooth audio accessory is connected (e.g. headset).
AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active.
However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again.
We confirmed this behavior not only in our app but also using Apple's built-in Voice Memos app, suggesting this is not specific to our implementation.
It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation with Wi-Fi and AVAudioSession interaction?
Test Environment:
Device: iPhone 13 mini
iOS: 17.5.1
Wi-Fi: 2.4GHz band
Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
Hi. I am working on an audio app for iOS. I have added the CPNowPlayingPlaybackRateButton to my CPNowPlayingTemplate.
When the button is clicked, my handler changes the rate in the AVPlayer and updates the MPNowPlayingInfoCenter to the new rate, for example, 2.0.
Throughout, the CarPlay button always displays "0x". I am wondering how to get this UI to accurately reflect the playback rate the user has selected, as always displaying 0x is a poor user experience.
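For reference, the now-playing update in the handler is roughly the following (a sketch; one detail worth checking is that the system extrapolates the playback position from the rate together with the elapsed time, so both are set here):

import AVFoundation
import MediaPlayer

// Sketch: push the user-selected rate into the now-playing info.
func updateNowPlayingRate(_ rate: Double, player: AVPlayer) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyPlaybackRate] = rate
    info[MPNowPlayingInfoPropertyDefaultPlaybackRate] = 1.0
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}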
You may suggest MPChangePlaybackRateCommand is relevant here, but I have not been able to get that to work either, and judging by posts online, not many other people have either. I have made a post about that here: https://developer.apple.com/forums/thread/773099
Is this a known Apple bug? Is there a way to get the UI to accurately reflect the playback rate of my audio?
Kind regards.
Good day, ladies and gents.
I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.)
I'd eventually be able to figure it out reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice.
Here's the code used to set up the AudioUnit:
-(NSString*) configureAU
{
    AudioComponent component = NULL;
    AudioComponentDescription description;
    OSStatus err = noErr;
    UInt32 param;
    AURenderCallbackStruct callback;

    if( audioUnit ) { AudioComponentInstanceDispose( audioUnit ); audioUnit = NULL; }  // was CloseComponent

    // Open the AudioOutputUnit
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_HALOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags = 0;
    description.componentFlagsMask = 0;
    if( (component = AudioComponentFindNext( NULL, &description )) )
    {
        err = AudioComponentInstanceNew( component, &audioUnit );
        if( err != noErr ) { audioUnit = NULL; return [NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err]; }
    }

    // Configure the AudioOutputUnit:
    // You must enable the Audio Unit (AUHAL) for input and output for the same device.
    // When using AudioUnitSetProperty the 4th parameter refers to an AudioUnitElement.
    // When using an AudioOutputUnit for input, the input element is '1' and the output element is '0'.
    param = 1;  // Enable input on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32) );
    chkerr("Couldn't set first EnableIO prop (enable input) (ID=%d)");
    param = 0;  // Disable output on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32) );
    chkerr("Couldn't set second EnableIO property on the audio unit (disable output) (ID=%d)");

    // Select the default input device
    param = sizeof(AudioDeviceID);
    AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID );
    chkerr("Couldn't get default input device (ID=%d)");

    // Set the current device to the default input device
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
    chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

    // Set up the input callback, to be called when the AUHAL has input data
    callback.inputProc = AudioInputProc;
    callback.inputProcRefCon = self;
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
    chkerr("Could not install input callback on our AudioUnit (ID=%d)");

    // Get the hardware device format
    param = sizeof(AudioStreamBasicDescription);
    err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param );
    chkerr("Could not get the hardware stream format (ID=%d)");

    // Twiddle the format to our liking
    audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 );
    actualOutputFormat.mChannelsPerFrame = audioChannels;
    actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
    actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
    actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
        actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
    actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
    actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
    actualOutputFormat.mFramesPerPacket = 1;
    actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

    // Set the AudioOutputUnit output data format
    err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription) );
    chkerr("Could not change the stream format of the output device (ID=%d)");

    // Get the number of frames in the IO buffer(s)
    param = sizeof(UInt32);
    err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param );
    chkerr("Could not determine audio sample size (ID=%d)");

    // Initialize the AU
    err = AudioUnitInitialize( audioUnit );
    chkerr("Could not initialize the AudioUnit (ID=%d)");

    // Allocate our audio buffers
    audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame size: audioSamples * actualOutputFormat.mBytesPerFrame];
    if( audioBuffer == NULL ) { [self cleanUp]; return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err]; }

    return nil;
}
(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.)
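For the "is output active" part, a possible starting point is the kAudioDevicePropertyDeviceIsRunningSomewhere property of the default output device (sketch below in Swift; it maps directly onto the same AudioObjectGetPropertyData C API used above). Capturing the output stream itself is a separate problem: historically that needed a loopback device, and newer macOS releases added Core Audio process taps.

import CoreAudio

// Sketch: ask Core Audio whether the default output device is currently
// running, i.e. some process is playing audio through it.
// (kAudioObjectPropertyElementMain was kAudioObjectPropertyElementMaster before macOS 12.)
func defaultOutputIsRunning() -> Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var deviceID = AudioDeviceID(0)
    var size = UInt32(MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &size, &deviceID) == noErr
    else { return false }

    address.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere
    var isRunning: UInt32 = 0
    size = UInt32(MemoryLayout<UInt32>.size)
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &isRunning) == noErr
    else { return false }
    return isRunning != 0
}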
Thanks for your attention! ==Dave
[p.s. if i get more than one useful answer, can i "Accept" more than one, to spread the credit around?]
{pps: of course, the code lines up prettier in a monospaced font!}
Hi there!
We have a suite of AudioUnit v2 plugins that have been shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades, so we need a way to update these plugins to request MIDI from Logic (and other AU hosts) but avoid changing our AU type and subtype so we don't break existing sessions. Any ideas on how we can do this?
Hi. I am working on an audio app for iOS. I have implemented UI and handling which allows the user to change playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This is working well on device.
When I use AirPlay, it works for some devices and not for others. Some devices won't change playback rate and will always play at 1x speed.
Is this possibly a limitation of those third-party devices? Or is there something I'm missing or should check? I'd love to get playback rate changes working across all AirPlay devices with our app.
Kind regards.
I prefer to use the album fetched from the library instead of the catalog, since this is faster. When doing so, how can I check whether all tracks of an album have been added to the library? If they haven't, I'd like to fetch the catalog version or throw an error (for example, when offline).
Using .with(.tracks) on the library album fetches the tracks added to the library.
The trackCount property is referring to the tracks that can be fetched from the library.
The isComplete property is always nil when fetching from the library.
One possible way is checking the trackNumber and discCount properties. However, this only detects missing tracks when a track that was not added comes before one that was; tracks missing after the last added one go unnoticed. I'd like to be able to handle this edge case as well.
Is there currently a way to do this? I'd prefer to not rely on the apple music catalog for this since this is supposed to work offline as well. Fetching and storing all trackIDs when connected and later comparing against these would work, but this would potentially mean storing tens of thousands of track ids.
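To make the trackNumber heuristic concrete, here is a sketch of the gap check as pure logic over whatever disc/track numbers the library tracks expose; as noted above, it cannot catch tracks missing after the last one present:

// Sketch: returns false if any disc has a gap before its last present track.
// Input: (discNumber, trackNumber) pairs of the album's library tracks.
func libraryTracksLookContiguous(_ numbers: [(disc: Int, track: Int)]) -> Bool {
    let byDisc = Dictionary(grouping: numbers, by: { $0.disc })
    for discTracks in byDisc.values {
        let trackNumbers = discTracks.map { $0.track }.sorted()
        guard !trackNumbers.isEmpty else { continue }
        // Contiguous 1...n means no gap before the last track we have.
        if trackNumbers != Array(1...trackNumbers.count) { return false }
    }
    return true
}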
Thank you
Hi, I'm trying to plan out development of an app and am wondering whether it is possible to have user-generated content automatically populate a custom ShazamKit catalog, and to query this catalog non-locally?
Storing all the submissions locally would obviously not scale.
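For the local half, the building blocks are SHCustomCatalog and SHSession (sketch below); ShazamKit matches custom catalogs on-device, so the non-local querying would need a backend of your own that stores user-submitted signatures and serves catalogs (or match results) to clients.

import ShazamKit

// Sketch: build a custom catalog from reference signatures and
// create a session that matches against it (on-device).
func makeSession(signatures: [(SHSignature, SHMediaItem)]) throws -> SHSession {
    let catalog = SHCustomCatalog()
    for (signature, item) in signatures {
        try catalog.addReferenceSignature(signature, representing: [item])
    }
    return SHSession(catalog: catalog)
}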
Since the last update to iOS 26.0 (23A5276f), my AirPods connect to my iPhone but the audio still plays through the phone. The Bluetooth icon shows them as paired.
Does an artist similarity station broaden selection variety compared to a song similarity station?
You don't have to answer if it is against nondisclosure terms.
Is there a way to permanently disable PHASE SDK logging? It seems to be a lot chattier than Apple's other SDKs.
While developing a RealityKit app that uses AudioPlaybackController, I must manually hide the PHASE SDK log output several times each day so I can see my app's log messages.
Thank you.
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronizing two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However on some network streams, on iOS, the audio and video aren't synchronized, with a time difference that seems to increase with time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read, about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
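In case it helps to quantify the problem, here is a small diagnostic sketch (not a fix) that logs how the synchronizer's timebase drifts relative to the host clock over time:

import AVFoundation
import CoreMedia
import QuartzCore

// Sketch: once playback is running, log the offset between media time
// elapsed on the synchronizer's timebase and wall-clock time elapsed.
func startDriftLogging(for synchronizer: AVSampleBufferRenderSynchronizer) -> Timer {
    let startMedia = CMTimebaseGetTime(synchronizer.timebase).seconds
    let startHost = CACurrentMediaTime()
    return Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
        let mediaElapsed = CMTimebaseGetTime(synchronizer.timebase).seconds - startMedia
        let hostElapsed = CACurrentMediaTime() - startHost
        print(String(format: "timebase vs host clock: %+.3f s", mediaElapsed - hostElapsed))
    }
}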
So, any help / hints on this sync problem will be greatly appreciated! :)