Currently, I have successfully used ChannelMap to map hardware input channels and obtain audio data from the hardware device's MIC and OTG inputs. I have also used ChannelMap to map output channels so that I can feed arbitrary data to each output channel for playback. However, I now have a problem.
I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels for modification?
Hi everyone 👋
I’m building an iOS app in Swift where I want to do the following:
Record the user’s voice
Transcribe the spoken sentence (speech-to-text)
Auto-detect the spoken language
Translate it to another language selected by the user (e.g., English → Spanish or Hindi → English)
Speak back (text-to-speech) the translated text on the same device
Is it possible to record via the phone's mic and play the translated speech back through the headphones?
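For what it's worth, the speech-to-text and text-to-speech legs of that pipeline can be sketched with SFSpeechRecognizer and AVSpeechSynthesizer roughly as below. The translation step and the function name transcribeThenSpeak are placeholders for whatever the app actually uses, not an established API, and the sketch assumes microphone and speech-recognition permissions are already granted.

import Speech
import AVFoundation

// Keep a strong reference so speech isn't cut off mid-utterance.
let synthesizer = AVSpeechSynthesizer()

// Transcribe a finished recording, then speak the translated text.
// `translate` stands in for whatever translation step the app uses.
func transcribeThenSpeak(recordingURL: URL,
                         targetVoice: String,
                         translate: @escaping (String) -> String) {
    let recognizer = SFSpeechRecognizer() // current locale; pass a Locale to force one
    let request = SFSpeechURLRecognitionRequest(url: recordingURL)

    recognizer?.recognitionTask(with: request) { result, _ in
        guard let result, result.isFinal else { return }
        let transcript = result.bestTranscription.formattedString

        let utterance = AVSpeechUtterance(string: translate(transcript))
        utterance.voice = AVSpeechSynthesisVoice(language: targetVoice) // e.g. "es-ES"
        synthesizer.speak(utterance)
    }
}

Routing output to headphones is then a matter of the session configuration (e.g. a .playAndRecord category) rather than anything in the recognition or synthesis calls.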
My app utilizes background audio to play music files. I have the audio background mode enabled and I initialize the AVAudioSession in playback mode with the mixWithOthers option. And it usually works great while the app is backgrounded. I listen for audio interruptions as well as route changes and I am able to handle them appropriately and I can usually resume my background audio no problem.
I discovered an issue while connected to CarPlay though. Roughly 50% of the time when I disconnect from a phone call while connected to CarPlay I get the following error after calling the play() method of my AVAudioPlayer instance:
"ATAudioSessionClientImpl.mm:281 activation failed. status = 561015905"
If I instead try to start a new audio session I get a similar error:
Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
Like I said, this isn't reproducible 100% of the time and has so far only been seen while connected to CarPlay. I don't think I'm forgetting some additional capability or plist setting, but if anyone has any clues it would be greatly appreciated. Otherwise this is likely just a bug that I need to report to Apple.
One very important note, and reason I believe it's just a bug, is that while I was testing I found that other music apps like Spotify will also fail to resume their audio at the same time my app fails.
Another important detail is that when it works successfully I receive the audio session interruption-ended notification, and when it doesn't work I only receive a route configuration change or route override notification. From there I am still successfully granted background time to execute code, but my call to resume audio fails with the error codes mentioned above.
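For comparison with anyone else hitting this, the interruption handling in question is roughly the following sketch (simplified; reactivating the session inside the .ended branch is where the error above surfaces):

import AVFoundation

final class PlaybackController {
    private let player: AVAudioPlayer

    init(player: AVAudioPlayer) {
        self.player = player
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance())
    }

    @objc private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

        if type == .ended {
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            if options.contains(.shouldResume) {
                do {
                    try AVAudioSession.sharedInstance().setActive(true)
                    player.play()
                } catch {
                    // 561015905 is '!pla' (AVAudioSession.ErrorCode.cannotStartPlaying),
                    // which is what appears after some CarPlay call disconnects.
                    print("Failed to reactivate session: \(error)")
                }
            }
        }
    }
}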
Hi. I work on an audio app for iOS which is successfully using the MPRemoteCommandCenter for commands like next, back, skip forward, skip backward etc.
I am trying to implement playback rate controls in my app (so that users can change the playback speed of audio to 0.5x or 2x for example).
While the above commands work, the changePlaybackRateCommand does not seem to. I have enabled the command, given it a target/handler and set supported rates. With the other commands, this caused the UI to change on lock screen, in command center etc, by adding the control for the command (a next button for the next command for example). However, it does not seem to do anything for the playback rate command.
I can implement my own "rate button" UI and rate-change handling, but I'm wondering if this is a known bug on Apple's side? Looking online, it seems other people face the same issue and haven't been able to get this command to work. Why is this API provided if it doesn't seem to do anything? Is there something I'm missing?
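For reference, the setup being described is along these lines (a sketch with example rates, assuming an AVPlayer-based player; adapt the rate assignment to whatever player the app uses):

import MediaPlayer
import AVFoundation

func configurePlaybackRateCommand(for player: AVPlayer) {
    let command = MPRemoteCommandCenter.shared().changePlaybackRateCommand
    command.isEnabled = true
    command.supportedPlaybackRates = [0.5, 1.0, 1.5, 2.0]
    command.addTarget { event in
        guard let event = event as? MPChangePlaybackRateCommandEvent else { return .commandFailed }
        player.rate = event.playbackRate
        // Keep now-playing info in sync so system UI reflects the new rate.
        var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
        info[MPNowPlayingInfoPropertyPlaybackRate] = event.playbackRate
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
        return .success
    }
}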
Kind regards.
Hi there,
I recently launched a DJ app on the Mac App Store and was wondering how I could access songs from Apple Music for mixing purposes, just like Serato, rekordbox, djay, and other DJ apps do?
Thanks,
Gunek
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.
Current Attempt & Observations
I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer.
This works consistently when the app is active, but in the background, it only works about 60% of the time.
In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio.
Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time from my limited testing, even in the background.
The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.
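For context, the playback path just described amounts to roughly this sketch (class and method names are illustrative):

import AVFoundation

final class AnnouncementPlayer {
    private var player: AVPlayer?

    // Called from the silent-push handler with the URL delivered by the server.
    func play(url: URL) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, options: [.duckOthers])
        try session.setActive(true)

        player = AVPlayer(url: url)
        player?.play()
    }

    // Deactivate with notification so other apps' audio returns to full volume.
    func finish() throws {
        player = nil
        try AVAudioSession.sharedInstance().setActive(false,
                                                      options: [.notifyOthersOnDeactivation])
    }
}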
Best Approach to Achieve This?
I’d like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
Ensuring the app remains active in the background – Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
Alternative triggering mechanisms – Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
Built-in iOS speech synthesis (AVSpeechSynthesizer) – If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
Streaming audio instead of sending a URL – Could continuous streaming from the server keep the app active and allow playback at the right moment?
I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated.
Thank you for your time and guidance.
I'm using AVFoundation to build a multi-track editor app that can insert multiple tracks and clips, including scaling some clips to change their playback speed (I'm also not sure whether AVFoundation is the best choice for this). After scaling a clip with the scaleTimeRange API, there is a short burst of noise during playback. Also, playback of the AVMutableComposition via AVPlayer with an AVPlayerItem is sometimes fine, but after exporting with AVAssetReader the resulting file contains short bursts of noise. I'm not sure why.
Here is the example project, which can build and run directly. https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
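For context, the scaling step being described corresponds roughly to the sketch below (simplified; the linked project contains the full composition setup, and the half-speed factor is just an example):

import AVFoundation

// Stretch one clip of a composition track to half speed (2x duration).
// `track` is an AVMutableCompositionTrack that already contains the clip.
func slowDownClip(on track: AVMutableCompositionTrack,
                  clipStart: CMTime,
                  clipDuration: CMTime) {
    let original = CMTimeRange(start: clipStart, duration: clipDuration)
    let stretched = CMTimeMultiply(clipDuration, multiplier: 2)
    track.scaleTimeRange(original, toDuration: stretched)
}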
According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract   The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0, the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume?
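If it helps while this is clarified, a defensive approach is to clamp to the documented range before assigning (a sketch, assuming the property in question is AVAudioMixerNode.outputVolume on the engine's main mixer):

import AVFoundation

let engine = AVAudioEngine()

// Clamp to the documented range; values above 1.0 are accepted at runtime
// and do play louder, but are outside what the header promises.
func setMixerOutputVolume(_ value: Float) {
    engine.mainMixerNode.outputVolume = min(max(value, 0.0), 1.0)
}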
Thanks
I need to implement a solution, through an API or a custom driver, to completely block the built-in speakers and microphone of a Mac, because I need other apps to use specified external devices for audio input and output. Is there a way to achieve this? What I mean is that even in System Settings it should not be possible to choose the built-in microphone and speakers; only my external device can be used.
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.
Observed behavior:
When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
The app is then moved to the background.
While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).
From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.
According to Apple’s documentation for AVAudioSession.outputVolume:
“The systemwide output volume set by the user.”
https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume
However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior seems to differ from this description.
Questions:
The documentation states that outputVolume represents the system-wide volume set by the user. In our testing, the value does not reflect volume changes made while the app is in the background and only updates when the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume?
Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?
Any clarification on the intended behavior or recommended handling would be greatly appreciated.
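One pattern worth testing, though not confirmed to get around the staleness described above, is to combine KVO on outputVolume with a re-read when the app returns to the foreground (a sketch; whether the didBecomeActive re-read returns the updated value is exactly what is in question):

import AVFoundation
import UIKit

final class VolumeMonitor {
    private var volumeObservation: NSKeyValueObservation?
    private var foregroundObserver: NSObjectProtocol?
    private(set) var lastKnownVolume = AVAudioSession.sharedInstance().outputVolume

    init() {
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(true)

        // KVO fires for volume changes made while the app is in the foreground.
        volumeObservation = session.observe(\.outputVolume, options: [.new]) { [weak self] _, change in
            if let volume = change.newValue { self?.lastKnownVolume = volume }
        }

        // Re-read on return to foreground; whether this picks up changes made
        // while backgrounded is what the testing above calls into question.
        foregroundObserver = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.lastKnownVolume = AVAudioSession.sharedInstance().outputVolume
        }
    }
}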
Hi,
I’m an iOS developer building an app with a use case that needs advanced playback of Apple Music subscription streams, specifically:
• Real-time tempo change (BPM) during playback — i.e., time-stretch with key-lock, not just crossfade.
• Beat-matched transitions between tracks.
From what I can tell, this capability seems to exist only for approved partners and isn’t available through public MusicKit.
Question: What’s the official request path to be evaluated for that restricted partner entitlement (application form, questionnaire, NDA, or internal team/BD contact)? If the entitlement identifier is internal, how can I get my account routed to the right Apple Music team?
For reference, publicly announced partners include Algoriddim djay, Serato DJ Pro, rekordbox (AlphaTheta), and Engine DJ—all of which appear to implement mixing features that imply advanced playback (tempo/beat-matching) on Apple Music content. I’d prefer not to share product details publicly for the moment and can provide specifics privately if needed.
Thanks in advance!
ApplicationMusicPlayer is not available on watchOS, though it is on all other platforms. Is there a technical reason for that, such as battery life? The same goes for SystemMusicPlayer and MPMusicPlayerController. I have already filed feedback for this.
I prefer to use the album fetched from the library instead of the catalog, since this is faster. If I do so, how can I check whether all tracks of an album have been added to the library? If they haven't, I'd like to fetch the catalog version or throw an error (for example, when offline).
Using .with(.tracks) on the library album fetches the tracks added to the library.
The trackCount property is referring to the tracks that can be fetched from the library.
The isComplete property is always nil when fetching from the library.
One possible way is checking the trackNumber and discCount properties. However, this only detects missing tracks when a song that is not in the library precedes one that is; a missing track after the last added one goes unnoticed. I'd like to be able to handle this edge case as well.
Is there currently a way to do this? I'd prefer to not rely on the apple music catalog for this since this is supposed to work offline as well. Fetching and storing all trackIDs when connected and later comparing against these would work, but this would potentially mean storing tens of thousands of track ids.
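For comparison, the trackNumber heuristic mentioned above can be sketched as follows (assuming a single-disc library album already loaded with .tracks, and that Track exposes trackNumber as described; as noted, it cannot detect a missing final track):

import MusicKit

// Heuristic: the library tracks of an album look complete if their track numbers
// form a contiguous 1...n run. Group by disc first for multi-disc sets.
func tracksLookContiguous(_ album: Album) -> Bool {
    guard let tracks = album.tracks, !tracks.isEmpty else { return false }
    let numbers = tracks.compactMap { $0.trackNumber }.sorted()
    guard numbers.count == tracks.count else { return false }
    return numbers == Array(1...numbers.count)
}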
Thank you
Hi,
I need to read whether the transport is playing or stopped, but my current method, which works for the VST version, does not work for the AU version.
Is there a Logic Pro (LPX) resource available for developers anywhere?
// Poll the host's play head and start/stop timers when the transport state changes.
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;

    // getCurrentPosition() is deprecated in recent JUCE versions in favour of getPosition().
    if (playHead->getCurrentPosition (posInfo))
    {
        const bool isCurrentlyPlaying = posInfo.isPlaying;

        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            wasTransportPlaying = isCurrentlyPlaying;

            if (isCurrentlyPlaying)
                startAllTimers();
            else
                stopAllTimers();
        }
    }
}
thanks :)
Recently, after updating to macOS Tahoe 26.3, the ordering in my music library has been horrible at best: music disappearing, and tracks not aligning with their albums (even when the albums are from different years).
It's created quite a problem, because the disappearing tracks issue seems to be replicating to iOS devices as well (although track numbering and album association seem to be stable). Has anyone else heard of this issue?
Hi, I'm trying to plan out development of an app and am wondering whether it is possible to have user-generated content automatically populate a custom ShazamKit catalogue, and to query this catalogue non-locally?
Storing all the submissions locally would obviously not scale.
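For what it's worth, building the custom catalogue itself from a submission is straightforward with ShazamKit, as in the sketch below; the open question is the hosting/remote-query part, since SHSession matches against a catalogue that is present on the device:

import ShazamKit

// Build a custom catalog from a user-submitted signature.
// The catalog lives on-device; distributing it (e.g. downloading a prebuilt
// catalog file from your server) is up to the app.
func makeCatalog(signature: SHSignature, title: String) throws -> SHCustomCatalog {
    let item = SHMediaItem(properties: [.title: title])
    let catalog = SHCustomCatalog()
    try catalog.addReferenceSignature(signature, representing: [item])
    return catalog
}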
Hello everyone,
I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes.
Following AVFoundation documentation, I’m configuring my audio session like this:
// Configure the shared audio session for playback that ducks other audio.
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)

// Wire the player node into the engine before activating the session.
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)
When it’s time to play cues, I schedule playback on a DispatchQueue:
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
do {
try audio.engine.start()
audio.node.play()
for sample in interval.samples {
audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
}
} catch {
print("Audio activation failed: \(error)")
}
}
This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905.
Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected.
I have added the required background audio mode to my Info.plist file by including the key UIBackgroundModes with the value audio.
Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background?
Any advice or pointers would be greatly appreciated!
Hi. I am working on an audio app for iOS. I have implemented UI and handling which allows the user to change playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This is working well on device.
When I use AirPlay, it works for some devices and not for others. Some devices won't change the playback rate and will always play at 1x speed.
Is this possibly a limitation of those third-party devices? Or is there something I'm missing or should check? I would love to get playback-rate changes working across all AirPlay devices with our app.
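For completeness, the rate change on our side is essentially the sketch below (defaultRate requires iOS 16+); whether a given third-party AirPlay receiver honours a non-1x rate seems to be what varies:

import AVFoundation

// Apply a user-selected speed to an AVQueuePlayer.
// defaultRate keeps the speed across item transitions.
func setPlaybackSpeed(_ speed: Float, on player: AVQueuePlayer) {
    if #available(iOS 16.0, *) {
        player.defaultRate = speed
    }
    player.rate = speed
}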
Kind regards.
My app Balletrax is a music player for people to use while they teach ballet. It used to be that you could silence notifications during use, but now the customer seems to need to know how to use Focus mode, remember to turn it on and off, and configure which notifications they do and don't want to allow. Is there no way to silence all notifications while the app is in use?