Hi everyone 👋
I’m building an iOS app in Swift where I want to do the following:
Record the user’s voice
Transcribe the spoken sentence (speech-to-text)
Auto-detect the spoken language
Translate it to another language selected by the user (e.g., English → Spanish or Hindi → English)
Speak back (text-to-speech) the translated text on the same device
Is it possible to record via the phone mic and play the synthesized, translated speech through the headphones' audio?
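Here's a minimal sketch of the pipeline I have in mind, assuming a known source locale (SFSpeechRecognizer doesn't auto-detect languages, so that part would need a separate detection step or service) and with translation stubbed out:

import Speech
import AVFoundation

let synthesizer = AVSpeechSynthesizer() // keep a strong reference so playback isn't deallocated mid-utterance

// Transcribe a recorded file, then speak the (placeholder) translation.
// The locale identifiers and the translate(_:) hook are assumptions.
func transcribeAndSpeak(fileURL: URL) {
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "hi-IN"))
    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    _ = recognizer?.recognitionTask(with: request) { result, _ in
        guard let result, result.isFinal else { return }
        let sourceText = result.bestTranscription.formattedString
        let translated = translate(sourceText) // placeholder for your translation service
        let utterance = AVSpeechUtterance(string: translated)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}

func translate(_ text: String) -> String { text } // stub

Routing mic in / headphones out would be a matter of an AVAudioSession .playAndRecord category with suitable options; speech recognition also requires the NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription Info.plist keys.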
Hello, I'm working on a MusicKit-based SwiftUI app. I've integrated AirPlay using AVRoutePickerView like so:
struct UIKitAirPlayPickerView: UIViewRepresentable {
    func makeUIView(context: Context) -> AVRoutePickerView {
        let routePickerView = AVRoutePickerView()
        routePickerView.prioritizesVideoDevices = false
        return routePickerView
    }

    func updateUIView(_ uiView: AVRoutePickerView, context: Context) {}
}
The AirPlay menu appears as expected, and selecting an AirPlay device functions as expected. I'm currently sending audio from my app to a HomePod. However, the state of the AVRoutePickerView does not reflect the playback state. There is no cover art and it says "Not Playing". When my device is locked, my lock screen shows the album art, metadata and AirPlay routing as expected.
My app uses ApplicationMusicPlayer; however, I encounter the same behavior using SystemMusicPlayer.
Any guidance on how to troubleshoot this? Is there any other way to integrate the system AirPlay picker into my app, or is this my only option?
Thank you for reading.
Hi,
I'm still stuck getting a basic record-with-playthrough pipeline to work.
Does anyone have a sample of setting up an AVAudioEngine pipeline for recording with playthrough?
Playthrough works with an AVPlayerNode as input, but not with any microphone input. The docs mention the "enabled state" of the engine's outputNode without explaining the concept, i.e. how to enable an output:
When the engine renders to and from an audio device, the AVAudioSession category and the availability of hardware determines whether an app performs output. Check the output node’s output format (specifically, the hardware format) for a nonzero sample rate and channel count to see if output is in an enabled state.
Well, in my setup the output is NOT enabled, and any attempt to switch (e.g. audioEngine.outputNode.auAudioUnit.setDeviceID(deviceID)), attach a dedicated device, etc. results in exceptions/errors.
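For comparison, here's the minimal playthrough sketch I'd expect to work on iOS (the session category and options are assumptions; on macOS the AVAudioSession calls don't apply and the device is chosen via the hardware route):

import AVFoundation

// Route the microphone straight to the output. The engine must be kept alive
// for as long as playthrough should run.
let engine = AVAudioEngine()

func startPlaythrough() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)

    let input = engine.inputNode
    // Hardware format; a zero sample rate/channel count here means input is disabled.
    let format = input.outputFormat(forBus: 0)
    engine.connect(input, to: engine.mainMixerNode, format: format)
    try engine.start()
}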
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?
Among the millions of users of our online product, we have identified through data metrics that the rate of silent audio capture on iPadOS 18.4.1 and 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered something similar? The parameters we use are as follows:
AudioSession:
category:AVAudioSessionCategoryPlayAndRecord
mode:AVAudioSessionModeDefault
option:77
preferredSampleRate:48000.000000
preferredIOBufferDuration:0.010000
AudioUnit
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
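For reference, here's how a captured buffer could be flagged as silent on the client, matching the 16-bit interleaved capture format above (a sketch; the RMS threshold is an arbitrary assumption):

import AVFoundation
import Accelerate

// Flag a buffer as "silent" by its RMS level. Assumes the 16-bit interleaved
// PCM format configured above; the threshold is a placeholder.
func isSilent(_ buffer: AVAudioPCMBuffer, threshold: Float = 1e-4) -> Bool {
    guard let samples = buffer.int16ChannelData?[0] else { return true }
    let count = Int(buffer.frameLength) * Int(buffer.format.channelCount)
    guard count > 0 else { return true }
    var floats = [Float](repeating: 0, count: count)
    vDSP_vflt16(samples, 1, &floats, 1, vDSP_Length(count))   // Int16 -> Float
    var rms: Float = 0
    vDSP_rmsqv(floats, 1, &rms, vDSP_Length(count))           // root-mean-square
    return rms / 32768.0 < threshold
}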
I am building an app where I am using AudioRecordingIntent to start audio recording from Shortcuts, the Action button, etc. Whenever I set that up, I get the following error:
Unknown NSError Live Activity start failed: The operation couldn’t be completed. Target is not foreground
I explicitly try to start the Live Activity and then start the audio recording, and that's when I see this error.
How can I make this work? I am unable to find any examples.
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
The device is connected to a 2.4GHz Wi-Fi network.
A Bluetooth audio accessory is connected (e.g. headset).
AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active.
However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again.
We confirmed this behavior not only in our app but also with Apple's built-in Voice Memos app, suggesting it is not specific to our implementation.
It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation with Wi-Fi and AVAudioSession interaction?
Test Environment:
Device: iPhone 13 mini
iOS: 17.5.1
Wi-Fi: 2.4GHz band
Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
I have a Flutter iOS app that has some simple sound FX for button clicks, swipes, etc.
In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I install the app on my phone via Xcode I am using the release profile, so I don't see what the difference could be.
I have also gone through the archive that I uploaded and verified that the sound files are indeed there.
I have other Flutter apps that use sound, but none since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue.
Wondering if anyone else is seeing this issue, or if I'm missing a simple permission or something that has changed recently?
Thanks in advance
Hello,
I am wondering if it is possible to have audio from my AirPods sent to my speech-to-text service while, at the same time, the built-in mic's input is being recorded into a video?
I ask because I want my users to be able to say "CAPTURE" and I start recording a video (with audio from the built in mic) and then when the user says "STOP" I stop the recording.
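If it helps frame the question: AVAudioSession appears to expose a single active input route, which can at least be forced to the built-in mic while AirPods are connected (a sketch; whether a second, independent AirPods capture stream can run concurrently is exactly what I'm unsure about):

import AVFoundation

// Force capture from the built-in mic even while a Bluetooth headset is connected.
func preferBuiltInMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    if let builtIn = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtIn)
    }
    try session.setActive(true)
}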
Hello.
To determine whether the "AVB/EAV Mode" of an AV-capable network interface is turned on or off, I query the IO registry and evaluate the property "AVBControllerState".
I was wondering if this is the "correct" approach, and whether anything is known about the values of this property?
Network interfaces without AV capability may also carry this property (e.g. for my WiFi adapter the value is 1), whereas the value for interfaces with AV capability can be 0 or 3. At least as far as I could observe with the limited number of test devices at hand.
Is it safe to assume that a value of 3 means this feature is turned on, 0 that it is turned off and ignore values of 1?
Is there another approach to get to know the status of the "AVB/EAV Mode"?
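For reference, here's roughly how I'm reading the property (a sketch; matching on IONetworkInterface is my assumption, and the interpretation of the values is the open question):

import IOKit

// Collect "AVBControllerState" per network interface, keyed by BSD name.
// Requires macOS 12+ for kIOMainPortDefault.
func avbControllerStates() -> [String: Int] {
    var result: [String: Int] = [:]
    var iterator: io_iterator_t = 0
    guard IOServiceGetMatchingServices(kIOMainPortDefault,
                                       IOServiceMatching("IONetworkInterface"),
                                       &iterator) == KERN_SUCCESS else { return result }
    defer { IOObjectRelease(iterator) }

    var service = IOIteratorNext(iterator)
    while service != 0 {
        if let name = IORegistryEntryCreateCFProperty(service, "BSD Name" as CFString,
                                                      kCFAllocatorDefault, 0)?
                        .takeRetainedValue() as? String,
           let state = IORegistryEntryCreateCFProperty(service, "AVBControllerState" as CFString,
                                                       kCFAllocatorDefault, 0)?
                        .takeRetainedValue() as? Int {
            result[name] = state
        }
        IOObjectRelease(service)
        service = IOIteratorNext(iterator)
    }
    return result
}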
Thanks for any insight.
Best regards,
Ingo
Hi,
I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU.
Is there a Logic Pro resource available for developers anywhere?
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;
    if (playHead->getCurrentPosition(posInfo))
    {
        bool isCurrentlyPlaying = posInfo.isPlaying;
        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            wasTransportPlaying = isCurrentlyPlaying;
            if (isCurrentlyPlaying)
                startAllTimers();
            else
                stopAllTimers();
        }
    }
}
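For the AU side, I'm wondering if the intended route is the host's transportStateBlock on AUAudioUnit rather than a VST-style play head - a sketch of what I mean (assuming access to the underlying AUAudioUnit):

import AudioToolbox

// Query the host-provided transport state block and test the .moving flag.
// Hosts like Logic set transportStateBlock on the AUAudioUnit; it is meant
// to be called from the render thread.
func isHostTransportPlaying(_ audioUnit: AUAudioUnit) -> Bool {
    guard let transportState = audioUnit.transportStateBlock else { return false }
    var flags = AUHostTransportStateFlags()
    guard transportState(&flags, nil, nil, nil) else { return false }
    return flags.contains(.moving)
}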
thanks :)
I have an AUv3 that passes all validation and can be loaded into Logic Pro without issue. The UI for the plug-in can be any aspect ratio, but Logic insists on presenting it in a view with a fixed aspect ratio; that is, when resizing, both the height and width are resized together. I have never managed to work out what I need to specify to Logic to allow the user to resize width or height independently.
Can anyone tell me what I need to specify in the AU code to inform Logic that the view can be resized from any side of the window/panel?
I am trying to stream audio from the local filesystem.
For that, I am trying to use an AVAssetResourceLoaderDelegate with an AVURLAsset. However, the Content-Length is not known at the start. To work around this, I tried several approaches:
Setting contentLength to nil in the AVAssetResourceLoadingContentInformationRequest
Setting contentLength to -1 in the ContentInformationRequest
Both of these cause the AVPlayerItem to fail with an error.
I also tried setting Content-Length to INT_MAX together with renewalDate = Date(timeIntervalSinceNow: 5). However, that seems to be buggy. Even after updating the Content-Length to the correct value (say, X bytes) and finishing that loading request, the resource loader keeps receiving requests with requestedOffset = X and dataRequest.requestsAllDataToEndOfResource = true. These requests keep coming indefinitely, and as a result the next item in the queue does not get played. The .AVPlayerItemDidPlayToEndTime notification never fires either.
I wanted to check if this is an expected behavior or is there a bug in this implementation. Also, what is the recommended way to stream audio of unknown initial length from local file system?
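For clarity, the delegate has roughly this shape (a sketch; totalLength and availableData stand in for the real loader state, and the contentLength line is the part in question):

import AVFoundation

// Minimal shape of the resource loader delegate under discussion.
final class LocalStreamLoader: NSObject, AVAssetResourceLoaderDelegate {
    var totalLength: Int64 = -1      // unknown when loading starts
    var availableData = Data()       // bytes read from the local file so far

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let info = loadingRequest.contentInformationRequest {
            info.contentType = AVFileType.mp3.rawValue
            info.isByteRangeAccessSupported = true
            info.contentLength = totalLength                  // nil / -1 / INT_MAX were the attempts above
            info.renewalDate = Date(timeIntervalSinceNow: 5)
        }
        if let dataRequest = loadingRequest.dataRequest {
            dataRequest.respond(with: availableData)
            loadingRequest.finishLoading()
        }
        return true
    }
}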
Thanks!
Hi folks - I'm having trouble finding specific documentation about Audio Unit MIDI plug-ins - as in MIDI-only. Any suggestions welcome, as searches aren't returning much. (Too niche? User error?)
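The closest lead I've found is that MIDI-only units appear to be registered with the kAudioUnitType_MIDIProcessor component type - a sketch of the component description (the subtype and manufacturer codes are placeholders):

import AudioToolbox

// Four-character-code helper for the description below.
func fourCC(_ s: String) -> OSType {
    s.utf8.reduce(0) { ($0 << 8) | OSType($1) }
}

// Hypothetical component description for a MIDI-only ('aumi') Audio Unit.
let midiProcessorDescription = AudioComponentDescription(
    componentType: kAudioUnitType_MIDIProcessor,
    componentSubType: fourCC("mprc"),
    componentManufacturer: fourCC("Demo"),
    componentFlags: 0,
    componentFlagsMask: 0
)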
Hello everyone,
I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes.
Following AVFoundation documentation, I’m configuring my audio session like this:
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)
When it’s time to play cues, I schedule playback on a DispatchQueue:
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}
This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905.
Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected.
I have added the required background audio mode to my Info plist file by including the key UIBackgroundModes with the value audio.
Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background?
Any advice or pointers would be greatly appreciated!
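One pattern worth noting, based on my observation above that audio already playing before backgrounding keeps working: error 561015905 decodes to '!pla' (AVAudioSession.ErrorCode.cannotStartPlaying), which suggests the session can't be (re)started from the background. A hedged sketch of starting the engine in the foreground and keeping it running, so nothing has to start in the background (names are placeholders):

import AVFoundation

final class BackgroundCueScheduler {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    // Call while the app is still in the foreground; the engine then stays
    // running, and later cues are just scheduled buffers, not engine starts.
    func prepareWhileForegrounded(format: AVAudioFormat) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, options: [.mixWithOthers])
        try session.setActive(true)

        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    func schedule(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime) {
        player.scheduleBuffer(buffer, at: time)
    }
}

Caveat: the system may still suspend an app whose session renders only silence for extended periods, so treat this as a sketch rather than a guaranteed recipe.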
Let's consider the following code.
I've created an actor that loads a list of .mp3 files from a Bundle and then makes it available for audio reproduction.
Unfortunately, I'm experiencing a memory leak at the play method, specifically at player.play().
From Instruments I get
_malloc_type_malloc_outlined libsystem_malloc.dylib
start_wqthread libsystem_pthread.dylib
private actor AudioActor {
    enum Failure: Error {
        case soundsNotLoaded([AudioPlayerClient.Sound: Error])
    }

    enum Player {
        case music(AVAudioPlayer)
    }

    var players: [Sound: Player] = [:]
    let bundles: [Bundle]

    init(bundles: UncheckedSendable<[Bundle]>) {
        self.bundles = bundles.wrappedValue
    }

    func load(sounds: [Sound]) throws {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
        var errors: [Sound: Error] = [:]
        for sound in sounds {
            // Look the resource up in each provided bundle, using the first match.
            guard let url = bundles.lazy
                .compactMap({ $0.url(forResource: sound.name, withExtension: "mp3") })
                .first
            else { continue }
            do {
                self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
            } catch {
                errors[sound] = error
            }
        }
        guard errors.isEmpty
        else { throw Failure.soundsNotLoaded(errors) }
    }

    func play(sound: Sound, loops: Int?) throws {
        guard let player = self.players[sound]
        else { return }
        switch player {
        case let .music(player):
            player.numberOfLoops = loops ?? -1
            player.play()
        }
    }

    func stop(sound: Sound) throws {
        guard let player = self.players[sound]
        else { throw Failure.soundsNotLoaded([:]) }
        switch player {
        case let .music(player):
            player.stop()
        }
    }
}
According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0 the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume?
Thanks
I have sent in a feedback report (FB18222398) but I have no idea if anyone has looked at it. I know from past experiences that Apple devs do look at these forums.
This applies to each of the betas: 1, 2, and 3. With each beta I have created a new Personal Voice, in English. When it's done processing, I tap Preview and it says, in English, what is expected. But after some time, an hour or a day, the voice file changes languages and no longer works properly; if I press Preview, it is no longer intelligible. I have a text-to-speech app, and initially the created voice works, but once the language of the file changes it no longer does. I have run an app on my iPhone through Xcode that prints to the console the voices installed on the device, along with their languages. Currently this is the voice file:
Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
Language: es-MX
and on a second device the same personal voice is in a different language:
Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
Language: zh-CN
A previous personal voice file that was listed as Spanish (Mexico) played English text with a Spanish accent, and when playing Spanish text it sounded almost perfect. The current personal voice doesn't do that and is unintelligible. Previous attempts have converted to Chinese.
I hope someone can look into this.
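For reference, the console listing above comes from something like this (a minimal sketch):

import AVFoundation

// Print every installed voice with its identifier and language; Personal
// Voices appear here once access has been authorized.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print("Voice Identifier: \(voice.identifier)")
    print("Language: \(voice.language)")
}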
I have an AUv3 plugin which uses an FFT - which requires n samples before it can produce any output - so, depending on the relation between the host's buffer size and the FFT window size, it may receive a several buffers of samples, producing no output, and then dumping out what it has once a sufficient number of samples have been received.
This means that output is produced in fits and starts, in batches that match the FFT size (modulo oversampling) - e.g. if being fed buffers of 256 samples with an fft size of 1024, the output buffer sizes will be 0 for the first 3 buffers, and upon the fourth, the first 256 processed samples are returned and the remaining 768 cached; the next three buffers will return the remaining cached samples while processing and buffering subsequent ones, and so forth.
The internal mechanics of that I have solved, caching output if the current output buffer is too small, and so forth - so it all works as advertised, and the plugin reports its latency correctly. And when run as an app in demo-mode, playback works as expected.
In the plugin's render block, it captures the number of frames written, and if it is less than the number of frames passed in, adjusts the mDataByteSize of the output buffers to match the actual quantity of data being returned:
unsigned int framesWritten = (unsigned int) processHelper->processWithEvents(inAudioBufferList, outAudioBufferList, timestamp, frameCount, realtimeEventListHead);
if (framesWritten < frameCount) {
    for (UInt32 i = 0; i < outAudioBufferList->mNumberBuffers; ++i) {
        outAudioBufferList->mBuffers[i].mDataByteSize = framesWritten * 4; // assume 4-byte floats
    }
}
However, there are a couple of serious issues:
auval -v fails it with - Render Test at 64 frames, sample rate: 22050 Hz ERROR: Output Buffer Size does not match requested
When connected to Logic Pro, it appears that mDataByteSize is ignored and the entire allocated buffer is read - the audio has sections of silence snipped into it, corresponding to the empty buffers being returned.
If I set Logic's buffer size to 1024 and use a 1024-sample FFT window, the plugin works correctly - but of course a plugin cannot dictate the buffer size, and 1024 is too small a window to be useful for anything but filtering very high frequencies.
This seems like it has to be a solvable problem, and most likely the issue is in how my code reports the number of usable samples in the returned buffer.
So, what is the correct way for a plugin to report that it has no samples to return, but will, uh, real soon now?
I know I could convert this plugin to be one that does offline rendering of the entire input, but this is real-time processing, just with a fixed amount of latency, so that should not be necessary.
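For what it's worth, my current understanding is that a render call is expected to return exactly frameCount frames every time - zero-filled while the FFT is still accumulating - and to let the reported latency account for the delay, rather than shrinking mDataByteSize. A sketch of that shape in Swift (processedFrames is a placeholder for the FFT cache):

import AVFoundation

// Always produce frameCount frames: copy whatever the FFT cache has, zero the
// remainder, and leave mDataByteSize at the full size the host expects.
func render(outBuffers: UnsafeMutablePointer<AudioBufferList>, frameCount: AUAudioFrameCount) {
    let abl = UnsafeMutableAudioBufferListPointer(outBuffers)
    let produced = processedFrames(into: abl, capacity: Int(frameCount)) // placeholder
    for i in 0..<abl.count {
        guard let data = abl[i].mData else { continue }
        let floats = data.assumingMemoryBound(to: Float.self)
        for f in produced..<Int(frameCount) {
            floats[f] = 0 // silence for frames the FFT hasn't produced yet
        }
        abl[i].mDataByteSize = frameCount * UInt32(MemoryLayout<Float>.size) // never shrink
    }
}

// Placeholder: drain up to `capacity` cached frames into the buffers, return the count.
func processedFrames(into abl: UnsafeMutableAudioBufferListPointer, capacity: Int) -> Int { 0 }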
I've got a setup using AVAudioEngine with several tone generator nodes, each with a chain of processing nodes, the chains then mixed into the main output.
Generator ➡️ Effect ➡️ ... ➡️ .mainMixerNode ➡️ .outputNode
Generator ➡️ Effect ➡️ ... ⤴️
...
Generator ➡️ Effect ➡️ ... ⤴️
The user should be able to mute any chain individually. I've found several potential approaches to muting, but not terribly happy with any of them.
1. Adjust the amplitudes directly in my tone generators. Issue: consumes CPU even when completely muted; 4 generators add ~15% CPU even when all chains are muted.
2. Detach/attach chains as they are muted/unmuted. Issue: causes loud clicking/popping sounds whenever muting/unmuting.
3. Fade the mixer output volume while detaching/attaching a chain (just cutting the volume immediately to 0 doesn't get rid of the clicking/popping). Issue: causes all channels to fade during the transition, so not ideal.
The rest of these ideas are variations on making volume control + detach/attach work for individual chains, since approach #3 worked well.
4. Add an AVAudioMixerNode to the end of each chain (just for volume control; see the sketch at the end of this post). Issue: only the mixer on the final chain functions -- the others block all output. Not sure what's going on there.
5. Use a matrix mixer (for multi-input volume control), plus detach/attach to reduce CPU if necessary. Not yet attempted, due to perceived complexity and reports of fragility in the order of wiring. A bunch of effort before I even know if it's going to work.
6. Develop my own fader node to put at the end of each chain. Unlike the tone generator (a simple AVAudioSourceNode), developing an effect node seems complex and time-consuming, and might not even fix the CPU use.
I'm not completely averse to the learning curve of either 5 or 6, but would rather get some guidance on best approach before diving in. They both seem likely to take more effort than I'd like for the simple behavior I'm trying to achieve.
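For reference, here's a sketch of approach 4 as I understand it (all names are placeholders):

import AVFoundation

let engine = AVAudioEngine()

// One AVAudioMixerNode per chain, used purely as a fader into the main mixer.
func attachChain(generator: AVAudioNode, effect: AVAudioNode, format: AVAudioFormat) -> AVAudioMixerNode {
    let fader = AVAudioMixerNode()
    [generator, effect, fader].forEach(engine.attach)
    engine.connect(generator, to: effect, format: format)
    engine.connect(effect, to: fader, format: format)
    // Each connection to mainMixerNode takes the next free input bus.
    engine.connect(fader, to: engine.mainMixerNode, format: format)
    return fader
}

// Muting then touches only that chain:
// fader.outputVolume = 0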