I have a simple AVAudioEngine graph as follows:
AVAudioPlayerNode -> AVAudioUnitEQ -> AVAudioUnitTimePitch -> AVAudioUnitReverb -> Main mixer node of AVAudioEngine.
Whenever I have AVAudioUnitTimePitch or AVAudioUnitVarispeed in the graph, I hear a very distinct crackling/popping sound in my AirPods Pro 2 when starting up the engine and playing the AVAudioPlayerNode, and I can't find the reason why. When I remove the node, the crackling goes away completely. How do I fix this? I need the user to be able to control the pitch and rate of the audio during playback.
import AVKit
@Observable @MainActor
class AudioEngineManager {
nonisolated private let engine = AVAudioEngine()
private let playerNode = AVAudioPlayerNode()
private let reverb = AVAudioUnitReverb()
private let pitch = AVAudioUnitTimePitch()
private let eq = AVAudioUnitEQ(numberOfBands: 10)
private var audioFile: AVAudioFile?
private var fadePlayPauseTask: Task<Void, Error>?
private var playPauseCurrentFadeTime: Double = 0
init() {
setupAudioEngine()
}
private func setupAudioEngine() {
guard let url = Bundle.main.url(forResource: "Song name goes here", withExtension: "mp3") else {
print("Audio file not found")
return
}
do {
audioFile = try AVAudioFile(forReading: url)
} catch {
print("Failed to load audio file: \(error)")
return
}
reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 50
pitch.pitch = 0 // Pitch shift in cents; 0 leaves the pitch unchanged
engine.attach(playerNode)
engine.attach(pitch)
engine.attach(reverb)
engine.attach(eq)
// Connect: player -> eq -> pitch -> reverb -> main mixer
engine.connect(playerNode, to: eq, format: audioFile?.processingFormat)
engine.connect(eq, to: pitch, format: audioFile?.processingFormat)
engine.connect(pitch, to: reverb, format: audioFile?.processingFormat)
engine.connect(reverb, to: engine.mainMixerNode, format: audioFile?.processingFormat)
}
func prepare() {
guard let audioFile else { return }
playerNode.scheduleFile(audioFile, at: nil)
}
func play() {
DispatchQueue.global().async { [weak self] in
guard let self else { return }
engine.prepare()
try? engine.start()
DispatchQueue.main.async { [weak self] in
guard let self else { return }
playerNode.play()
fadePlayPauseTask?.cancel()
playPauseCurrentFadeTime = 0
fadePlayPauseTask = Task { [weak self] in
guard let self else { return }
while true {
let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: true)
// Ramp up volume until 1 is reached
if volume >= 1 { break }
engine.mainMixerNode.outputVolume = volume
try await Task.sleep(for: .milliseconds(10))
playPauseCurrentFadeTime += 0.01
}
engine.mainMixerNode.outputVolume = 1
}
}
}
}
func pause() {
fadePlayPauseTask?.cancel()
playPauseCurrentFadeTime = 0
fadePlayPauseTask = Task { [weak self] in
guard let self else { return }
while true {
let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: false)
// Ramp down volume until 0 is reached
if volume <= 0 { break }
engine.mainMixerNode.outputVolume = volume
try await Task.sleep(for: .milliseconds(10))
playPauseCurrentFadeTime += 0.01
}
engine.mainMixerNode.outputVolume = 0
playerNode.pause()
// Shut down engine once ramp down completes
DispatchQueue.global().async { [weak self] in
guard let self else { return }
engine.pause()
}
}
}
private func updateVolume(for x: Double, rising: Bool) -> Float {
if rising {
// Fade in
return Float(pow(x, 2) * (3.0 - 2.0 * (x)))
} else {
// Fade out
return Float(1 - (pow(x, 2) * (3.0 - 2.0 * (x))))
}
}
func setPitch(_ value: Float) {
pitch.pitch = value
}
func setReverbMix(_ value: Float) {
reverb.wetDryMix = value
}
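// (Sketch) Rate control goes through the same time-pitch node; AVAudioUnitTimePitch.rate
// is a playback-speed multiplier, where 1.0 leaves the speed unchanged.
func setRate(_ value: Float) {
    pitch.rate = value
}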
}
struct ContentView: View {
@State private var audioManager = AudioEngineManager()
@State private var pitch: Float = 0
@State private var reverb: Float = 0
var body: some View {
VStack(spacing: 20) {
Text("🎵 Audio Player with Reverb & Pitch")
.font(.title2)
HStack {
Button("Prepare") {
audioManager.prepare()
}
Button("Play") {
audioManager.play()
}
.padding()
.background(Color.green)
.foregroundColor(.white)
.cornerRadius(10)
Button("Pause") {
audioManager.pause()
}
.padding()
.background(Color.red)
.foregroundColor(.white)
.cornerRadius(10)
}
VStack {
Text("Pitch: \(Int(pitch)) cents")
Slider(value: $pitch, in: -2400...2400, step: 100) { _ in
audioManager.setPitch(pitch)
}
}
VStack {
Text("Reverb Mix: \(Int(reverb))%")
Slider(value: $reverb, in: 0...100, step: 1) { _ in
audioManager.setReverbMix(reverb)
}
}
}
.padding()
}
}
AVFoundation
Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.
Posts under the AVFoundation tag
Hello,
I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak.
Here is a simplified version of the code I'm using:
// This function is called when the user taps a button in the view controller:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
}
- (IBAction)myButtonAction:(id)sender {
NSLog(@"Test");
soundCreate();
}
@end
// media.m
#import <AVFoundation/AVFoundation.h>
static AVAudioEngine *audioEngine = nil;
void soundCreate(void)
{
if (audioEngine != nil)
return;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];
audioEngine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
[audioEngine attachNode:playerNode];
[audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
[audioEngine startAndReturnError:nil];
}
In the memory leak report, the following call stack is repeated, seemingly in a loop:
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
I am doing something similar to this post
Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixelBuffer using CVPixelBufferCreate with the following attributes:
kCVPixelBufferIOSurfacePropertiesKey as String: true,
kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true
When I copy the data from the vImagePixelBuffer "rotatedImageBuffer", I get the following error:
Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000)
I get the same error with memcpy and data.copyBytes (not running them at the same time obviously).
If I use CVPixelBufferCreateWithBytes, I do not get this error. However, CVPixelBufferCreateWithBytes does not let you include attributes (see linked post above).
I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.
// Copy to pixel buffer
let attributes: NSDictionary = [
kCVPixelBufferIOSurfacePropertiesKey as String: true,
kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, attributes, &colorBuffer)
//let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer) // does NOT produce error, but also does not have attributes
guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
print("Failed to create buffer")
return
}
let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
print("Failed to lock base address")
return
}
let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
//memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error
CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
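For context, when the source and destination strides differ (a vImage buffer and a CVPixelBuffer are padded independently), the copy is usually done row by row instead of as one block. A minimal sketch of that pattern, not presented as the fix for the crash above:
import Accelerate
import CoreVideo
import Foundation
// Copy a vImage_Buffer into an already-created, same-sized CVPixelBuffer,
// honoring each buffer's own bytes-per-row.
func copyRows(from source: vImage_Buffer, to pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    guard let dstBase = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let srcBytesPerRow = source.rowBytes
    let bytesPerRow = min(srcBytesPerRow, dstBytesPerRow)
    for row in 0..<Int(source.height) {
        memcpy(dstBase.advanced(by: row * dstBytesPerRow),
               source.data.advanced(by: row * srcBytesPerRow),
               bytesPerRow)
    }
}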
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest.
Occasionally (but not always), I receive the following runtime error:
Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size}
From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest.
Could someone clarify:
What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects?
Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request?
This information would be extremely helpful to reliably avoid this internal error.
Thank you!
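For reference, a minimal sketch of the detection-to-tracking handoff being described; the bounding-box value here is only a placeholder for the ball detection result:
import Vision
import CoreGraphics
import CoreVideo
let initialBox = CGRect(x: 0.48, y: 0.48, width: 0.03, height: 0.03) // normalized
let observation = VNDetectedObjectObservation(boundingBox: initialBox)
let trackingRequest = VNTrackObjectRequest(detectedObjectObservation: observation)
trackingRequest.trackingLevel = .accurate
let sequenceHandler = VNSequenceRequestHandler()
func track(frame pixelBuffer: CVPixelBuffer) {
    do {
        // This is the call that intermittently fails with Vision error code 9.
        try sequenceHandler.perform([trackingRequest], on: pixelBuffer)
    } catch {
        print("Tracking failed: \(error)")
    }
}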
Short summary
When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic-range optimizations, and they completely break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.
Details
Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera and one very important aspect of it is luminance measurements.
This is particularly relevant since the light sensor of the phone has no publicly accessible API, so the camera could, to some extent, make experiments available to Apple users that are otherwise only possible on Android devices.
Implementation
The app uses AVFoundation and explicitly picks individual cameras since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for the use in experiments. Therefore it always stays in custom exposure mode. The app uses YUV420 color space and the individual frames are analyzed in Metal using compute shaders.
However, the effects discussed here still occur if I remove all code to control the camera and replace it with a simple sequence of setting the exposure mode to custom, setting custom exposure values, setting a fixed white balance and then setting the exposure mode to locked as suggested on stackoverflow. This neither helps on an iPhone 14 Pro nor on an iPhone 8 despite a report on the developer forums that it would resolve the issue for older devices.
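For reference, the sequence referred to above is essentially the following (a minimal sketch; the duration, ISO, and white balance gains are placeholders):
import AVFoundation
func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // 1. Switch to custom exposure with an explicit shutter speed and ISO.
    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 100),
                                 iso: device.activeFormat.minISO,
                                 completionHandler: nil)
    // 2. Fix the white balance at its current gains.
    device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains,
                                     completionHandler: nil)
    // 3. Finally set the exposure mode to locked.
    device.exposureMode = .locked
}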
The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on github.
The videos below use the implementation with the suggestion from stackoverflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam to quote a reputable company) as well as the stock camera app after pressing and holding on the preview to enable AE/AF lock.
Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable when something changes at the edge of the frame, i.e. when I move a black piece of cardboard in from above:
https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area and as mentioned before it should not change because of something happening at the side of the frame (worst case it should get a bit darker because of the cardboard's shadow).
In my opinion, the iPhone changes its mind on the ideal contrast as soon as it has a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.
For completeness here is the same effect in the stock camera app with AE/AF lock enabled:
https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate postprocessing.
So...
Any suggestion on how to prevent this behavior would be highly appreciated.
Hello Community,
I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro.
From this depth map, I generate a point cloud using the intrinsic camera parameters.
I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud.
For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°.
My question is:
Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code?
To obtain the depth map, I’m using:
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
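For reference, the point-cloud generation mentioned above boils down to the standard pinhole unprojection (a minimal sketch; fx, fy, cx, cy come from the intrinsic matrix and are assumed to already be scaled to the depth map resolution):
import simd
func unproject(u: Float, v: Float, depth: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
    // Back-project pixel (u, v) with metric depth into camera space.
    let x = (u - cx) * depth / fx
    let y = (v - cy) * depth / fy
    return SIMD3<Float>(x, y, depth)
}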
Thanks in advance for your help!
Capturing more than one display is no longer working with macOS Sequoia.
We have a product that allows users to capture up to 2 displays/screens. Our application is using gstreamer which in turn is based on AVFoundation.
I found a quick way to replicate the issue by just running 2 captures from separate terminals. Assuming display 1 has device index 0, and display 2 has device index 1, here are the steps:
install gstreamer with
brew install gstreamer
Then open 2 terminal windows and launch the following processes:
terminal 1 (device-index:0):
gst-launch-1.0 avfvideosrc -e device-index=0 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
terminal 2 (device-index:1):
gst-launch-1.0 avfvideosrc -e device-index=1 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
The first process that is launched will show the screen, the second process launched will not.
Testing this on macOS Ventura and Sonoma works as expected, showing both screens.
I submitted the same issue on Feedback Assistant: FB15900976
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15? If so, are there any specific hardware or software requirements to enable this functionality?
I am trying to achieve an animated gradient effect that changes values over time based on the current seconds. I am also using AVPlayer and AVMutableVideoComposition along with custom instruction and class to generate the effect. I didn't want to load any video file, but rather generate a custom video with my own set of instructions. I used Metal Compute shaders to generate the effects and make the video to be 20 seconds.
However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video, I get this warning in the console: Visual isTranslatable: NO; reason: observation failure: noObservations
Here is the screenshot:
My entire code:
import AVFoundation
import Metal
class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
var sourcePixelBufferAttributes: [String: Any]? = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
private var renderContext: AVVideoCompositionRenderContext?
private var metalDevice: MTLDevice!
private var metalCommandQueue: MTLCommandQueue!
private var metalLibrary: MTLLibrary!
private var metalPipeline: MTLComputePipelineState!
override init() {
super.init()
setupMetal()
}
func setupMetal() {
guard let device = MTLCreateSystemDefaultDevice(),
let queue = device.makeCommandQueue(),
let library = try? device.makeDefaultLibrary(),
let function = library.makeFunction(name: "gradientShader") else {
fatalError("Metal setup failed")
}
self.metalDevice = device
self.metalCommandQueue = queue
self.metalLibrary = library
self.metalPipeline = try? device.makeComputePipelineState(function: function)
}
func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
renderContext = newRenderContext
}
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
return
}
var time = Float(request.compositionTime.seconds)
renderGradient(to: metalTexture, time: time)
request.finish(withComposedVideoFrame: outputPixelBuffer)
}
private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
var texture: MTLTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
pixelFormat: .bgra8Unorm,
width: width,
height: height,
mipmapped: false
)
textureDescriptor.usage = [.shaderWrite, .shaderRead]
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
if let textureCache = createTextureCache(), let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
texture = CVMetalTextureGetTexture(cvTexture)
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
return texture
}
private func renderGradient(to texture: MTLTexture, time: Float) {
guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
commandEncoder.setComputePipelineState(metalPipeline)
commandEncoder.setTexture(texture, index: 0)
var mutableTime = time
commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)
let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
let threadGroups = MTLSize(
width: (texture.width + 15) / 16,
height: (texture.height + 15) / 16,
depth: 1
)
commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
commandEncoder.endEncoding()
commandBuffer.commit()
}
private func createTextureCache() -> CVMetalTextureCache? {
var cache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
return cache
}
private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
var cvTexture: CVMetalTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
cache,
pixelBuffer,
nil,
.bgra8Unorm,
width,
height,
0,
&cvTexture
)
return cvTexture
}
}
class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
var timeRange: CMTimeRange
var enablePostProcessing: Bool = true
var containsTweening: Bool = true
var requiredSourceTrackIDs: [NSValue]? = nil
var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
init(timeRange: CMTimeRange) {
self.timeRange = timeRange
}
}
func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
let composition = AVMutableComposition()
let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
videoComposition.renderSize = size
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
videoComposition.instructions = [instruction]
return videoComposition
}
#include <metal_stdlib>
using namespace metal;
kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
constant float &time [[buffer(0)]],
uint2 id [[thread_position_in_grid]]) {
float2 uv = float2(id) / float2(output.get_width(), output.get_height());
// Animated colors based on time
float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);
// Linear interpolation for gradient
float3 gradientColor = mix(color1, color2, uv.y);
output.write(float4(gradientColor, 1.0), id);
}
The app registers a periodic time observer on the AVPlayer when playback starts, and it works fine. When switching to AirPlay during playback, the periodic time observation continues working as expected.
However, when switching back to local playback, the periodic time observer no longer fires until a seek is performed. The app removes the periodic time observer only when playback stops.
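For reference, the registration follows the usual pattern (a minimal sketch; the interval, queue, and token storage here are placeholders):
import AVFoundation
var timeObserverToken: Any?
func startObserving(_ player: AVPlayer) {
    let interval = CMTime(value: 1, timescale: 2) // every 0.5 s (placeholder)
    timeObserverToken = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { time in
        // This is the callback that stops firing after switching back from AirPlay.
        print("playback time:", time.seconds)
    }
}
func stopObserving(_ player: AVPlayer) {
    if let token = timeObserverToken {
        player.removeTimeObserver(token)
        timeObserverToken = nil
    }
}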
I can see that when switching back to local playback, the timeControlStatus successively changes
to .waitingToPlayAtSpecifiedRate (reason: .evaluatingBufferingRate)
then to .waitingToPlayAtSpecifiedRate (reason: .toMinimizeStalls)
and finally to .playing
But the time observation does not work anymore.
Also, the issue is systematic with Live and VOD streams providing a program date (with HLS property #EXT-X-PROGRAM-DATE-TIME), with or without any DRM, and is never reproduced with other VOD streams.
My current app implements a custom video player, based on a AVSampleBufferRenderSynchronizer synchronising two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded lpcm-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
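For reference, the setup described is essentially this (a minimal sketch; buffer decoding/enqueueing and firstVideoPTS are assumed to be handled elsewhere):
import AVFoundation
let synchronizer = AVSampleBufferRenderSynchronizer()
let videoLayer = AVSampleBufferDisplayLayer()      // receives decoded video CMSampleBuffers
let audioRenderer = AVSampleBufferAudioRenderer()  // receives decoded LPCM CMSampleBuffers
synchronizer.addRenderer(videoLayer)
synchronizer.addRenderer(audioRenderer)
// Started once the first image (in presentation order) has been enqueued;
// firstVideoPTS stands in for the presentation timestamp of that first decoded frame.
let firstVideoPTS = CMTime(value: 0, timescale: 600)
synchronizer.setRate(1.0, time: firstVideoPTS)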
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However on some network streams, on iOS, the audio and video aren't synchronized, with a time difference that seems to increase with time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read, about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
So, any help / hints on this sync problem will be greatly appreciated! :)
Hello! I have been following the UsingAVFoundationToPlayAndPersistHTTPLiveStreams sample code in order to test persisting streams to disk. In addition to support for m3u8, I have noticed in testing that this also seems to work for MP3 Audio, simply by changing the plist entries to point to remote URLs with audio/mpeg content. Is this expected, or are there caveats that I should be aware of?
Thanks you!
Environment: Device: iPad (10th generation); OS: iOS 18.3.2
I'm using AVAudioSession to record sound in my application. I recently realized that when the app starts a recording session on a tablet, the OS automatically sets the tablet volume to 50%, and after recording ends it doesn't change back to the previous volume level. So I would like to know whether this is default OS behavior or a bug.
If it's default behavior, I would much appreciate a link to the documentation.
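For reference, the session handling in question is along these lines (a minimal sketch; the exact category, mode, and options used in the app are assumptions):
import AVFoundation
func startRecordingSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // Assumed configuration; the reported 50% volume drop happens when recording starts.
        try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
    }
}
func endRecordingSession() {
    do {
        // After deactivation, the previous volume is reportedly not restored.
        try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
    } catch {
        print("Session teardown failed: \(error)")
    }
}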
We are currently working on a CarPlay navigation app and so far everything is working well except for speaking turn notifications.
Our TTS implementation works fine on the phone, and it works fine on CarPlay when the voice is played over the car's speakers. If users connect a Bluetooth headset to the car and listen through that headset, the voice prompts are chopped up and stutter.
Why would users use a Bluetooth headset? Well, we are working on a motorcycle app, and there are usually no speakers on a motorcycle.
It sounds like the Bluetooth channel is opened and closed repeatedly for every character/word spoken. This happens on different CarPlay devices and different Bluetooth headsets; we have reports from multiple users that they find this behavior annoying and that other apps work fine.
Is this a known issue? Are there possible workarounds?
I'm building a professional camera app where users can customize the video recording format and color grading. In the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method, I handle video frames and use Metal for real-time color grading. This works well when device.activeColorSpace is sRGB or P3, and the results are great. However, when the color space is HLG_BT2020 or appleLog, the MTKTextureLoader.newTexture(cgImage: cgImage, options: options) method throws an error. After researching, I found that in these color spaces the video frame has more than 8 bits per channel (bpc) once converted to a CGImage, which causes the texture creation to fail. I tried converting the CGImage to a lower bit depth so the texture could be created, but the final output image is garbled and not as expected. Is there a solution to this issue?
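For what it's worth, one way to avoid the CGImage round-trip is to wrap the frame's CVPixelBuffer in a Metal texture through a CVMetalTextureCache, the same pattern used in the compositor code in an earlier post. A minimal sketch, assuming a single-plane BGRA buffer; the 10-bit HLG/appleLog formats are typically biplanar YUV, so each plane would need its own texture with a matching pixel format, which is not shown here:
import CoreVideo
import Metal
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                              cache,
                                              pixelBuffer,
                                              nil,
                                              .bgra8Unorm,
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0,
                                              &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}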
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However it does not appear to be visible to my app. When I try, I get:
Value of type 'AVPlayerItem' has no member 'externalMetadata'
Documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18.
Code snippet:
import Foundation
import AVFoundation
/// ... in function ...
// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"
var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [ title ]
I have a memory leak when using AVAudioPlayer. I managed to narrow the issue down to a very simple app, whose code I paste in at the end.
The memory leak starts immediately when I start playing sound, but only in the Simulator. On a real iPhone there is no memory leak.
The memory leak on the Simulator looks like this:
import SwiftUI
import AVFoundation
struct ContentView_Audio: View {
var sound: AVAudioPlayer?
init() {
guard let path = Bundle.main.path(forResource: "cd201", ofType: "mp3") else { return }
let url = URL(fileURLWithPath: path)
do {
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.mixWithOthers])
} catch {
return
}
do {
try AVAudioSession.sharedInstance().setActive(true)
} catch {
return
}
do {
sound = try AVAudioPlayer(contentsOf: url)
} catch {
return
}
}
var body: some View {
HStack {
Button {
playSound()
} label: {
ZStack {
Circle()
.fill(.mint.opacity(0.3))
.frame(width: 44, height: 44)
.shadow(radius: 8)
Image(systemName: "play.fill")
.resizable()
.frame(width: 20, height: 20)
}
}
.padding()
Button {
stopSound()
} label: {
ZStack {
Circle()
.fill(.mint.opacity(0.3))
.frame(width: 44, height: 44)
.shadow(radius: 8)
Image(systemName: "stop.fill")
.resizable()
.frame(width: 20, height: 20)
}
}
.padding()
}
}
private func playSound() {
guard sound != nil else { return }
sound?.volume = 1
// sound?.numberOfLoops = -1
sound?.play()
}
func stopSound() {
sound?.stop()
}
}
I'm able to get text-to-speech output into an audio file using the following code on an iPhone 8 running iOS 12, to create a .caf file:
audioFile = try AVAudioFile(
forWriting: saveToURL,
settings: pcmBuffer.format.settings,
commonFormat: .pcmFormatInt16,
interleaved: false)
where pcmBuffer.format.settings is:
[AVAudioFileTypeKey: kAudioFileMP3Type,
AVSampleRateKey: 48000,
AVEncoderBitRateKey: 128000,
AVNumberOfChannelsKey: 2,
AVFormatIDKey: kAudioFormatLinearPCM]
However, this code does not work when I run the app on iOS 18 on an iPhone 13 Pro Max. The audio file is created, but it doesn't sound right. It has a lot of static, and the speech seems very low-pitched.
Can anyone give me a hint or an answer?
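For context, here is a minimal sketch of the text-to-speech-to-file path being described, assuming the PCM buffers come from AVSpeechSynthesizer's buffer callback; unlike the code above, this variant simply reuses the buffer's own format settings for the output file:
import AVFoundation
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, world")
let saveToURL = FileManager.default.temporaryDirectory.appendingPathComponent("speech.caf")
var outputFile: AVAudioFile?
synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else { return }
    do {
        if outputFile == nil {
            // Create the file lazily, once the synthesizer's output format is known.
            outputFile = try AVAudioFile(forWriting: saveToURL,
                                         settings: pcmBuffer.format.settings)
        }
        try outputFile?.write(from: pcmBuffer)
    } catch {
        print("Write failed: \(error)")
    }
}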
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) whenever the input and output audio devices are not the same, which prevents the application from functioning as expected.
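A minimal sketch of the configuration in question (the pass-through routing here is only illustrative):
import AVFoundation
let engine = AVAudioEngine()
do {
    // Enabling voice processing on the input node also enables it on the paired output node.
    try engine.inputNode.setVoiceProcessingEnabled(true)
    // Illustrative routing: microphone input straight to the main mixer.
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
    try engine.start()
} catch {
    print("Voice-processing setup failed: \(error)")
}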
Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers)
Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers)
When using paired input and output devices:
The setup works as expected.
Example: MacBook Pro microphone → MacBook Pro speakers.
When using mismatched devices:
AVAudioEngine setup fails during aggregate device construction.
Example: AirPods microphone → MacBook Pro speakers.
Error logs indicate a channel count mismatch.
Here are the partial logs. Due to the content limit, I cannot post the entire logs.
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Is it possible to use voice processing with different input/output devices?
If yes, are there any specific configurations required to handle mismatched devices?
How can we resolve channel count mismatch errors during aggregate device construction?
Are there settings or API adjustments to enforce compatibility between input/output devices?
Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices?
For instance, can we force an intermediate channel configuration or downmix input/output formats?
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, when the video recording file is dragged out of ScreenFloat, instantly bake the two individual audio tracks together into one and offer that new file as the drag-and-drop file, so that all audio is played in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
"Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession
Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession
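For reference, step 2 above is roughly this kind of composition plus passthrough export (a minimal sketch; URLs, error domains, and the pre-mixed audio from step 1 are assumed):
import AVFoundation
func muxVideo(videoURL: URL, mixedAudioURL: URL, outputURL: URL,
              completion: @escaping (Error?) -> Void) {
    let videoAsset = AVURLAsset(url: videoURL)
    let audioAsset = AVURLAsset(url: mixedAudioURL)
    let composition = AVMutableComposition()
    do {
        guard let videoTrack = videoAsset.tracks(withMediaType: .video).first,
              let audioTrack = audioAsset.tracks(withMediaType: .audio).first,
              let compVideo = composition.addMutableTrack(withMediaType: .video,
                                                          preferredTrackID: kCMPersistentTrackID_Invalid),
              let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                                          preferredTrackID: kCMPersistentTrackID_Invalid)
        else { return completion(NSError(domain: "mux", code: -1)) }
        let range = CMTimeRange(start: .zero, duration: videoAsset.duration)
        try compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
        try compAudio.insertTimeRange(range, of: audioTrack, at: .zero)
    } catch {
        return completion(error)
    }
    // Passthrough avoids re-encoding the video data, which keeps this step fast.
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough)
    else { return completion(NSError(domain: "mux", code: -2)) }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously { completion(export.error) }
}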
For a quick benchmark with a 3'40'' movie, step 1 takes ~1.7 seconds and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, which makes it take around 32 seconds, presumably because it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).
So, my question is, is there a faster way?
The best idea I can come up with right now is, when initially recording the screen with system- and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step one and just ripping the two single audio tracks out of the movie and only have the video and the "hidden" track (then unmuted), but I'd still have a ~1.5 second delay there. Also, there's the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias