AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

AVAudioRecorder loses audio recorded before interruption
Hi everyone, I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.

Problem: When the app is recording audio and an interruption occurs, I handle the interruption with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began). On .ended, I check for .shouldResume and call audioRecorder?.record() again. The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.

Repro:
- Start a recording with AVAudioRecorder
- Simulate a system interruption (e.g., incoming call)
- Resume recording after the interruption
- Stop and inspect the output audio file

Expected: The full audio (before and after the interruption) should be saved.
Actual: Only the audio after the interruption is saved; the earlier part is missing.

Notes: According to the documentation, calling .record() after .pause() should resume recording into the same file. I confirmed that the file URL does not change, and I do not recreate the recorder instance. No error is thrown by the system during this process. This behavior happens consistently when the app is interrupted and resumed.

Question: Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 21 · Activity: 2d
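A minimal sketch of one commonly suggested workaround for the case above: on .began, stop (rather than pause) the recorder so the file on disk is finalized, then start a new segment on .ended and merge segments afterwards if a single file is needed. The InterruptionAwareRecorder wrapper and startNewSegment() helper below are hypothetical names, not API, and this is a sketch rather than a confirmed fix.

    import AVFoundation

    final class InterruptionAwareRecorder {
        // Hypothetical wrapper that owns the AVAudioRecorder instance.
        var recorder: AVAudioRecorder?
        private var observer: NSObjectProtocol?

        init() {
            observer = NotificationCenter.default.addObserver(
                forName: AVAudioSession.interruptionNotification,
                object: AVAudioSession.sharedInstance(),
                queue: .main
            ) { [weak self] note in
                self?.handleInterruption(note)
            }
        }

        private func handleInterruption(_ note: Notification) {
            guard let info = note.userInfo,
                  let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

            switch type {
            case .began:
                // Stopping (instead of pausing) forces the recorder to finalize the
                // file header, so the audio captured so far is flushed to disk.
                recorder?.stop()
            case .ended:
                let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
                    // Resume into a new segment file; segments can be merged later
                    // (e.g. with AVMutableComposition) if a single file is required.
                    startNewSegment()
                }
            @unknown default:
                break
            }
        }

        private func startNewSegment() {
            // Hypothetical: create a fresh AVAudioRecorder pointing at a new URL.
        }
    }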
Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

    // This function is called when the user taps a button in the view controller:
    #import "ViewController.h"

    @interface ViewController ()
    @end

    @implementation ViewController

    - (void)viewDidLoad {
        [super viewDidLoad];
    }

    - (IBAction)myButtonAction:(id)sender {
        NSLog(@"Test");
        soundCreate();
    }

    @end

    // media.m
    static AVAudioEngine *audioEngine = nil;

    void soundCreate(void) {
        if (audioEngine != nil) return;

        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
        [[AVAudioSession sharedInstance] setActive:YES error:nil];

        audioEngine = [[AVAudioEngine alloc] init];
        AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
        [audioEngine attachNode:playerNode];
        [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
        [audioEngine startAndReturnError:nil];
    }

In the memory leak report, the following call stack is repeated, seemingly in a loop:

    ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
    ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
    AUListenerAddParameter AudioToolboxCore
    addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
    0x180178ddf
Replies: 0 · Boosts: 0 · Views: 24 · Activity: 2d
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixelBuffer using CVPixelBufferCreate with the following attributes: kCVPixelBufferIOSurfacePropertiesKey as String: true, kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true. When I copy the data from the vImagePixelBuffer "rotatedImageBuffer", I get the following error: Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000). I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error. However, CVPixelBufferCreateWithBytes does not let you include attributes (see linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.

    // Copy to pixel buffer
    let attributes: NSDictionary = [
        true : kCVPixelBufferIOSurfacePropertiesKey,
        true : kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey,
    ]
    var colorBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, attributes, &colorBuffer)
    //let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer) // does NOT produce error, but also does not have attributes
    guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
        print("Failed to create buffer")
        return
    }
    let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
    guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
        print("Failed to lock base address")
        return
    }
    let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
    let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
    data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
    //memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error
    CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
Replies: 1 · Boosts: 0 · Views: 34 · Activity: 3d
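Two things worth checking in the snippet above: the dictionary literal places the Boolean values in the key position (the keys should come first), kCVPixelBufferIOSurfacePropertiesKey expects a dictionary value, and CVPixelBuffer rows can be padded, so the destination's bytes-per-row may exceed the vImage buffer's rowBytes and a single full-length copy can run past the allocation. A minimal sketch under those assumptions, reusing the post's rotatedImageBuffer name:

    import Accelerate
    import CoreVideo
    import Foundation

    // Sketch: create the destination with correctly ordered attributes, then copy
    // row by row so differing row strides cannot overrun the allocation.
    func makeBGRAPixelBuffer(from rotatedImageBuffer: vImage_Buffer) -> CVPixelBuffer? {
        let attributes: [String: Any] = [
            kCVPixelBufferIOSurfacePropertiesKey as String: [:],   // dictionary value, not a Bool
            kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true
        ]
        var colorBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(rotatedImageBuffer.width),
                                         Int(rotatedImageBuffer.height),
                                         kCVPixelFormatType_32BGRA,
                                         attributes as CFDictionary,
                                         &colorBuffer)
        guard status == kCVReturnSuccess, let buffer = colorBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let dstBase = CVPixelBufferGetBaseAddress(buffer) else { return nil }
        let dstBytesPerRow = CVPixelBufferGetBytesPerRow(buffer)   // may be padded
        let srcBytesPerRow = rotatedImageBuffer.rowBytes
        let bytesToCopyPerRow = min(dstBytesPerRow, srcBytesPerRow)

        for row in 0..<Int(rotatedImageBuffer.height) {
            let src = rotatedImageBuffer.data.advanced(by: row * srcBytesPerRow)
            let dst = dstBase.advanced(by: row * dstBytesPerRow)
            memcpy(dst, src, bytesToCopyPerRow)
        }
        return buffer
    }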
Alert structure and playing sound
Hello, I'm working on a SwiftUI iOS app that shows a list of timers. When a timer is up, I pop up an Alert struct, and the user hits "OK" to dismiss it. I am trying to include an alarm sound using AVFoundation. I can get the sound to play if I change the code to play on a button click, so I believe I have the URL path correct. But I really want it to play when the alert pops up. I have not been able to find examples where this is done using an alert, so I suspect I need a custom view, but I thought I'd try the alert route first. Anyone try this before?

    @State var audioPlayer: AVAudioPlayer?

    .alert(isPresented: $showAlarmAlert) {
        playSound() // Calls AVFoundation
        return Alert(title: Text("Time's Up!"))
    }

    func playSound() {
        let alertSoundPath = Bundle.main.url(forResource: "classicAlarm", withExtension: "mp3")!
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: alertSoundPath)
            audioPlayer?.play()
        } catch {
            appData.logger.debug("Error playing sound: \(alertSoundPath)")
        }
    }
Replies: 1 · Boosts: 0 · Views: 24 · Activity: 3d
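A minimal sketch of one way to do this: trigger the sound when the alert flag flips to true, for example in onChange, instead of inside the .alert content builder (which is only meant to construct the Alert). The view name is hypothetical; showAlarmAlert and playSound() are taken from the post above, and the two-parameter onChange shown here requires iOS 17 (the older single-value form works the same way).

    import SwiftUI
    import AVFoundation

    struct TimerListView: View {
        @State private var showAlarmAlert = false
        @State private var audioPlayer: AVAudioPlayer?

        var body: some View {
            List { /* timer rows */ }
                .alert("Time's Up!", isPresented: $showAlarmAlert) {
                    Button("OK", role: .cancel) { audioPlayer?.stop() }
                }
                .onChange(of: showAlarmAlert) { _, isShowing in
                    // Play as soon as the alert is presented, stop when dismissed via OK.
                    if isShowing { playSound() }
                }
        }

        private func playSound() {
            guard let url = Bundle.main.url(forResource: "classicAlarm", withExtension: "mp3") else { return }
            audioPlayer = try? AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        }
    }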
Depth matrix accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°. My question is: Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code? To obtain the depth map, I’m using: AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) Thanks in advance for your help!
Replies: 0 · Boosts: 0 · Views: 16 · Activity: 6d
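For reference, a minimal sketch of the standard pinhole unprojection the post describes, assuming the intrinsics come from AVCameraCalibrationData.intrinsicMatrix. One common cause of skewed angles in the point cloud is using intrinsics expressed for the calibration's reference dimensions without rescaling them to the depth map's (much lower) resolution, so the sketch includes that rescale explicitly.

    import simd
    import CoreGraphics

    // Sketch: unproject one depth sample (meters, kCVPixelFormatType_DepthFloat32)
    // to a camera-space 3D point. `intrinsics` is AVCameraCalibrationData.intrinsicMatrix,
    // expressed for `referenceDimensions` (intrinsicMatrixReferenceDimensions).
    func unproject(u: Int, v: Int, depth: Float,
                   intrinsics: simd_float3x3,
                   referenceDimensions: CGSize,
                   depthMapWidth: Int, depthMapHeight: Int) -> SIMD3<Float> {
        // Rescale focal lengths and principal point to the depth map resolution.
        let scaleX = Float(depthMapWidth) / Float(referenceDimensions.width)
        let scaleY = Float(depthMapHeight) / Float(referenceDimensions.height)
        let fx = intrinsics[0][0] * scaleX
        let fy = intrinsics[1][1] * scaleY
        let cx = intrinsics[2][0] * scaleX
        let cy = intrinsics[2][1] * scaleY

        let x = (Float(u) - cx) * depth / fx
        let y = (Float(v) - cy) * depth / fy
        return SIMD3<Float>(x, y, depth)
    }

If the 90° corner still collapses to roughly 60° with correctly scaled intrinsics, the remaining error is more plausibly the LiDAR depth smoothing/accuracy at oblique surfaces than the sample code itself.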
Crackling/Popping sound when using AVAudioUnitTimePitch
I have a simple AVAudioEngine graph as follows: AVAudioPlayerNode -> AVAudioUnitEQ -> AVAudioUnitTimePitch -> AVAudioUnitReverb -> main mixer node of AVAudioEngine. Whenever I have AVAudioUnitTimePitch or AVAudioUnitVarispeed in the graph, I notice a very distinct crackling/popping sound in my AirPods Pro 2 when starting up the engine and playing the AVAudioPlayerNode, and I am unable to find the reason why this is happening. When I remove the node, the crackling completely goes away. How do I fix this problem, since I need the user to be able to control the pitch and rate of the audio during playback?

    import AVKit

    @Observable
    @MainActor
    class AudioEngineManager {
        nonisolated private let engine = AVAudioEngine()
        private let playerNode = AVAudioPlayerNode()
        private let reverb = AVAudioUnitReverb()
        private let pitch = AVAudioUnitTimePitch()
        private let eq = AVAudioUnitEQ(numberOfBands: 10)

        private var audioFile: AVAudioFile?
        private var fadePlayPauseTask: Task<Void, Error>?
        private var playPauseCurrentFadeTime: Double = 0

        init() {
            setupAudioEngine()
        }

        private func setupAudioEngine() {
            guard let url = Bundle.main.url(forResource: "Song name goes here", withExtension: "mp3") else {
                print("Audio file not found")
                return
            }

            do {
                audioFile = try AVAudioFile(forReading: url)
            } catch {
                print("Failed to load audio file: \(error)")
                return
            }

            reverb.loadFactoryPreset(.mediumHall)
            reverb.wetDryMix = 50

            pitch.pitch = 0 // Increase pitch by 500 cents (5 semitones)

            engine.attach(playerNode)
            engine.attach(pitch)
            engine.attach(reverb)
            engine.attach(eq)

            // Connect: player -> pitch -> reverb -> output
            engine.connect(playerNode, to: eq, format: audioFile?.processingFormat)
            engine.connect(eq, to: pitch, format: audioFile?.processingFormat)
            engine.connect(pitch, to: reverb, format: audioFile?.processingFormat)
            engine.connect(reverb, to: engine.mainMixerNode, format: audioFile?.processingFormat)
        }

        func prepare() {
            guard let audioFile else { return }
            playerNode.scheduleFile(audioFile, at: nil)
        }

        func play() {
            DispatchQueue.global().async { [weak self] in
                guard let self else { return }
                engine.prepare()
                try? engine.start()

                DispatchQueue.main.async { [weak self] in
                    guard let self else { return }
                    playerNode.play()

                    fadePlayPauseTask?.cancel()
                    playPauseCurrentFadeTime = 0
                    fadePlayPauseTask = Task { [weak self] in
                        guard let self else { return }
                        while true {
                            let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: true)
                            // Ramp up volume until 1 is reached
                            if volume >= 1 { break }
                            engine.mainMixerNode.outputVolume = volume
                            try await Task.sleep(for: .milliseconds(10))
                            playPauseCurrentFadeTime += 0.01
                        }
                        engine.mainMixerNode.outputVolume = 1
                    }
                }
            }
        }

        func pause() {
            fadePlayPauseTask?.cancel()
            playPauseCurrentFadeTime = 0
            fadePlayPauseTask = Task { [weak self] in
                guard let self else { return }
                while true {
                    let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: false)
                    // Ramp down volume until 0 is reached
                    if volume <= 0 { break }
                    engine.mainMixerNode.outputVolume = volume
                    try await Task.sleep(for: .milliseconds(10))
                    playPauseCurrentFadeTime += 0.01
                }
                engine.mainMixerNode.outputVolume = 0
                playerNode.pause()

                // Shut down engine once ramp down completes
                DispatchQueue.global().async { [weak self] in
                    guard let self else { return }
                    engine.pause()
                }
            }
        }

        private func updateVolume(for x: Double, rising: Bool) -> Float {
            if rising {
                // Fade in
                return Float(pow(x, 2) * (3.0 - 2.0 * (x)))
            } else {
                // Fade out
                return Float(1 - (pow(x, 2) * (3.0 - 2.0 * (x))))
            }
        }

        func setPitch(_ value: Float) {
            pitch.pitch = value
        }

        func setReverbMix(_ value: Float) {
            reverb.wetDryMix = value
        }
    }

    struct ContentView: View {
        @State private var audioManager = AudioEngineManager()
        @State private var pitch: Float = 0
        @State private var reverb: Float = 0

        var body: some View {
            VStack(spacing: 20) {
                Text("🎵 Audio Player with Reverb & Pitch")
                    .font(.title2)

                HStack {
                    Button("Prepare") {
                        audioManager.prepare()
                    }
                    Button("Play") {
                        audioManager.play()
                    }
                    .padding()
                    .background(Color.green)
                    .foregroundColor(.white)
                    .cornerRadius(10)
                    Button("Pause") {
                        audioManager.pause()
                    }
                    .padding()
                    .background(Color.red)
                    .foregroundColor(.white)
                    .cornerRadius(10)
                }

                VStack {
                    Text("Pitch: \(Int(pitch)) cents")
                    Slider(value: $pitch, in: -2400...2400, step: 100) { _ in
                        audioManager.setPitch(pitch)
                    }
                }

                VStack {
                    Text("Reverb Mix: \(Int(reverb))%")
                    Slider(value: $reverb, in: 0...100, step: 1) { _ in
                        audioManager.setReverbMix(reverb)
                    }
                }
            }
            .padding()
        }
    }
Replies: 1 · Boosts: 0 · Views: 42 · Activity: 1d
Image brightness adapts despite exposure lock
Short summary: When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and totally break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.

Details

Background: I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera, and one very important aspect of it is luminance measurement. This is particularly relevant since the light sensor of the phone has no publicly accessible API, and the camera could to some extent make experiments available to Apple users that are otherwise only possible on Android devices.

Implementation: The app uses AVFoundation and explicitly picks individual cameras, since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for use in experiments. Therefore it always stays in custom exposure mode. The app uses the YUV420 color space and the individual frames are analyzed in Metal using compute shaders. However, the effects discussed here still occur if I remove all code to control the camera and replace it with a simple sequence of setting the exposure mode to custom, setting custom exposure values, setting a fixed white balance and then setting the exposure mode to locked, as suggested on Stack Overflow. This neither helps on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it would resolve the issue for older devices. The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on GitHub. The videos below use the implementation with the suggestion from Stack Overflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam, to quote a reputable company) as well as the stock camera app after pressing and holding on the preview to enable AE/AF lock.

Demonstration: These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable if there are changes at the edge of the frame when I move a black piece of cardboard in from above: https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area, and as mentioned before it should not change because of something happening at the side of the frame (worst case, it should get a bit darker because of the cardboard's shadow). In my opinion, the iPhone changes its mind about the ideal contrast as soon as it has a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.
For completeness here is the same effect in the stock camera app with AE/AF lock enabled: https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate postprocessing. So... Any suggestion on how to prevent this behavior would be highly appreciated.
Replies: 1 · Boosts: 0 · Views: 35 · Activity: 5d
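One avenue worth trying for the behavior described above is AVCaptureDevice's global tone mapping switch, which (on formats that support it) asks for scene-independent tone mapping in addition to the locked physical exposure. Whether it fully removes the ramped brightness changes is not confirmed here; the values below are illustrative. A minimal sketch:

    import AVFoundation

    // Sketch: lock shutter speed, ISO, and white balance, then request
    // scene-independent (global) tone mapping where the active format supports it.
    func lockExposureAndToneMapping(on device: AVCaptureDevice) throws {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        // Fix the physical exposure parameters explicitly (example values).
        let duration = CMTime(value: 1, timescale: 120) // 1/120 s
        device.setExposureModeCustom(duration: duration, iso: 200, completionHandler: nil)

        // Fix white balance gains at their current values.
        device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains,
                                         completionHandler: nil)

        // Request tone mapping that does not adapt to the scene content.
        if device.activeFormat.isGlobalToneMappingSupported {
            device.isGlobalToneMappingEnabled = true
        }
    }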
AVPlayer periodic time observer stops notifying when switching back from AirPlay to local playback
The app registers a periodic time observer to the AVPlayer when the playback starts and it works fine. When switching to AirPlay during playback, the periodic time observation continues working as expected. However, when switching back to local playback, the periodic time observer does not fire anymore until a seek is performed. The app removes the periodic time observer only when the playback stops. I can see that when switching back to local playback, the timeControlStatus successively changes to .waitingToPlayAtSpecifiedRate (reason: .evaluatingBufferingRate) then to .waitingToPlayAtSpecifiedRate (reason: .toMinimizeStalls) and finally to .playing But the time observation does not work anymore. Also, the issue is systematic with Live and VOD streams providing a program date (with HLS property #EXT-X-PROGRAM-DATE-TIME), with or without any DRM, and is never reproduced with other VOD streams.
Replies: 2 · Boosts: 0 · Views: 30 · Activity: 2w
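A workaround sketch (not a confirmed fix) for the behavior above: tear down and re-register the periodic time observer whenever the audio route changes, so the observer created while on AirPlay is not the one expected to fire after returning to local playback. Class and property names are hypothetical.

    import AVFoundation

    final class PlaybackProgressObserver {
        private let player: AVPlayer
        private var timeObserverToken: Any?
        private var routeChangeObserver: NSObjectProtocol?
        private let onTick: (CMTime) -> Void

        init(player: AVPlayer, onTick: @escaping (CMTime) -> Void) {
            self.player = player
            self.onTick = onTick
            addObserver()
            routeChangeObserver = NotificationCenter.default.addObserver(
                forName: AVAudioSession.routeChangeNotification,
                object: nil, queue: .main
            ) { [weak self] _ in
                // Recreate the observer after any route change (AirPlay <-> local).
                self?.removeObserver()
                self?.addObserver()
            }
        }

        private func addObserver() {
            let interval = CMTime(value: 1, timescale: 2) // every 0.5 s
            timeObserverToken = player.addPeriodicTimeObserver(forInterval: interval,
                                                               queue: .main) { [weak self] time in
                self?.onTick(time)
            }
        }

        private func removeObserver() {
            if let token = timeObserverToken {
                player.removeTimeObserver(token)
                timeObserverToken = nil
            }
        }

        deinit { removeObserver() }
    }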
Vision Framework VNTrackObjectRequest: Minimum Valid Bounding Box Size Causing Internal Error (Code=9)
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest. Occasionally (but not always), I receive the following runtime error: Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size} From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest. Could someone clarify: What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects? Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request? This information would be extremely helpful to reliably avoid this internal error. Thank you!
Replies: 1 · Boosts: 0 · Views: 35 · Activity: 5d
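The minimum box size is not documented, so the sketch below simply guards against very small normalized boxes before creating the tracking request. The 1% threshold is an assumption for illustration, not an official Vision value; in practice it can be tuned empirically against the Code=9 error.

    import Vision

    // Assumed minimum normalized side length; not a documented Vision requirement.
    let minimumNormalizedSide: CGFloat = 0.01

    func makeTrackingRequest(for observation: VNDetectedObjectObservation) -> VNTrackObjectRequest? {
        let box = observation.boundingBox // normalized (0...1) coordinates
        guard box.width >= minimumNormalizedSide,
              box.height >= minimumNormalizedSide else {
            // Too small to track reliably; wait for a larger detection instead of
            // risking "unexpected tracked object bounding box size" (Code=9).
            return nil
        }
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)
        request.trackingLevel = .accurate
        return request
    }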
Clarification of use of `AVAssetDownloadConfiguration` and `AVAssetDownloadTask` to persist MP3 Audio downloads/streams
Hello! I have been following the UsingAVFoundationToPlayAndPersistHTTPLiveStreams sample code in order to test persisting streams to disk. In addition to support for m3u8, I have noticed in testing that this also seems to work for MP3 audio, simply by changing the plist entries to point to remote URLs with audio/mpeg content. Is this expected, or are there caveats that I should be aware of? Thank you!
Replies: 0 · Boosts: 0 · Views: 22 · Activity: 2w
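For context, a minimal sketch of the download API the post refers to (AVAssetDownloadConfiguration with AVAssetDownloadURLSession, iOS 15+). Whether non-HLS audio/mpeg URLs are officially supported by this path is exactly the open question above; the URL and identifiers below are placeholders.

    import AVFoundation

    final class Downloader: NSObject, AVAssetDownloadDelegate {
        private lazy var session: AVAssetDownloadURLSession = {
            let config = URLSessionConfiguration.background(withIdentifier: "com.example.downloads")
            return AVAssetDownloadURLSession(configuration: config,
                                             assetDownloadDelegate: self,
                                             delegateQueue: .main)
        }()

        func startDownload() {
            let asset = AVURLAsset(url: URL(string: "https://example.com/stream.m3u8")!)
            let configuration = AVAssetDownloadConfiguration(asset: asset, title: "My Download")
            let task = session.makeAssetDownloadTask(downloadConfiguration: configuration)
            task.resume()
        }

        // AVAssetDownloadDelegate: store the persisted location for later offline playback.
        func urlSession(_ session: URLSession,
                        assetDownloadTask: AVAssetDownloadTask,
                        didFinishDownloadingTo location: URL) {
            print("Downloaded to \(location)")
        }
    }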
AVAudioSession automatically sets the tablet audio volume to 50% when recording audio.
Environment: Device: iPad (10th generation); OS: iOS 18.3.2. I'm using AVAudioSession to record sound in my application. But I recently came to realize that when the app starts a recording session on a tablet, the OS automatically sets the tablet volume to 50%, and after recording ends it doesn't change back to the previous volume level from before the recording started. So I would like to know whether this is default OS behavior or a bug. If it's default behavior, I would much appreciate a link to the documentation.
Replies: 0 · Boosts: 0 · Views: 29 · Activity: 2w
AVSpeechUtterance stutters in CarPlay when connected to a BT headset
We are currently working on a CarPlay navigation app and so far everything is working well except for spoken turn notifications. Our TTS implementation works fine on the phone and works fine on CarPlay if the voice is played over the speaker in the car. If users connect a BT headset to the car and listen through that headset, then the voice commands are chopped up / stutter. Why would users use a BT headset? Well, we are working on a motorcycle app, and there are usually no speakers on a motorcycle. It sounds like the BT channel is opened and closed repeatedly for every character / word spoken. This happens on different CarPlay devices and different Bluetooth headsets; we have reports from multiple users that they find this behavior annoying and that other apps work fine. Is this a known issue? Are there possible workarounds?
Replies: 0 · Boosts: 0 · Views: 21 · Activity: 2w
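One configuration sometimes suggested for prompt audio over Bluetooth is a .playback session in .voicePrompt mode (which routes over A2DP rather than a low-bandwidth hands-free link) with a single long-lived synthesizer, so the route is not renegotiated for each utterance. This is a sketch to experiment with, not a confirmed fix for the stutter described above.

    import AVFoundation

    // Sketch: configure the session once for spoken prompts.
    func configurePromptAudioSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback,
                                mode: .voicePrompt,
                                options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
        try session.setActive(true)
    }

    // Keep one synthesizer alive for the app's lifetime instead of creating one
    // per instruction.
    let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }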
AirPlay v1 is broken in iOS 18.4?
After upgrading to iOS 18.4, I'm no longer able to establish an AirPlay v1 connection to an audio system. The symptom is that the AirPlay route picker just spins when trying to connect to an audio system, and it eventually gives up. I tested this on an iPhone 14, connecting to a HomePod, an AirPort Express, an Apple TV and a WiiM Pro. If I try connecting with AirPlay v2, e.g. using Apple Music, the connection succeeds and audio can be played. I'm the developer of an app that plays audio over AirPlay while also recording. My app has to use AirPlay v1 because AVAudioSession doesn't allow the policy .longFormAudio when the category is .playAndRecord. This issue is a real pain as it means my app is suddenly broken for many thousands of users. Is anyone else seeing this issue? Any suggestions for a workaround?
Replies: 2 · Boosts: 0 · Views: 130 · Activity: 2w
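For readers unfamiliar with the constraint described above, a minimal sketch: requesting the .longFormAudio route-sharing policy (which is what routes over AirPlay 2) together with .playAndRecord is rejected at runtime, so a record-while-playing app falls back to a plain category configuration and ends up on the legacy AirPlay path. Option choices in the fallback are illustrative.

    import AVFoundation

    func configureSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            // Rejected: only playback-style categories accept .longFormAudio,
            // as described in the post above.
            try session.setCategory(.playAndRecord, mode: .default,
                                    policy: .longFormAudio, options: [])
        } catch {
            print("longFormAudio rejected for playAndRecord: \(error)")
            // Fallback without the AirPlay 2 policy (example options).
            try? session.setCategory(.playAndRecord, mode: .default,
                                     options: [.allowAirPlay, .defaultToSpeaker])
        }
    }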
AVPlayerItem.externalMetadata not available
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However, it does not appear to be visible to my app. When I try, I get: Value of type 'AVPlayerItem' has no member 'externalMetadata'. The documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18. Code snippet:

    import Foundation
    import AVFoundation

    /// ... in function ...

    // create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
    var title = AVMutableMetadataItem()
    title.identifier = .commonIdentifierAlbumName
    title.value = "My Title" as NSString?
    title.extendedLanguageTag = "und"

    var playerItem = await AVPlayerItem(asset: composition)
    playerItem.externalMetadata = [ title ]
Replies: 0 · Boosts: 0 · Views: 21 · Activity: 3w
Processing AVCaptureVideoDataOutput video stream with appleLog and HLG_BT2020 AVCaptureColorSpace input
I’m building a professional camera app where users can customize the video recording format and color grading. In the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method, I handle video frames and use Metal for real-time color grading. This works well when device.activeColorSpace is sRGB or P3, and the results are great. However, when the color space is HLG_BT2020 or appleLog, the MTKTextureLoader.newTexture(cgImage: cgImage, options: options) method throws an error. After researching, I found that the video frame in these color spaces has a bit-per-channel (bpc) greater than 8 after being converted to CGImage, causing the texture creation to fail. I tried converting the CGImage to a lower bpc to successfully create the texture, but the final output image is garbled and not as expected. Is there a solution to this issue?
Replies: 1 · Boosts: 0 · Views: 31 · Activity: 2w
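Rather than round-tripping through CGImage (where high-bit-depth frames run into the 8-bit-per-channel limitation described above), frames can usually be wrapped as Metal textures directly from the CVPixelBuffer with a CVMetalTextureCache. A sketch follows; for planar HLG / Apple Log formats one texture per plane is needed, and the Metal pixel format chosen per plane (e.g. .r16Unorm / .rg16Unorm for 10-bit luma/chroma planes) is an assumption to adapt to the actual activeFormat.

    import AVFoundation
    import CoreVideo
    import Metal

    final class FrameTextureProvider {
        private let device: MTLDevice
        private var textureCache: CVMetalTextureCache?

        init?(device: MTLDevice) {
            self.device = device
            guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
                return nil
            }
        }

        // Wrap one plane of a captured frame as an MTLTexture without CGImage.
        func texture(from sampleBuffer: CMSampleBuffer,
                     pixelFormat: MTLPixelFormat,
                     planeIndex: Int = 0) -> MTLTexture? {
            guard let cache = textureCache,
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

            let planar = CVPixelBufferIsPlanar(pixelBuffer)
            let width = planar ? CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
                               : CVPixelBufferGetWidth(pixelBuffer)
            let height = planar ? CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)
                                : CVPixelBufferGetHeight(pixelBuffer)

            var cvTexture: CVMetalTexture?
            let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
                                                                   pixelBuffer, nil, pixelFormat,
                                                                   width, height, planeIndex,
                                                                   &cvTexture)
            guard status == kCVReturnSuccess, let cvTexture else { return nil }
            return CVMetalTextureGetTexture(cvTexture)
        }
    }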
Memory Leak in AVAudioPlayer in Simulator only
I have a memory leak when using AVAudioPlayer. I managed to narrow down the issue to a very simple app, whose code I paste in at the end. The memory leak starts immediately when I start playing sound, but only in the Simulator. On a real iPhone there is no memory leak. The memory leak on the Simulator looks like this:

    import SwiftUI
    import AVFoundation

    struct ContentView_Audio: View {
        var sound: AVAudioPlayer?

        init() {
            guard let path = Bundle.main.path(forResource: "cd201", ofType: "mp3") else { return }
            let url = URL(fileURLWithPath: path)
            do {
                try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.mixWithOthers])
            } catch {
                return
            }
            do {
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                return
            }
            do {
                sound = try AVAudioPlayer(contentsOf: url)
            } catch {
                return
            }
        }

        var body: some View {
            HStack {
                Button {
                    playSound()
                } label: {
                    ZStack {
                        Circle()
                            .fill(.mint.opacity(0.3))
                            .frame(width: 44, height: 44)
                            .shadow(radius: 8)
                        Image(systemName: "play.fill")
                            .resizable()
                            .frame(width: 20, height: 20)
                    }
                }
                .padding()

                Button {
                    stopSound()
                } label: {
                    ZStack {
                        Circle()
                            .fill(.mint.opacity(0.3))
                            .frame(width: 44, height: 44)
                            .shadow(radius: 8)
                        Image(systemName: "stop.fill")
                            .resizable()
                            .frame(width: 20, height: 20)
                    }
                }
                .padding()
            }
        }

        private func playSound() {
            guard sound != nil else { return }
            sound?.volume = 1
            // sound?.numberOfLoops = -1
            sound?.play()
        }

        func stopSound() {
            sound?.stop()
        }
    }
Replies: 2 · Boosts: 0 · Views: 38 · Activity: 3w
save audio file in iOS 18 instead of iOS 12
I'm able to get text to speech into an audio file using the following code on an iOS 12 iPhone 8 to create a car file:

    audioFile = try AVAudioFile(
        forWriting: saveToURL,
        settings: pcmBuffer.format.settings,
        commonFormat: .pcmFormatInt16,
        interleaved: false)

where pcmBuffer.format.settings is:

    [AVAudioFileTypeKey: kAudioFileMP3Type,
     AVSampleRateKey: 48000,
     AVEncoderBitRateKey: 128000,
     AVNumberOfChannelsKey: 2,
     AVFormatIDKey: kAudioFormatLinearPCM]

However, this code does not work when I run the app on iOS 18 on an iPhone 13 Pro Max. The audio file is created, but it doesn't sound right. It has a lot of static and it seems the speech is very low pitch. Can anyone give me a hint or an answer?
Replies: 2 · Boosts: 0 · Views: 28 · Activity: 3w
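A common pattern for persisting synthesized speech is to create the AVAudioFile from the first buffer's own settings and common format, rather than forcing a different common format or file type, since a mismatch between the buffer format and the file's processing format can produce exactly the static/pitch artifacts described above. A minimal sketch under that assumption (class name hypothetical):

    import AVFoundation

    final class SpeechFileWriter {
        private let synthesizer = AVSpeechSynthesizer()
        private var audioFile: AVAudioFile?

        // Write synthesizer buffers to a file whose processing format matches
        // the buffers being written.
        func write(_ text: String, to url: URL) {
            let utterance = AVSpeechUtterance(string: text)
            synthesizer.write(utterance) { [weak self] buffer in
                guard let self, let pcmBuffer = buffer as? AVAudioPCMBuffer,
                      pcmBuffer.frameLength > 0 else { return }
                do {
                    if self.audioFile == nil {
                        // Adopt the synthesizer's native format instead of forcing one.
                        self.audioFile = try AVAudioFile(forWriting: url,
                                                         settings: pcmBuffer.format.settings,
                                                         commonFormat: pcmBuffer.format.commonFormat,
                                                         interleaved: pcmBuffer.format.isInterleaved)
                    }
                    try self.audioFile?.write(from: pcmBuffer)
                } catch {
                    print("Write failed: \(error)")
                }
            }
        }
    }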
Play Audio for a Metronome
Hi, I am looking for a good way to play sounds at a high frequency. At the moment I am using AVAudioEngine: I create a couple of AVAudioPlayerNodes, and for each sound I need to play I create an AVAudioPCMBuffer. When the app needs to play a sound, I get the correct AVAudioPCMBuffer for the sound, take the first available AVAudioPlayerNode, and feed the buffer to it. The timing for a metronome app has to be very precise, because if it's off by about 16 ms the user can hear that it is not playing at the right interval. For low speeds this is working without any problems, but at high speeds it is getting worse. Maybe someone has an idea on how I can improve my method. It's a plugin for Flutter.

    import AVFoundation

    class FastSoundPlayer {
        private var audioPlayers: [SoundPlayer?] = []
        private var sounds: [String: Sound] = [:]
        private var engine = AVAudioEngine()
        let session = AVAudioSession.sharedInstance()

        init() {
            do {
                try session.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: [AVAudioSession.CategoryOptions.mixWithOthers])
                try session.setActive(true)
                createSoundPlayers(count: 20)
                try engine.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        // Selector method to handle applicationDidBecomeActiveNotification
        func applicationDidBecomeActive() {
            // Reinitialize AVAudioEngine and reattach all nodes
            do {
                engine.reset()
                objc_sync_enter(audioPlayers)
                audioPlayers.removeAll()
                createSoundPlayers(count: 20)
                objc_sync_exit(audioPlayers)
                try engine.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        func createSoundPlayers(count: Int) {
            for _ in 0..<count {
                let player = SoundPlayer()
                engine.attach(player.player)
                engine.connect(player.player, to: engine.mainMixerNode, format: nil)
                audioPlayers.append(player)
            }
        }

        func load(sound: Data, name: String) {
            let sound = Sound(soundData: sound)
            sounds[name] = sound
        }

        func play(name: String) {
            if !engine.isRunning {
                applicationDidBecomeActive()
            }
            guard let sound = sounds[name] else {
                print("Sound not found")
                return
            }
            if let player = getAvailablePlayer() {
                player.play(sound: sound)
            }
        }

        func getAvailablePlayer() -> SoundPlayer? {
            for player in audioPlayers {
                if !player!.isPlaying {
                    return player
                }
            }
            return nil
        }
    }

    class SoundPlayer {
        let player = AVAudioPlayerNode()
        var isPlaying = false

        init() {
            player.volume = 1.0
        }

        func play(sound: Sound) {
            player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in
                self.complete()
            }
            if (player.engine != nil && player.engine!.isRunning) {
                player.play()
                isPlaying = true
            }
        }

        func complete() {
            isPlaying = false
        }
    }

    class Sound {
        var sound: AVAudioPCMBuffer?

        init(soundData: Data) {
            do {
                let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav")
                try soundData.write(to: temporaryURL)

                // Create AVAudioFile from the temporary file URL
                let audioFile = try AVAudioFile(forReading: temporaryURL)

                // Define the format for the PCM buffer (44100Hz, stereo)
                let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: 2, interleaved: false)

                // Create AVAudioPCMBuffer
                guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: AVAudioFrameCount(audioFile.length)) else {
                    // Failed to create PCM buffer
                    self.sound = nil
                    return
                }

                // Read audio file into PCM buffer
                try audioFile.read(into: pcmBuffer)

                // Assign the created AVAudioPCMBuffer to the sound property
                self.sound = pcmBuffer
            } catch {
                print("Error loading sound file: \(error.localizedDescription)")
                self.sound = nil
            }
        }
    }

Thanks!
Replies: 1 · Boosts: 0 · Views: 49 · Activity: Mar ’25
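For tight metronome timing, the usual approach is to schedule each click ahead of time at a sample-accurate AVAudioTime instead of triggering playback when a timer fires. A minimal sketch, assuming a click AVAudioPCMBuffer is already loaded (the class name is hypothetical):

    import AVFoundation

    final class MetronomeScheduler {
        private let engine = AVAudioEngine()
        private let player = AVAudioPlayerNode()
        private let clickBuffer: AVAudioPCMBuffer
        private var nextBeatSampleTime: AVAudioFramePosition = 0

        init(clickBuffer: AVAudioPCMBuffer) throws {
            self.clickBuffer = clickBuffer
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: clickBuffer.format)
            try engine.start()
            player.play()
        }

        /// Schedules the next `count` beats on the player node's own sample timeline.
        func scheduleBeats(bpm: Double, count: Int) {
            let sampleRate = clickBuffer.format.sampleRate
            let framesPerBeat = AVAudioFramePosition(60.0 / bpm * sampleRate)

            for _ in 0..<count {
                // Sample-time-only AVAudioTime values are interpreted on the player's
                // timeline, so each click starts exactly on the computed frame.
                let when = AVAudioTime(sampleTime: nextBeatSampleTime, atRate: sampleRate)
                player.scheduleBuffer(clickBuffer, at: when, options: [])
                nextBeatSampleTime += framesPerBeat
            }
        }
    }

With this approach the render thread, not the main thread, decides exactly when each buffer starts, which removes the roughly 16 ms jitter inherent in scheduling clicks on demand.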
Playback Issues for DRM content when sending CMCD
Since iOS and tvOS 18, CMCD can now be automatically sent by AVPlayer (https://developer.apple.com/streaming/Whats-new-HLS.pdf). However, after enabling CMCD, our streams occasionally fail with the following error: CoreMediaErrorDomain Error -17383 This issue appears to affect only DRM-protected (FairPlay) streams so far. We activate CMCD via the resource loader of an AVURLAsset, before assigning the item to an AVPlayer. Unfortunately, we haven’t found a reliable way to reproduce the issue, and we’ve been unable to gather any useful diagnostic information. Has anyone else observed this behavior when enabling CMCD on FairPlay streams?
Replies: 2 · Boosts: 0 · Views: 57 · Activity: 4w