AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under the AVFoundation tag

200 Posts

How can I create my own Genlock hardware for the iPhone 17 Pro?
What options do I have if I don't want to use Blackmagic's Camera ProDock as the external sync hardware, but instead want to create my own USB-C hardware accessory that shows up as an AVExternalSyncDevice on the iPhone 17 Pro? Which protocol does my USB-C device have to implement to appear as an eligible clock device in AVExternalSyncDevice.DiscoverySession?
Replies: 0 · Boosts: 0 · Views: 299 · Activity: 1d
The behavior of AVPlayerItem.didPlayToEndTimeNotification is not as expected in iOS 26.
Hello,

Environment: macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 beta 9

In a simple AVFoundation video-playback sample, I'm seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification. I've attached a minimal sample below; please replace videoURL with a valid short video URL.

Repro steps:
1. Tap "Play" to start playback and let the video finish. The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see "Play finished." in the console.
2. Without relaunching, tap "Play" again. This is where the issue arises.

Observed behavior:
- On iOS 18 and earlier: the video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and "Play finished." appears in the console. The same happens every time you press "Play".
- On iOS 26: pressing "Play" does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints "Play finished." is never called (the callback enclosing that line is not invoked again).

Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same behavior, which suggests a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn't find any mention of this in the release notes or API reference.

Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we're forced to adjust our logic. If there is a way to achieve the iOS 18-style behavior on iOS 26, I would appreciate guidance. Alternatively, if this change is intentional, could you share the reasoning? Is the iOS 26 behavior correct from Apple's perspective, and is the iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.

import UIKit
import AVFoundation

final class ViewController: UIViewController {
    private let videoURL = URL(string: "https://......mp4")!
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }

    // MARK: - Setup

    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)
        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Action
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }

    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem -> AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }

    // MARK: - Actions

    @objc private func didTapPlay() {
        player?.play()
    }

    @objc private func didTapStop() {
        player?.pause()
    }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}

I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
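Not an official answer, just a hedged workaround sketch based on the sample above: if the goal is simply to get a completion signal on every "Play" tap after the item has reached its end, rewinding before replaying re-arms AVPlayerItem.didPlayToEndTimeNotification for the next run-through on both OS versions. The names below come from the sample; the duration check assumes a file-based (non-indefinite) item.

@objc private func didTapPlay() {
    guard let player, let item = player.currentItem else { return }
    // If the item has already played to its end, rewind first so the
    // did-play-to-end notification can fire again after the next playback.
    // (item.duration can be .indefinite for live streams; this sketch assumes a file.)
    if item.currentTime() >= item.duration {
        player.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
            player.play()
        }
    } else {
        player.play()
    }
}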
Replies: 1 · Boosts: 1 · Views: 321 · Activity: 2d
Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by creating my own dispatch_queue and configuring the AVCaptureVideoDataOutput with it.

My question: if I reconfigure my AVCaptureSession, or stop it altogether, is that guaranteed to flush all pending work on my background queue? For example, does [AVCaptureSession stopRunning] block until all pending frame callbacks are done?

Below is a more practical example showing how the background thread reads something owned by the foreground thread; I wonder when/how it's safe to clean up that resource. I have a setup similar to the following:

// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...

// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;

MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];
// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];

AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
[captureSession addOutput:captureVideoDataOutput];

[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need a manual barrier?

And then in MySampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
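The documentation doesn't promise that stopRunning drains the delegate queue, so a common defensive pattern, shown here as a hedged Swift sketch with assumed names (cameraQueue is the serial queue passed to setSampleBufferDelegate), is to push a synchronous no-op onto that same queue after stopping and only release shared state afterwards. This is a pattern, not an API guarantee.

// Stop the session first; this halts new frame delivery.
captureSession.stopRunning()

// Then enqueue a synchronous marker on the delegate's serial queue.
// Once this block runs, no earlier didOutput callback can still be executing.
cameraQueue.sync {
    // Safe point: previously enqueued frame callbacks have finished.
}
// Now it should be safe to destroy frameMetaData or other shared state.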
Replies: 1 · Boosts: 0 · Views: 345 · Activity: 1d
Why does AVAudioRecorder show 8 kHz when iPhone hardware is 48 kHz?
Hi everyone, I’m testing audio recording on an iPhone 15 Plus using AVFoundation. Here’s a simplified version of my setup:

let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 8000,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false
]
audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings)
audioRecorder?.record()

When I check the recorded file’s sample rate, it logs:

Actual sample rate: 8000.0

However, when I inspect the hardware sample rate:

try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)
print("Hardware sample rate:", session.sampleRate)

I consistently get:

Hardware sample rate: 48000.0

My questions are:
1. Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally?
2. Is there any way to force the hardware to record natively at 8 kHz?
3. If not, what’s the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices?

Thanks in advance for your guidance!
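A hedged sketch of one thing to check, using only documented AVAudioSession API: a sample rate can be requested with setPreferredSampleRate, but the system treats it as a preference, and on recent iPhones the microphone path generally stays at 48 kHz while AVAudioRecorder resamples to the 8 kHz asked for in the settings. This only inspects what the hardware grants; it is not a confirmed answer to the questions above.

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setPreferredSampleRate(8_000)   // a request, not a guarantee
    try session.setActive(true)
} catch {
    print("Audio session configuration failed:", error)
}

// Compare what was asked for with what the hardware actually granted.
print("Preferred sample rate:", session.preferredSampleRate)  // 8000.0
print("Granted sample rate:", session.sampleRate)             // typically still 48000.0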
Replies: 1 · Boosts: 0 · Views: 158 · Activity: 1w
Unable to play .aivu with VideoPlayerComponent
I’m trying to play an Apple Immersive video in the .aivu format with VideoPlayerComponent, following the official documentation here: https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent

Here is a simplified version of the code I'm running in another application:

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}

Full code is here: https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample

But the video does not play in my project even though the file is correct (it can be played in Apple Immersive Video Utility), and I’m getting this error when the app crashes:

App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.

The video I’m using is the official sample available here, but I’ve tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not the files but something on the RealityKit side: https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video

Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run. Look at the logs.
- All code can be found in ImmersiveView.swift.
- The sample file is included in the project.

Expected results: if I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.

Am I doing something wrong in the code? I'm basically following the documentation here.

Feedback ticket: FB19971306
Replies: 3 · Boosts: 0 · Views: 715 · Activity: 1w
How to get a callback once a requested frameDuration change has been applied?
When changing a camera's exposure, AVFoundation provides a callback that reports the timestamp of the first frame captured with the new exposure duration: AVCaptureDevice.setExposureModeCustom(duration:iso:completionHandler:). I want an equivalent callback when changing the frame duration. After setting AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration to a new value, how can I determine the index or the timestamp of the first camera frame that was captured using the newly set frame duration?
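There is no completion handler for activeVideoMin/MaxFrameDuration, so as a hedged workaround sketch (assumed names, not an official mechanism), one option is to watch the presentation timestamps delivered by an AVCaptureVideoDataOutput and record the first frame whose spacing from the previous frame matches the requested duration:

import AVFoundation
import CoreMedia

final class FrameDurationWatcher: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    /// The frame duration that was just requested (hypothetical example: 30 fps).
    var target = CMTime(value: 1, timescale: 30)
    private var lastPTS: CMTime?
    private var reported = false

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        defer { lastPTS = pts }
        guard let last = lastPTS else { return }
        let delta = CMTimeSubtract(pts, last)
        // Capture timestamps jitter slightly, so allow a small tolerance.
        if !reported, abs(CMTimeGetSeconds(delta) - CMTimeGetSeconds(target)) < 0.001 {
            reported = true
            print("First frame at the new duration, PTS:", CMTimeGetSeconds(pts))
        }
    }
}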
Replies: 0 · Boosts: 0 · Views: 460 · Activity: 2w
Implementation of Audio-Video Synchronization in Swift
I have a feature requirement: switch the writer used for file writing every 5 minutes, then quickly merge the last two files. How can I ensure that the merged file is combined seamlessly and that the audio and video stay synchronized? Currently the merged video has glitches and the audio drifts out of sync. If anyone has expertise in this area and can suggest a solution, I would be extremely grateful.
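A minimal merging sketch, assuming two already-finalized QuickTime segments (the file URLs are placeholders): stitching them with AVMutableComposition and a passthrough export avoids re-encoding, and keeping one insertion cursor for all tracks is what keeps audio and video aligned across the splice. Glitches at the boundary usually point to the segments themselves not starting or ending on clean sample boundaries, which this sketch does not fix.

import AVFoundation

func mergeSegments(_ firstURL: URL, _ secondURL: URL, into outputURL: URL) async throws {
    let composition = AVMutableComposition()
    var cursor = CMTime.zero

    for url in [firstURL, secondURL] {
        let asset = AVURLAsset(url: url)
        let duration = try await asset.load(.duration)
        let range = CMTimeRange(start: .zero, duration: duration)

        // Append every track (audio and video) at the same cursor so the
        // two media types stay aligned with each other after the splice.
        for track in try await asset.load(.tracks) {
            let destination = composition.tracks(withMediaType: track.mediaType).first
                ?? composition.addMutableTrack(withMediaType: track.mediaType,
                                               preferredTrackID: kCMPersistentTrackID_Invalid)
            try destination?.insertTimeRange(range, of: track, at: cursor)
        }
        cursor = CMTimeAdd(cursor, duration)
    }

    // Passthrough keeps the original encoded samples; no re-encode, no quality loss.
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    await withCheckedContinuation { continuation in
        export.exportAsynchronously { continuation.resume() }
    }
}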
Replies: 1 · Boosts: 0 · Views: 187 · Activity: 2w
Live Photos created with PHLivePhoto API show "Motion not available" when setting as wallpaper
I'm creating Live Photos programmatically in my app using the Photos and AVFoundation frameworks. While the Live Photos work perfectly in the Photos app (long press shows motion), users cannot set them as motion wallpapers. The system shows a "Motion not available" message.

Here's my approach for creating Live Photos:

// 1. Create video with required metadata
let writer = try AVAssetWriter(outputURL: videoURL, fileType: .mov)
let contentIdentifier = AVMutableMetadataItem()
contentIdentifier.identifier = .quickTimeMetadataContentIdentifier
contentIdentifier.value = assetIdentifier as NSString
writer.metadata = [contentIdentifier]
// Video settings: 882x1920, H.264, 30fps, 2 seconds
// Added still-image-time metadata at middle frame

// 2. Create HEIC image with asset identifier
var makerAppleDict: [String: Any] = [:]
makerAppleDict["17"] = assetIdentifier // Required key for Live Photo
metadata[kCGImagePropertyMakerAppleDictionary as String] = makerAppleDict

// 3. Generate Live Photo
PHLivePhoto.request(
    withResourceFileURLs: [photoURL, videoURL],
    placeholderImage: nil,
    targetSize: .zero,
    contentMode: .aspectFit
) { livePhoto, info in
    // Success - Live Photo created
}

// 4. Save to Photos library
PHAssetCreationRequest.forAsset().addResource(with: .photo, fileURL: photoURL, options: nil)
PHAssetCreationRequest.forAsset().addResource(with: .pairedVideo, fileURL: videoURL, options: nil)

What I've tried:
- Matching exact video specifications from the Camera app (882x1920, H.264, 30fps)
- Adding all documented metadata (content identifier, still-image-time)
- Testing various video durations (1.5s, 2s, 3s)
- Different image formats (HEIC, JPEG)
- Comparing with exiftool against working Live Photos

Expected behavior: Live Photos created programmatically should be eligible for motion wallpapers, just like those from the Camera app.

Actual behavior: the system shows "Motion not available" and only allows setting as a static wallpaper.

Any insights or workarounds would be greatly appreciated. This is affecting our users who want to use their created content as wallpapers.

Questions:
1. Are there additional undocumented requirements for Live Photos to be wallpaper-eligible?
2. Is this a deliberate restriction for third-party apps, or a bug?
3. Has anyone successfully created Live Photos that work as motion wallpapers?

Environment:
- iOS 17.0 - 18.1
- Xcode 16.0
- Tested on iPhone 16 Pro
Replies: 1 · Boosts: 1 · Views: 221 · Activity: 2w
MIDI output from Standalone MIDI Processor Demo App to DAW
I am trying to get MIDI output from the AU Host demo app using the recent MIDI processor example. The processor works correctly in Logic Pro, but I cannot send MIDI from the AUv3 extension in standalone mode using the default host app to another program (e.g., Ableton).

The MIDI manager, which is part of the standalone host app, works fine, and I can send MIDI using it directly: Ableton receives it without issues. I have already set the midiOutputNames in the extension, and the midiOutBlock is mapped. However, the MIDI data from the AUv3 extension does not reach Ableton in standalone mode. I suspect the issue is that midiOutBlock might never be called in the plugin, or perhaps an input to the plugin is missing, which prevents it from sending MIDI. I am currently using the default routing. I have modified the MIDI manager such that it works well as described above.

Here is a part of my code for SimplePlayEngine.swift and MIDIManager.swift for reference:

@MainActor
@Observable
public class SimplePlayEngine {
    private let midiOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in
        return noErr
    }

    var scheduleMIDIEventListBlock: AUMIDIEventListBlock? = nil

    public init() {
        engine.attach(player)
        engine.prepare()
        setupMIDI()
    }

    private func setupMIDI() {
        if !MIDIManager.shared.setupPort(midiProtocol: MIDIProtocolID._2_0, receiveBlock: { [weak self] eventList, _ in
            if let scheduleMIDIEventListBlock = self?.scheduleMIDIEventListBlock {
                _ = scheduleMIDIEventListBlock(AUEventSampleTimeImmediate, 0, eventList)
            }
        }) {
            fatalError("Failed to setup Core MIDI")
        }
    }

    func initComponent(type: String, subType: String, manufacturer: String) async -> ViewController? {
        reset()
        guard let component = AVAudioUnit.findComponent(type: type, subType: subType, manufacturer: manufacturer) else {
            fatalError("Failed to find component with type: \(type), subtype: \(subType), manufacturer: \(manufacturer))")
        }
        do {
            let audioUnit = try await AVAudioUnit.instantiate(
                with: component.audioComponentDescription,
                options: AudioComponentInstantiationOptions.loadOutOfProcess)
            self.avAudioUnit = audioUnit
            self.connect(avAudioUnit: audioUnit)
            return await audioUnit.loadAudioUnitViewController()
        } catch {
            return nil
        }
    }

    private func startPlayingInternal() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        setSessionActive(true)
        if avAudioUnit.wantsAudioInput {
            scheduleEffectLoop()
        }
        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        do {
            try engine.start()
        } catch {
            isPlaying = false
            fatalError("Could not start engine. error: \(error).")
        }
        if avAudioUnit.wantsAudioInput {
            player.play()
        }
        isPlaying = true
    }

    private func resetAudioLoop() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        if avAudioUnit.wantsAudioInput {
            guard let format = file?.processingFormat else {
                fatalError("No AVAudioFile defined.")
            }
            engine.connect(player, to: engine.mainMixerNode, format: format)
        }
    }

    public func connect(avAudioUnit: AVAudioUnit?, completion: @escaping (() -> Void) = {}) {
        guard let avAudioUnit = self.avAudioUnit else { return }
        engine.disconnectNodeInput(engine.mainMixerNode)
        resetAudioLoop()
        engine.detach(avAudioUnit)

        func rewiringComplete() {
            scheduleMIDIEventListBlock = auAudioUnit.scheduleMIDIEventListBlock
            if isPlaying {
                player.play()
            }
            completion()
        }

        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        if isPlaying {
            player.pause()
        }

        let auAudioUnit = avAudioUnit.auAudioUnit
        if !auAudioUnit.midiOutputNames.isEmpty {
            auAudioUnit.midiOutputEventBlock = midiOutBlock
        }

        engine.attach(avAudioUnit)
        if avAudioUnit.wantsAudioInput {
            engine.disconnectNodeInput(engine.mainMixerNode)
            if let format = file?.processingFormat {
                engine.connect(player, to: avAudioUnit, format: format)
                engine.connect(avAudioUnit, to: engine.mainMixerNode, format: format)
            }
        } else {
            let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2)
            engine.connect(avAudioUnit, to: engine.mainMixerNode, format: stereoFormat)
        }
        rewiringComplete()
    }
}

and my MIDI Manager:

@MainActor
class MIDIManager: Identifiable, ObservableObject {
    func setupPort(midiProtocol: MIDIProtocolID, receiveBlock: @escaping @Sendable MIDIReceiveBlock) -> Bool {
        guard setupClient() else { return false }
        if MIDIInputPortCreateWithProtocol(client, portName, midiProtocol, &port, receiveBlock) != noErr {
            return false
        }
        for source in self.sources {
            if MIDIPortConnectSource(port, source, nil) != noErr {
                print("Failed to connect to source \(source)")
                return false
            }
        }
        setupVirtualMIDIOutput()
        return true
    }

    private func setupVirtualMIDIOutput() {
        let virtualStatus = MIDISourceCreate(client, virtualSourceName, &virtualSource)
        if virtualStatus != noErr {
            print("❌ Failed to create virtual MIDI source: \(virtualStatus)")
        } else {
            print("✅ Created virtual MIDI source: \(virtualSourceName)")
        }
    }

    func sendMIDIData(_ data: [UInt8]) {
        print("hey")
        var packetList = MIDIPacketList()
        withUnsafeMutablePointer(to: &packetList) { ptr in
            let pkt = MIDIPacketListInit(ptr)
            _ = MIDIPacketListAdd(ptr, 1024, pkt, 0, data.count, data)
            if virtualSource != 0 {
                let status = MIDIReceived(virtualSource, ptr)
                if status != noErr {
                    print("❌ Failed to send MIDI data: \(status)")
                } else {
                    print("✅ Sent MIDI data: \(data)")
                }
            }
        }
    }
}
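One hedged observation on the code above (an educated guess, not a confirmed diagnosis): the midiOutBlock assigned to auAudioUnit.midiOutputEventBlock simply returns noErr and never forwards the bytes to MIDIManager, so even if the extension does emit MIDI, nothing would reach the virtual source or Ableton. A minimal forwarding sketch, reusing MIDIManager.shared.sendMIDIData(_:) from the post:

// Hypothetical rewiring: hand the AU's MIDI output to the host's virtual source
// instead of discarding it. The closure signature matches AUMIDIOutputEventBlock.
private lazy var midiOutBlock: AUMIDIOutputEventBlock = { _, _, length, bytes in
    let data = [UInt8](UnsafeBufferPointer(start: bytes, count: length))
    Task { @MainActor in
        MIDIManager.shared.sendMIDIData(data)   // the post's existing send path
    }
    return noErr
}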
Replies: 0 · Boosts: 0 · Views: 262 · Activity: 3w
AVCaptureMetadataOutput .face detection not working on iOS 26 Beta with high sessionPreset
In iOS 26 (Developer Beta), the AVCaptureMetadataOutputObjectsDelegate no longer receives callbacks when metadataOutput.metadataObjectTypes = [.face] is set. On earlier iOS versions the issue does not occur. Interestingly, face detection works if I set the sessionPreset to .medium, but not with .high — except on the iPhone 16 Pro Max, where it works regardless.
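For anyone trying to reproduce this, here is a minimal sketch of the configuration described above (the delegate object and some error handling are assumed/omitted); per the report, the only variable that seems to matter is the sessionPreset value:

import AVFoundation

func makeFaceDetectionSession(delegate: AVCaptureMetadataOutputObjectsDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.sessionPreset = .high   // reportedly fails on iOS 26 beta; .medium still delivers faces

    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) else {
        throw NSError(domain: "NoCamera", code: -1)
    }
    session.addInput(try AVCaptureDeviceInput(device: camera))

    let metadataOutput = AVCaptureMetadataOutput()
    session.addOutput(metadataOutput)                        // add before choosing types
    metadataOutput.setMetadataObjectsDelegate(delegate, queue: .main)
    metadataOutput.metadataObjectTypes = [.face]             // the case that stops firing

    return session
}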
Replies: 1 · Boosts: 0 · Views: 159 · Activity: 2w
Issue with AirPlay for DRM videos
When I try to send a DRM-protected video via AirPlay to an Apple TV, the license request is made twice instead of once, as it normally is on iOS. We only allow one request per session for security reasons, so the second request fails and the video won't play. We've tested DRM-protected videos without token usage limits and those work, but allowing multiple requests creates a security hole in our system. Why is the license requested twice in func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest)? Is there a way to prevent this?
Replies: 0 · Boosts: 0 · Views: 184 · Activity: 3w
AVAudioSessionCategoryOptionAllowBluetooth incorrectly marked as deprecated in iOS 8 in iOS 26 beta 5
In iOS 26 beta 5, AVAudioSessionCategoryOptionAllowBluetooth is marked as deprecated as of iOS 8, even though the option was not deprecated in iOS 18.6. I think this is a mistake and the deprecation actually starts in iOS 26. Am I right? It seems that the substitute for this option is AVAudioSessionCategoryOptionAllowBluetoothHFP. The documentation does not make clear whether the behaviour is exactly the same or whether any difference should be expected. Has anyone used this option in iOS 26? Should I expect any difference from the current behaviour of AVAudioSessionCategoryOptionAllowBluetooth? Thank you.
Replies: 2 · Boosts: 0 · Views: 203 · Activity: 3w
CMFormatDescription.audioStreamBasicDescription has wrong or unexpected sample rate for audio channels with different sample rates
In my app I use AVAssetReaderTrackOutput to extract PCM audio from a user-provided video or audio file and display it as a waveform. Recently a user reported that the waveform is not in sync with his video, and after receiving the video I noticed that the waveform is in fact double as long as the video duration, i.e. it shows the audio in slow-motion, so to speak.

Until now I was using CMFormatDescription.audioStreamBasicDescription.mSampleRate, which for this particular user video returns 22'050. But in this case it seems that this value is wrong... because the audio file has two audio channels with different sample rates, as returned by

CMFormatDescription.audioFormatList.map({ $0.mASBD.mSampleRate })

The first channel has a sample rate of 44'100, the second one 22'050. If I use the first sample rate, the waveform is perfectly in sync with the video.

The problem is given by the fact that the ratio between the audio data length and the sample rate multiplied by the audio duration is 8, double the ratio for the first audio file (4). In the code below this ratio is given by

Double(length) / (sampleRate * asset.duration.seconds)

When commenting out the line with the sampleRate variable definition in the code below and uncommenting the following line, the ratios for both audio files are 4, which is the expected result.

I would expect audioStreamBasicDescription to return the correct sample rate, i.e. the one used by AVAssetReaderTrackOutput, which (I think) somehow merges the stereo tracks. The documentation is sparse, and in particular it's not documented whether the lower or higher sample rate is used; in this case, it seems like the higher one is used, but audioStreamBasicDescription for some reason returns the lower one.

Does anybody know why this is the case or how I should extract the sample rate of the produced PCM audio data? Should I always take the higher one? I created FB19620455.

let openPanel = NSOpenPanel()
openPanel.allowedContentTypes = [.audiovisualContent]
openPanel.runModal()
let url = openPanel.urls[0]
let asset = AVURLAsset(url: url)
let assetTrack = asset.tracks(withMediaType: .audio)[0]
let assetReader = try! AVAssetReader(asset: asset)
let readerOutput = AVAssetReaderTrackOutput(track: assetTrack, outputSettings: [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsBigEndianKey: false,
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMIsNonInterleaved: false
])
readerOutput.alwaysCopiesSampleData = false
assetReader.add(readerOutput)

let formatDescriptions = assetTrack.formatDescriptions as! [CMFormatDescription]
let sampleRate = formatDescriptions[0].audioStreamBasicDescription!.mSampleRate
//let sampleRate = formatDescriptions[0].audioFormatList.map({ $0.mASBD.mSampleRate }).max()!
print(formatDescriptions[0].audioStreamBasicDescription!.mSampleRate)
print(formatDescriptions[0].audioFormatList.map({ $0.mASBD.mSampleRate }))

if !assetReader.startReading() {
    preconditionFailure()
}

var length = 0
while assetReader.status == .reading {
    guard let sampleBuffer = readerOutput.copyNextSampleBuffer(), let blockBuffer = sampleBuffer.dataBuffer else {
        break
    }
    length += blockBuffer.dataLength
}

print(Double(length) / (sampleRate * asset.duration.seconds))
Replies: 0 · Boosts: 1 · Views: 85 · Activity: Aug ’25
Getting error 'Can't Decode' when exporting a video file via AVAssetExportSession
I'm working on a video player app with the basic functionality of viewing a video, then trimming and cropping it, and saving the result. Trimming a video and then saving it works well with every video I've tried. Cropping, however, fails at the save step: I am unable to export the cropped video. Whenever I crop a video, the video player shows the cropped version (and it plays), but on saving that video I get the error: Export failed with status: 4, error: Cannot Decode. I've been debugging for 2 days now but I'm still unsure why this happens. I'm almost certain the bug is somewhere in the crop-then-export path. If anyone has dealt with this before, please let me know what the best next step is! If you could help me refine the flow for cropping and exporting, that'd be really helpful too. Thanks!
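Since the post doesn't show the export code, here is a hedged sketch of the usual crop-at-export approach (all names and the crop rect are placeholders): a crop that only exists in the preview layer never reaches AVAssetExportSession, so the exporter needs its own AVMutableVideoComposition with the crop baked into renderSize and a transform.

import AVFoundation

func exportCropped(asset: AVAsset, cropRect: CGRect, to outputURL: URL) async throws {
    let videoTrack = try await asset.loadTracks(withMediaType: .video)[0]
    let duration = try await asset.load(.duration)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: duration)

    // Shift the video so the crop rect's origin lands at (0, 0) of the output.
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    layerInstruction.setTransform(CGAffineTransform(translationX: -cropRect.origin.x,
                                                    y: -cropRect.origin.y), at: .zero)
    instruction.layerInstructions = [layerInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = cropRect.size                  // the cropped output size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.instructions = [instruction]

    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality) else { return }
    session.videoComposition = videoComposition
    session.outputURL = outputURL
    session.outputFileType = .mp4
    await withCheckedContinuation { continuation in
        session.exportAsynchronously { continuation.resume() }
    }
    if let error = session.error { throw error }
}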
Replies: 0 · Boosts: 0 · Views: 61 · Activity: Aug ’25
Screen Recorder plus Front Camera
I want to build an app for iOS using React Native, preferably Expo. The app will be for recording user experiences with technology: the SLUDGE that they face while navigating through technology. I want basic login and signup. The main feature would be two recording modes. The first records the screen and the front camera simultaneously. The second records the back camera and the front camera simultaneously. I can then combine the two outputs (the screen recording and the front-camera clip) later in post-processing. I want to know if this is possible, as I was told that React Native and Expo do not have support for this yet. If not, is there any library or another approach to make this app come alive?
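Not a React Native/Expo answer, but a hedged sketch of the native building blocks a custom module could wrap, to show the capability exists at the OS level: ReplayKit can record the screen with a front-camera overlay, and AVCaptureMultiCamSession can run the front and back cameras together on supported hardware. The calls below are real APIs; the bridging into React Native is left out.

import ReplayKit
import AVFoundation

// Mode 1: screen recording with the front camera available as an overlay view.
let recorder = RPScreenRecorder.shared()
recorder.isCameraEnabled = true          // exposes recorder.cameraPreviewView to composite into the UI
recorder.isMicrophoneEnabled = true
recorder.startRecording { error in
    if let error { print("Screen recording failed:", error) }
}

// Mode 2: front + back camera at the same time needs AVCaptureMultiCamSession,
// which only works on devices that support it.
if AVCaptureMultiCamSession.isMultiCamSupported {
    let session = AVCaptureMultiCamSession()
    // ...add a front and a back AVCaptureDeviceInput plus separate outputs here...
    session.startRunning()
}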
Replies: 0 · Boosts: 0 · Views: 46 · Activity: Aug ’25
AVAssetResourceLoaderDelegate and CoreMediaErrorDomain -12881 When Playing HLS Audio
I am developing an app that plays HLS audio. When using AVPlayerItem with AVURLAsset, can AVAssetResourceLoaderDelegate correctly handle HLS segments? My goal is to use AVAssetResourceLoaderDelegate to add authentication HTTP headers when accessing HLS .m3u8 and .ts files. I can successfully download the files, but playback fails with errors. Specifically, I am observing the following cases:

A. AVAssetResourceLoaderDelegate is canceled, and CoreMediaErrorDomain -12881 occurs
1. In NSURLConnectionDataDelegate’s didReceiveResponse method, set contentInformationRequest
2. In didReceiveData, call dataRequest respondWithData
3. resourceLoader didCancelLoadingRequest is called
4. CoreMediaErrorDomain -12881 occurs

B. CoreMediaErrorDomain -12881 occurs
1. In NSURLConnectionDataDelegate’s didReceiveResponse method, set contentInformationRequest
2. In connection didReceiveData, buffer all received data until the end
3. In connectionDidFinishLoading, pass the buffered data to respondWithData
4. Call loadingRequest finishLoading
5. CoreMediaErrorDomain -12881 occurs

In both cases, dataRequest.requestsAllDataToEndOfResource is YES. For this use case, I am not using AVURLAssetHTTPHeaderFieldsKey because I need to apply the most up-to-date authentication data at the moment each file is accessed. I would appreciate any advice or suggestions you might have. Thank you in advance!
Replies: 0 · Boosts: 1 · Views: 91 · Activity: Aug ’25
Unexpected AVAudioSession behavior after iOS 18.5 causing audio loss in VoIP calls
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm if there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.

func configureForVoIPCall() throws {
    try setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
    try setActive(true)
}
Replies: 1 · Boosts: 0 · Views: 133 · Activity: Aug ’25
How to detect when iOS Camera app starts video recording (with Allow Audio Playback ON)?
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.

➡️ The problem: my app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app’s audio, still playing in the background, gets captured in the video, which is obviously unintended behavior. Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins. So far, I haven’t found a reliable way to detect when video recording starts if “Allow Audio Playback” is ON.

➡️ What I’ve tried:
- AVAudioSession.interruptionNotification: doesn’t fire
- devicesChangedEventStream: not triggered

I don’t want to request mic permission (the app doesn’t use the mic). Also, disabling the app’s background audio isn’t an option, as it is a crucial part of the user experience.

➡️ What I need: a reliable, supported way to detect when the Camera app begins video recording, without requiring mic access, so I can stop audio and avoid unintentional overlap with the user’s recordings. Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated. Thanks.
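One avenue that may be worth testing, offered as a hedged sketch rather than a confirmed answer: AVAudioSession.silenceSecondaryAudioHintNotification is the documented signal that another app’s primary audio has started. Whether the Camera app posts it while “Allow Audio Playback” is ON is exactly the open question above, so this is something to verify, not a guarantee.

import AVFoundation

// Observe the documented "silence secondary audio" hint and pause playback on .begin.
let observer = NotificationCenter.default.addObserver(
    forName: AVAudioSession.silenceSecondaryAudioHintNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionSilenceSecondaryAudioHintTypeKey] as? UInt,
          let hint = AVAudioSession.SilenceSecondaryAudioHintType(rawValue: raw) else { return }
    if hint == .begin {
        // Pause this app's playback here; resume on .end if appropriate.
    }
}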
Replies: 0 · Boosts: 0 · Views: 176 · Activity: Aug ’25