AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts
MusicKit media player missing output device selection
Hi all, I am working on a DJ playout app (macOS). The app has a few AVAudioPlayerNodes combined with the ApplicationMusicPlayer from MusicKit. I can route the output of the AVAudioPlayerNodes to a hardware device so that the audio files are directed to their own dedicated output on my Mac. The ApplicationMusicPlayer, however, follows the default output, which is pretty annoying. Has anyone found a solution to chain the ApplicationMusicPlayer and set it to an output device? Thanks, Pancras
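For the AVAudioEngine side of this, a minimal sketch of pinning an engine's output to a specific device on macOS, assuming deviceID is an AudioDeviceID resolved elsewhere (e.g. via Core Audio's kAudioHardwarePropertyDevices); ApplicationMusicPlayer itself does not appear to expose any device selection:

import AVFoundation
import CoreAudio

func setOutputDevice(_ deviceID: AudioDeviceID, for engine: AVAudioEngine) throws {
    // Reach the output node's underlying audio unit and point it at the device.
    guard let unit = engine.outputNode.audioUnit else { return }
    var id = deviceID
    let status = AudioUnitSetProperty(unit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &id,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    guard status == noErr else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status), userInfo: nil)
    }
}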
0 replies · 0 boosts · 65 views · 1d
AVSampleBufferDisplayLayerContentLayer memory leaks.
I noticed that AVSampleBufferDisplayLayerContentLayer is not released when the AVSampleBufferDisplayLayer is removed and released. It is possible to reproduce the issue with this simple code:

import AVFoundation
import UIKit

class ViewController: UIViewController {
    var displayBufferLayer: AVSampleBufferDisplayLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        let displayBufferLayer = AVSampleBufferDisplayLayer()
        displayBufferLayer.videoGravity = .resizeAspectFill
        displayBufferLayer.frame = view.bounds
        view.layer.insertSublayer(displayBufferLayer, at: 0)
        self.displayBufferLayer = displayBufferLayer
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            self.displayBufferLayer?.flush()
            self.displayBufferLayer?.removeFromSuperlayer()
            self.displayBufferLayer = nil
        }
    }
}

In my real project I have multiple AVSampleBufferDisplayLayers created and removed in different view controllers, which is problematic because the number of leaked AVSampleBufferDisplayLayerContentLayers keeps increasing. I wonder whether I should use a pool of AVSampleBufferDisplayLayers and reuse them, but I'm slightly afraid that this could also lead to strange bugs. Edit: it doesn't leak on an iOS 18 device, but it does leak on an iPad Pro running iOS 17.5.1.
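A minimal sketch of the reuse-pool idea mentioned above; the pool type and its behavior are assumptions, not a verified fix for the leak:

import AVFoundation

final class DisplayLayerPool {
    private var free: [AVSampleBufferDisplayLayer] = []

    // Hand out a recycled layer when one is available, otherwise create one.
    func dequeue() -> AVSampleBufferDisplayLayer {
        return free.popLast() ?? AVSampleBufferDisplayLayer()
    }

    // Reset the layer before parking it for reuse.
    func enqueue(_ layer: AVSampleBufferDisplayLayer) {
        layer.flushAndRemoveImage()
        layer.removeFromSuperlayer()
        free.append(layer)
    }
}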
2 replies · 0 boosts · 175 views · 1d
AVAudioEngine Hangs/Locks Apps After Call to -connect:to:format:
Periodically when testing I run into a situation where the app hangs and beach-balls forever when using AVAudioEngine. This seems to get logged when the hang happens. Now, when this happens, if I pause the debugger it is hanging at a call to:

[engine connect:playerNode to:engine.mainMixerNode format:buffer.format];

#0  0x000000019391ca9c in __psynch_mutexwait ()
#1  0x0000000104d49100 in _pthread_mutex_firstfit_lock_wait ()
#2  0x0000000104d49014 in _pthread_mutex_firstfit_lock_slow ()
#3  0x00000001938928ec in std::__1::recursive_mutex::lock ()
#4  0x00000001ef80e988 in CADeprecated::RealtimeMessenger::_PerformPendingMessages ()
#5  0x00000001ef818868 in AVAudioNodeTap::Uninitialize ()
#6  0x00000001ef7fdc68 in AUGraphNodeBase::Uninitialize ()
#7  0x00000001ef884f38 in AVAudioEngineGraph::PerformCommand ()
#8  0x00000001ef88e780 in AVAudioEngineGraph::_Connect ()
#9  0x00000001ef8b7e70 in AVAudioEngineImpl::Connect ()
#10 0x00000001ef8bc05c in -[AVAudioEngine connect:to:format:] ()

Currently all my audio-engine-related calls are on the main queue (though I am curious about this: https://forums.developer.apple.com/forums/thread/123540?answerId=816827022#816827022). In any case, does anyone know where I'm going wrong here?
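One mitigation worth trying, sketched under assumptions (the queue and function names are hypothetical, and this is not a confirmed fix): funnel every engine mutation through a single serial queue so connect/disconnect calls cannot interleave with tap teardown.

import AVFoundation

let audioQueue = DispatchQueue(label: "audio.engine.serial")

func connectPlayer(_ playerNode: AVAudioPlayerNode,
                   to engine: AVAudioEngine,
                   format: AVAudioFormat) {
    audioQueue.async {
        // All other engine mutations (attach, disconnect, start/stop,
        // tap installation/removal) would go through this same queue.
        engine.connect(playerNode, to: engine.mainMixerNode, format: format)
    }
}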
6 replies · 0 boosts · 145 views · 12h
HLS CMAF/fMP4 CENC CBCS pattern encryption
Hello, I'm writing a program to create CMAF-compliant HLS files with encryption. I have a copy of ISO/IEC 23001-7:2023 and am attempting to follow the spec. I am using 1:9 pattern encryption with CBCS, so for every 16 bytes of encrypted NAL unit data (of types 1 and 5) there are 144 bytes of clear data. When testing my output in Safari with 'identity' keys (see "Quickly Diagnosing Content Key and IV Issues"), Safari requests the identity key from my test server and the first few bytes of the CMAF renditions, but will not play, and the console gives no clues to the error. I am setting the subsample bytes-of-clear/protected data in the senc boxes. What I'm not sure of is whether HLS/Safari/iOS honors the senc/saiz/saio boxes of the MP4 at all. Other third-party packagers, such as Bento4, suggest that they do not:

"those clients ignore the explicit encryption layout metadata found in saio/saiz boxes, and instead rely purely on the video slice header size to determine the portions of the sample that is encrypted"

So now I'm fairly sure I need to determine the video slice header size and apply the protected blocks from that point on. My question is: is that all there is to it? And is there a better way to debug my output? mediastreamvalidator will only work against unencrypted variants (which I'm outputting okay). Thanks in advance!
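A sketch of the subsample split that quote implies, assuming sliceHeaderSize has been parsed from the bitstream elsewhere; with cbcs, the 1:9 pattern is then applied from the start of the protected range per ISO/IEC 23001-7:

struct Subsample {
    var bytesOfClearData: UInt32
    var bytesOfProtectedData: UInt32
}

// Everything up to and including the slice header stays clear; the rest of
// the NAL unit is the protected range the cbcs pattern walks over.
func subsample(nalUnitLength: Int, sliceHeaderSize: Int) -> Subsample {
    let clear = min(sliceHeaderSize, nalUnitLength)
    let protectedCount = nalUnitLength - clear
    return Subsample(bytesOfClearData: UInt32(clear),
                     bytesOfProtectedData: UInt32(protectedCount))
}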
0 replies · 0 boosts · 144 views · 1w
Custom AVAssetResourceLoaderDelegate on iOS 15 fails to load large files
In our app we have implemented an AVAssetResourceLoaderDelegate to handle encrypted downloaded files. We have it working on all iOS versions, but we are seeing issues on iOS 15 (15.8.3) with large files (> 1 GB). We have so far seen two cases: either the load method on the AVURLAsset fails early and throws an unknown error code, or it starts requesting more data than the device has available RAM. The CPU usage is almost always over 100%, even after pausing playback. The memory issue can happen even after the player has successfully started playback. When running this on devices on iOS 16 and above, we set isEntireLengthAvailableOnDemand to true on the AVAssetResourceLoadingContentInformationRequest. This seems to be key to solving the issue on the devices that support it; if we set the property to false, we see the same memory issue as on iOS 15. So we have a solution for iOS 16 and upwards but are at a loss for how to handle iOS 15. Is there something we have overlooked, or is it in fact an issue with that iOS version?
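For reference, a minimal sketch of the iOS 16+ path described above; the content type, total length, and surrounding delegate plumbing are assumed:

import AVFoundation

func fillContentInformation(_ infoRequest: AVAssetResourceLoadingContentInformationRequest,
                            contentType: String, totalLength: Int64) {
    infoRequest.contentType = contentType
    infoRequest.contentLength = totalLength
    infoRequest.isByteRangeAccessSupported = true
    if #available(iOS 16.0, *) {
        // Tells the loader the whole resource can be served on demand,
        // which avoids the unbounded read-ahead seen on iOS 15.
        infoRequest.isEntireLengthAvailableOnDemand = true
    }
}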
0 replies · 0 boosts · 133 views · 1w
Extrinsic matrix
Hi everyone, I am working on a 3D reconstruction project. Recently I have been able to retrieve the intrinsics from the two cameras on the back of my iPhone. One consideration is that I want this app to run even if there is no LiDAR, as long as there are at least two cameras on the back; if there is a LiDAR, that is something I plan to incorporate later in the project. I am using an AVCaptureSession with the two AVCaptureDevice cameras:

builtInWideAngleCamera
builtInUltraWideCamera

The intrinsic matrices seem to be correct. However, when I retrieve the extrinsics, e.g., builtInWideAngleCamera w.r.t. builtInUltraWideCamera, the matrix I get looks like this:

Extrinsic Matrix (Ultra-Wide to Wide):
[0.9999968,     0.0008149305,  -0.0023960583, 0.0]
[-0.0008256607, 0.9999896,     -0.0044807075, 0.0]
[0.002392382,   0.0044826716,   0.99998707,   0.0]
[-14.277955,   -8.135408e-10,  -0.3359985,    0.0]

The extrinsic matrix, of the form [R | t], seems correct in its rotational part, but the translational vector is ALL ZEROS, which would suggest that the cameras physically overlap; the last element is also not 1 (homogeneous coordinates). Has anyone encountered this 'issue' before? Is there a flaw in my reasoning or something I might be missing? Any comments are very much appreciated.
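A hedged decoding sketch: the data returned by AVCaptureDevice.extrinsicMatrix(from:to:) wraps a column-major matrix_float4x3, and simd dumps print column by column, so the printed last row may in fact be the translation column (in millimeters) rather than zeros. Assuming that layout:

import simd
import Foundation

func decodeExtrinsics(_ data: Data) -> (rotation: simd_float3x3, translationMM: SIMD3<Float>) {
    // The payload is a matrix_float4x3: three rotation columns plus one
    // translation column.
    let m = data.withUnsafeBytes { $0.load(as: matrix_float4x3.self) }
    let rotation = simd_float3x3(m.columns.0, m.columns.1, m.columns.2)
    return (rotation, m.columns.3)
}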
1 reply · 0 boosts · 177 views · 1w
Does tvOS Support the Multiview Feature?
I've seen the Multiview feature on tvOS that displays a small grid icon when available. However, I only see this functionality on visionOS, using AVMultiviewManager. Does a different name refer to this feature on tvOS? Relevant links:

https://www.reddit.com/r/appletv/comments/12opy5f/handson_with_the_new_multiview_split_screen/
https://www.pocket-lint.com/how-to-use-multiview-apple-tv/#:~:text=You'll%20see%20a%20grid,running%20at%20the%20same%20time.
1 reply · 0 boosts · 127 views · 1w
AVPlayer takes too much memory when playing AVComposition with multiple videos
I am creating an AVComposition and using it with an AVPlayer. The player works fine and doesn't consume much memory when I do not set playerItem.videoComposition. Here is the code that works without excessive memory usage:

func configurePlayer(composition: AVMutableComposition, videoComposition: AVVideoComposition) {
    player.pause()
    player.replaceCurrentItem(with: nil)
    let playerItem = AVPlayerItem(asset: composition)
    // Attach the new item before playing.
    player.replaceCurrentItem(with: playerItem)
    player.play()
}

However, when I add playerItem.videoComposition = videoComposition, as in the code below, the memory usage becomes excessive:

func configurePlayer(composition: AVMutableComposition, videoComposition: AVVideoComposition) {
    player.pause()
    player.replaceCurrentItem(with: nil)
    let playerItem = AVPlayerItem(asset: composition)
    playerItem.videoComposition = videoComposition
    player.replaceCurrentItem(with: playerItem)
    player.play()
}

Issue details: the memory usage seems to depend on the number of video tracks in the composition rather than their duration. For instance, two videos of 30 minutes each consume less memory than 40 videos of just 2 seconds each. The excessive memory usage shows up in the Other Processes section of Xcode's debug panel. For reference, 42 videos, each less than 30 seconds, use around 1.4 GB of memory. I'm struggling to understand why adding videoComposition causes such high memory consumption, especially since it happens even when no layer instructions are applied. Any insights on how to address this would be greatly appreciated. (Before/after screenshots attached.) I initially thought the problem might be due to having too many layer instructions in the video composition, but this doesn't seem to be the case: even when I set a videoComposition without any layer instructions, the memory consumption remains high.
2 replies · 0 boosts · 116 views · 1w
AVAssetWriterInput appendSampleBuffer failed with error -12780
I tried adding watermarks to recorded video. Appending sample buffers using AVAssetWriterInput's append method fails, and when I inspect the AVAssetWriter's error property I get the following:

Error Domain=AVFoundationErrorDomain Code=-11800 "This operation cannot be completed"
UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780),
NSLocalizedDescription=This operation cannot be completed,
NSUnderlyingError=0x302399a70 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}

As far as I can tell, -11800 indicates AVError.unknown; however, I have not been able to find information about the -12780 error code, which as far as I can tell is undocumented. Thanks! Here is the code
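-12780 isn't documented anywhere public; one frequent cause of append failures is ordering, so here is a minimal sketch of the expected AVAssetWriter sequence (function and parameter names assumed) for comparison:

import AVFoundation
import CoreMedia

func beginWriting(writer: AVAssetWriter, input: AVAssetWriterInput,
                  firstBuffer: CMSampleBuffer) {
    guard writer.startWriting() else { return }
    // The session must be started, at the first buffer's timestamp,
    // before any append.
    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(firstBuffer))
    guard input.isReadyForMoreMediaData else { return }
    if !input.append(firstBuffer) {
        // On failure, stop appending and inspect status/error once.
        print(writer.status.rawValue, writer.error ?? "no error object")
    }
}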
1 reply · 0 boosts · 165 views · 1d
Custom FairPlay DRM error handling mechanics
Hi, I have a use case where I'd like to handle and prevent automatic retries whenever certain errors occur during FairPlay content key requests. Here's the current flow:

1. The FairPlay certificate is requested and obtained from my server.
2. makeStreamingContentKeyRequestData is called on the keyRequest.
3. The license server returns a 403 along with a body containing a JSON with the detailed code and message.
4. The error is caught and handled properly by calling AVContentKeyRequest.processContentKeyResponseError.
5. The AVContentKeySession automatically retries up to 8 times by providing a new key request through public func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest).
6. My license server gets hit with 8 requests that will always result in a 403; these retries are useless.
7. My custom error is successfully caught later down the line through AVPlayerItem.observe(\.status), which is great.

Thing is, I'd like to catch the 403 error and prevent any retry from being made at step 5, ideally through public func contentKeySession(_ session: AVContentKeySession, contentKeyRequest keyRequest: AVContentKeyRequest, didFailWithError err: Error). I've looked for quite a while and just can't seem to find any way of achieving this. Is this not supported at all?
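There is no documented switch for this, but one workaround sketch, with assumed names (failedKeyIDs, PermanentKeyError): cache identifiers that already failed with a non-retryable status, and fail them fast when the session retries.

enum PermanentKeyError: Error { case denied }   // hypothetical error type

var failedKeyIDs = Set<String>()

func contentKeySession(_ session: AVContentKeySession,
                       didProvide keyRequest: AVContentKeyRequest) {
    if let id = keyRequest.identifier as? String, failedKeyIDs.contains(id) {
        // Short-circuit the retry instead of hitting the license server again.
        keyRequest.processContentKeyResponseError(PermanentKeyError.denied)
        return
    }
    // ...normal SPC/CKC exchange; on a 403, add the identifier to
    // failedKeyIDs before calling processContentKeyResponseError.
}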
0 replies · 8 boosts · 205 views · 2w
CoreMediaErrorDomain -12035 error when playing a FairPlay-protected HLS stream on iOS 18+ through the Apple Lightning AV Adapter
Our iOS/Apple TV video content playback app uses AVPlayer to play HLS video streams and supports both custom and system playback UIs. The FairPlay content key is retrieved using AVContentKeySession. AirPlay is supported too. When the iPhone is connected to a TV through the Lightning Apple Digital AV Adapter (A1438), the app is mirrored as expected. Problem: when using an iPhone or iPad on iOS 18.1.1, FairPlay-protected HLS streams are not played and a CoreMediaErrorDomain -12035 error is received by the AVPlayerItem. Also, once the issue has occurred, the mirroring freezes (the TV indefinitely displays the app's playback screen) although the app works fine on the iOS device. The content key retrieval works as expected (I can see that two content key requests are made by the system, probably one for the local playback and one for the adapter, as when AirPlaying) and the error is thrown after providing the AVContentKeyResponse. Unfortunately, and as far as I know, there is no documentation on CoreMediaErrorDomain errors, so I don't know what -12035 means.

The issue does not occur:

on an iPhone on iOS 17.7 (even with FairPlay-protected HLS streams)
when playing DRM-free video content (whatever the iOS version)
when using the USB-C AV Adapter (whatever the iOS version)

Also worth noting: the issue does not occur with other video playback apps such as Apple TV or Netflix, although I don't have any details on the kind of streams these apps play or the way the FairPlay content key is retrieved (if any), so I don't know if that is relevant.
2 replies · 0 boosts · 186 views · 2w
Constituent active device switching very slow on iPhone 16 Pro Models on focus changes
Hi all, we are in the business of scanning documents and barcodes with the camera systems of mobile devices. Since there is a wide variety of use cases, from scanning the tiniest barcodes and small business cards to scanning barcodes or large documents from far distances, we preferably rely on triple-camera devices, if available, with automatic constituent device switching. This approach used to work perfectly: depending on the zoom level (we prefer an initial zoom value of 2.0) and the focusing distance, the iPhone Pro models switched through the different camera systems at light speed, from ultra-wide to wide, tele, and back. No issues at all.

Unfortunately, the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow, and sometimes it does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then aiming at a very close object that is maybe 2" away. Here the iPhone 15 Pro always switches immediately to the ultra-wide camera, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.

Of course we assumed that our code was responsible for these issues, so we experimented with restricting the devices and so on. Then we stripped more and more configuration code, but nothing we tried improved the situation. So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Execute it on various iPhones and aim at a far distance (> 10 feet) and then quickly at a very close distance (< 5 inches). Here is a list of devices and our test results:

iPhone 15 Pro, iOS 17.6: very fast and reliable switching
iPhone 15 Pro, iOS 18.1: very fast and reliable switching
iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
iPhone 16 Pro, iOS 18.1: slow switching, unreliable
iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable

Questions:

Has anyone else seen this issue? And possibly found a workaround?
Is this behaviour intended on iPhone 16 Pro models?
Can we somehow improve the switching speed?

Furthermore, the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the constituent active device. Not dramatic, but compared to the other phones it looks like a glitch.

Thank you very much! Kind regards, Sebastian

import UIKit
import AVFoundation

class ViewController: UIViewController {
    var captureSession: AVCaptureSession!
    var captureDevice: AVCaptureDevice!
    var captureInput: AVCaptureInput!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var activePrimaryConstituentToken: NSKeyValueObservation?
    var zoomToken: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        checkPermissions()
        setupAndStartCaptureSession()
    }

    func checkPermissions() {
        let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        switch cameraAuthStatus {
        case .authorized:
            return
        case .denied:
            abort()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in
                if !authorized {
                    abort()
                }
            })
        case .restricted:
            abort()
        @unknown default:
            fatalError()
        }
    }

    func setupAndStartCaptureSession() {
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession = AVCaptureSession()
            self.captureSession.beginConfiguration()
            if self.captureSession.canSetSessionPreset(.photo) {
                self.captureSession.sessionPreset = .photo
            }
            self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
            self.setupInputs()
            DispatchQueue.main.async {
                self.setupPreviewLayer()
            }
            self.captureSession.commitConfiguration()
            self.captureSession.startRunning()
            self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in
                let type = device.activePrimaryConstituent!.deviceType.rawValue
                print("Device type: \(type)")
            })
            self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in
                let zoom = device.videoZoomFactor
                print("Zoom: \(zoom)")
            })
            let switchZoomFactor = 2.0
            DispatchQueue.main.async {
                self.setZoom(CGFloat(switchZoomFactor), animated: false)
            }
        }
    }

    func setupInputs() {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            captureDevice = device
        } else {
            fatalError("no back camera")
        }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
            fatalError("could not create input device from back camera")
        }
        if !captureSession.canAddInput(input) {
            fatalError("could not add back camera input to capture session")
        }
        captureInput = input
        captureSession.addInput(input)
    }

    func setupPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = self.view.layer.frame
    }

    func setZoom(_ value: CGFloat, animated: Bool) {
        guard let device = captureDevice else { return }
        let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor
        let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor
        let zoomValue = max(min(value, maxZoom), minZoom)
        let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
        do {
            try device.lockForConfiguration()
            if animated {
                device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
            } else {
                device.videoZoomFactor = zoomValue
            }
            device.unlockForConfiguration()
        } catch {
            return
        }
    }
}
4 replies · 2 boosts · 207 views · 1d
Checking authorization status of AVCaptureDevice or CLLocation Manager gives runtime warnings in iOS 18
I have the following code in my ObservableObject class, and recently Xcode started flagging purple-colored runtime issues with it (probably in iOS 18):

Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.

Here is the code:

@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1 (Performing I/O on the main thread can cause slow launches.)
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)
    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus
    super.init()
    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}

And also in the route-change notification handler of AVAudioSession.routeChangeNotification:

// Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)

I wonder how checking authorization status can cause these issues? What is the fix here?
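One way to quiet these warnings, sketched as a drop-in replacement for the init above (it mirrors the post's properties; the .notDetermined placeholders are an assumption, since the real statuses now arrive asynchronously): read the statuses off the main thread and publish the results back.

override init() {
    // Placeholders until the real values arrive asynchronously.
    cameraAuthorization = .notDetermined
    micAuthorization = .notDetermined
    photoLibAuthorization = .notDetermined
    locationAuthorization = .notDetermined
    locationManager = CLLocationManager()
    super.init()
    // locationAuthorization can instead be filled in by the
    // locationManagerDidChangeAuthorization(_:) delegate callback.
    locationManager.delegate = self
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        // These calls do I/O and IPC, so keep them off the main thread.
        let cam = AVCaptureDevice.authorizationStatus(for: .video)
        let mic = AVCaptureDevice.authorizationStatus(for: .audio)
        let photos = PHPhotoLibrary.authorizationStatus(for: .addOnly)
        DispatchQueue.main.async {
            self?.cameraAuthorization = cam
            self?.micAuthorization = mic
            self?.photoLibAuthorization = photos
        }
    }
}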
1 reply · 0 boosts · 177 views · 2w
Does the videoDeviceNotAvailableWithMultipleForegroundApps Interruption Occur on iPhones?
Hello, I am developing a service using capture sessions, and I have a question about something curious I've noticed. Occasionally I've been informed that the capture session stops working, and upon investigation I found records of the videoDeviceNotAvailableWithMultipleForegroundApps interruption on the devices. From what I've found in the documentation, it seems to occur due to multitasking capabilities, but I'm wondering whether there are any specific scenarios where this happens on iPhone devices. Here is the relevant documentation link: https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps I suspect it might have something to do with Picture in Picture (PiP) mode, but when I developed and tested a direct video-streaming PiP, the issue did not occur. Does anyone have insights on this matter or related experiences they could share?
0 replies · 0 boosts · 161 views · 2w
Why does a triple-camera device take photos faster than a single-camera device?
I found this phenomenon, and it can be reproduced reliably. If I use a triple-camera device to take a photo while the scene is moving, or while I move the phone (let's assume it moves horizontally), and I press the shutter when aiming at an object at time T, then the picture in the viewfinder at that moment is T0 and the photo produced corresponds to about T+100 ms. If I use a single-camera device, move the phone at the same speed, and press the shutter when aiming at the same object, the photo produced corresponds to about T+400 ms.

Let me describe the problem another way. Suppose a row of cards is placed horizontally on the table, numbered from left to right: 0, 1, 2, 3, 4, 5, 6... Now aim the camera at the number 0 and move it to the right at a uniform speed, so the numbers passing through the viewfinder keep increasing. When aiming at the number 5, press the shutter. With a triple camera, the photo will probably show 6, while with a single camera it will show about 9. This means the triple camera captures photos faster, but why is this the case? Any explanation?
2 replies · 0 boosts · 226 views · 2w
How to add CIFilter to AVCaptureDeferredPhotoProxy
Hello, I'm trying to implement deferred photo processing in my photo capture app. After I take a photo, I pass it through a CIFilter. Now, with deferred photo processing, where would I pass the resulting photo through the CIFilter, since there is no way for me to know when the system has finished processing a photo? If I have to do it in my app's foreground every time, how do I prevent a scenario where the user takes a photo, heads straight to the Photos app, and sees the image without the filter?
2 replies · 0 boosts · 230 views · 2w
How to save 4K60 ProRes Log video internally on iPhone?
Hello Apple engineers, Specific issue: I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format using the iPhone's internal storage. Here's what I have tried so far: I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and the ProRes codec. The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log. However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."

This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding. Here are my questions:

Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example, the iPhone 15 Pro Max)? Some third-party apps (e.g. Blackmagic 👍🏻) can save 4K60 ProRes Log video internally on iPhone.
If internal saving is supported, what additional configuration is needed for the AVCaptureSession, or what other technique bypasses this limitation?

If anyone has successfully saved 4K60 ProRes Log video to iPhone internal storage, your guidance would be highly appreciated. Thank you for your help!
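For comparison, a sketch of the iOS 17+ Apple Log plus ProRes configuration step, assuming device is the configured AVCaptureDevice and movieOutput the AVCaptureMovieFileOutput; whether 4K60 ProRes may then be written to internal storage is still enforced per device by the system.

if #available(iOS 17.0, *) {
    // Switch the device into Apple Log if the active format supports it.
    if device.activeFormat.supportedColorSpaces.contains(.appleLog) {
        do {
            try device.lockForConfiguration()
            device.activeColorSpace = .appleLog
            device.unlockForConfiguration()
        } catch {
            print("could not lock device: \(error)")
        }
    }
    // Request a ProRes codec on the movie output's video connection.
    if let connection = movieOutput.connection(with: .video),
       movieOutput.availableVideoCodecTypes.contains(.proRes422) {
        movieOutput.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.proRes422],
                                      for: connection)
    }
}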
0 replies · 0 boosts · 194 views · 3w