AVFoundation

RSS for tag

Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

362 Posts
Post not yet marked as solved
1 Reply
35 Views
I'm trying to read metadata from MXF files, without success: I get an empty array back from the AVAsset. I've seen mentions of "MTRegisterProfessionalVideoWorkflowFormatReaders", but there is absolutely no documentation and I don't know where to look. Has anyone encountered this? Please help with any information.
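An editor's sketch for anyone hitting the same wall: MXF falls under the "professional video workflow" formats, which reportedly must be opted into before creating assets. Assuming the MediaToolbox call named above (and its VideoToolbox sibling for decoders) behaves as its name suggests, registering once at startup may make AVAsset recognize MXF — undocumented behavior, so treat this as an assumption to test, not a guarantee:

import AVFoundation
import MediaToolbox
import VideoToolbox

// Opt in to professional video workflow format readers (MXF et al.)
// before creating any assets.
MTRegisterProfessionalVideoWorkflowFormatReaders()
VTRegisterProfessionalVideoWorkflowVideoDecoders()

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/clip.mxf"))
Task {
    let (metadata, tracks) = try await asset.load(.metadata, .tracks)
    print(metadata, tracks)
}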
Posted
by IGGGG.
Last updated
.
Post not yet marked as solved
1 Reply
59 Views
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. I am using the hd1920x1080 preset, but both the wide and ultra-wide photos are being delivered at the highest possible resolution (4224x2376). I've tried to disable every setting that suggests it should be using that 4K resolution rather than 1080p on the AVCapturePhotoOutput, AVCapturePhotoSettings and AVCaptureDevice, but nothing has worked. Some debugging that I've done:

When I turn off constituent photo delivery by commenting out the line of code below, I end up getting a single photo delivered at the 1080p resolution, as you'd expect:

// photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices

- I tried constituent photo delivery with .builtInDualCamera and got only 4K results (same as described above).
- I tried using an AVCaptureMultiCamSession with .builtInDualWideCamera and also only got 4K imagery.
- I inspected the resolved settings on photo.resolvedSettings.photoDimensions, and the dimensions suggest the imagery should be 1080p, but when I inspect the UIImage, it is always 4K:

guard let imageData = photo.fileDataRepresentation() else { return }
guard let capturedImage = UIImage(data: imageData) else { return }
print("photo.resolvedSettings.photoDimensions", photo.resolvedSettings.photoDimensions) // 1920x1080
print("capturedImage.size", capturedImage.size) // 4224x2376

Any help here would be greatly appreciated, because I've run out of things to try and documentation to follow 🙏
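An editor's note with a hedged sketch (an assumption to test, not a confirmed fix): since iOS 16, still-photo resolution is governed by maxPhotoDimensions rather than the session preset, and constituent photo delivery may be following the output's maximum instead of the preset. Clamping the dimensions explicitly on both the output and the per-capture settings might help:

// Assumption: constituent delivery honors maxPhotoDimensions even where
// it ignores the session preset. The chosen dimensions must match an
// entry in captureDevice.activeFormat.supportedMaxPhotoDimensions.
photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 1920, height: 1080)

let photoSettings = AVCapturePhotoSettings()
photoSettings.maxPhotoDimensions = CMVideoDimensions(width: 1920, height: 1080)
photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
photoOutput.capturePhoto(with: photoSettings, delegate: self)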
Posted
by nanders.
Last updated
.
Post not yet marked as solved
0 Replies
26 Views
I am implementing pan and zoom features for an app using a custom USB camera device, on iPadOS. I am using an update function (shown below) to apply transforms for scale and translation, but they are not working. By re-enabling the animation I can see that the scale transform seems to initially take effect, but then the image animates back to its original scale. This all happens in a fraction of a second, but I can see it. The translation transform seems to have no effect at all. Printing out the value of AVCaptureVideoPreviewLayer.transform before and after does show that my values have been applied.

private func updateTransform() {
#if false
    // Disable default animation.
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    defer { CATransaction.commit() }
#endif
    // Apply the transform.
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
    let transform = CATransform3DIdentity
    let translate = CATransform3DTranslate(transform, translationX, translationY, 0)
    let scale = CATransform3DScale(transform, scale, scale, 1)
    videoPreviewLayer.transform = CATransform3DConcat(translate, scale)
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
}

My question is this: how can I properly implement pan/zoom for an AVCaptureVideoPreviewLayer? Or even better, if you see a problem with my current approach or understand why the transforms I am applying do not work, please share that information.
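An editor's sketch of an alternative approach, assuming the preview layer is hosted in its own view: UIKit layout can overwrite a sublayer's transform on every layout pass, so transforming the hosting view (whose transform UIKit preserves) tends to be more robust. previewContainer, translationX, translationY and zoomScale are stand-ins for the poster's state:

// previewContainer: hypothetical UIView hosting the AVCaptureVideoPreviewLayer
// as a sublayer sized to its bounds.
private func updateTransform() {
    // UIView property changes are not implicitly animated outside animation
    // blocks, so no CATransaction gymnastics are needed here.
    previewContainer.transform = CGAffineTransform.identity
        .translatedBy(x: translationX, y: translationY)
        .scaledBy(x: zoomScale, y: zoomScale)
}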
Posted Last updated
.
Post not yet marked as solved
1 Reply
131 Views
In this code, I aim to enable users to select an image from their phone gallery and display it at reduced opacity on top of the view stack. The selected image should appear on top of the user's phone camera feed, allowing them to see both the canvas on which they are drawing and the low-opacity image. The app's purpose is to enable users to trace an image on the canvas while simultaneously seeing the camera feed.

CameraView.swift

import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let cameraPreview = CameraPreviewView()
        return cameraPreview
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.frame = bounds
                layer.addSublayer(previewLayer)
                captureSession.startRunning()
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}

Thanks for helping and your time.
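An editor's note on two likely issues in the snippet above, with a hedged sketch of the fix: the preview layer's frame is set once at init (when bounds may still be .zero) and never updated, and startRunning() blocks the calling thread, so it should move off main. An adjustment to CameraPreviewView:

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer?  // keep a reference

    // In setupCamera(), retain the layer and start the session off the
    // main thread (startRunning() blocks until the session is up):
    //     self.previewLayer = previewLayer
    //     DispatchQueue.global(qos: .userInitiated).async {
    //         self.captureSession.startRunning()
    //     }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Track the view's real size; at init time bounds may be .zero.
        previewLayer?.frame = bounds
    }
}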
Posted
by jhems.
Last updated
.
Post not yet marked as solved
0 Replies
143 Views
After upgrading to iOS 17, Thread Performance Checker is complaining of a priority inversion when converting a CVPixelBuffer to a UIImage through a CIImage instance. Might it be a false positive, or a real issue?

- (UIImage *)imageForSampleBuffer:(CMSampleBufferRef)sampleBuffer andOrientation:(UIImageOrientation)orientation {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    UIImage *uiImage = [UIImage imageWithCIImage:ciImage];
    NSData *data = UIImageJPEGRepresentation(uiImage, 90);
}

The code snippet above, when running on a thread set to the default priority, results in the message below:

Thread Performance Checker: Thread running at User-interactive quality-of-service class waiting on a lower QoS thread running at Default quality-of-service class. Investigate ways to avoid priority inversions
PID: 1188, TID: 723209
Backtrace
=================================================================
3   AGXMetalG14              0x0000000235c77cc8 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 2927816
4   AGXMetalG14              0x0000000235ccd784 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 3278724
5   AGXMetalG14              0x0000000235ccf6a4 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 3286692
6   MetalTools               0x000000022f758b68 E712D983-01AD-3FE5-AB66-E00ABF76CD7F + 568168
7   CoreImage                0x00000001a7c0e580 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 267648
8   CoreImage                0x00000001a7d0cc08 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 1309704
9   CoreImage                0x00000001a7c0e2e0 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 266976
10  CoreImage                0x00000001a7c0e1d0 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 266704
11  libdispatch.dylib        0x0000000105e4a7bc _dispatch_client_callout + 20
12  libdispatch.dylib        0x0000000105e5be24 _dispatch_lane_barrier_sync_invoke_and_complete + 176
13  CoreImage                0x00000001a7c0a784 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 251780
14  CoreImage                0x00000001a7c0a46c 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 250988
15  libdispatch.dylib        0x0000000105e5b764 _dispatch_block_async_invoke2 + 148
16  libdispatch.dylib        0x0000000105e4a7bc _dispatch_client_callout + 20
17  libdispatch.dylib        0x0000000105e5266c _dispatch_lane_serial_drain + 832
18  libdispatch.dylib        0x0000000105e5343c _dispatch_lane_invoke + 460
19  libdispatch.dylib        0x0000000105e524a4 _dispatch_lane_serial_drain + 376
20  libdispatch.dylib        0x0000000105e5343c _dispatch_lane_invoke + 460
21  libdispatch.dylib        0x0000000105e60404 _dispatch_root_queue_drain_deferred_wlh + 328
22  libdispatch.dylib        0x0000000105e5fa38 _dispatch_workloop_worker_thread + 444
23  libsystem_pthread.dylib  0x00000001f35a4f20 _pthread_wqthread + 288
24  libsystem_pthread.dylib  0x00000001f35a4fc0 start_wqthread + 8
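An editor's sketch of one way to sidestep the inversion (and, in passing, the unused-result bug in the snippet above, where nothing is returned and UIImageJPEGRepresentation expects a quality in 0...1, not 90): render the JPEG directly with a long-lived CIContext instead of round-tripping through UIImage, so the render happens synchronously on the calling thread. Swift for brevity, under those assumptions:

import CoreImage
import ImageIO
import CoreMedia

let ciContext = CIContext()  // create once and reuse; contexts are expensive

func jpegData(for sampleBuffer: CMSampleBuffer) -> Data? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
    // Renders on the calling thread, avoiding UIImage's internal
    // CoreImage dispatch that the checker appears to be flagging.
    let quality = CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String)
    return ciContext.jpegRepresentation(of: ciImage,
                                        colorSpace: colorSpace,
                                        options: [quality: 0.9])
}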
Posted Last updated
.
Post not yet marked as solved
1 Reply
92 Views
In my app I play HLS streams via AVPlayer. It works well! However, when I try to download those same HLS URLs via makeAssetDownloadTask, I regularly come across this error:

Download error for identifier 21222: Error Domain=CoreMediaErrorDomain Code=-12938 "HTTP 404: File Not Found" UserInfo={NSDescription=HTTP 404: File Not Found, _NSURLErrorRelatedURLSessionTaskErrorKey=( "BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>" ), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>}

I have a feeling that AVPlayer has a way to resolve this that makeAssetDownloadTask lacks. I am wondering if any of you have come across this or have insight. Thank you! BTW this is using Xcode Version 15.3 (15E204a) and developing for visionOS 1.0.1.
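For reference, an editor's sketch of the download path (standard AVAssetDownloadURLSession flow; identifiers and URLs are placeholders, and self is assumed to conform to AVAssetDownloadDelegate). One hedged guess worth testing: the player may silently fall back to an alternate variant when one rendition 404s, while the download task insists on every segment of its chosen variant, so pinning a known-good bitrate can work around gaps in the playlist:

let config = URLSessionConfiguration.background(withIdentifier: "hls-downloads")
let downloadSession = AVAssetDownloadURLSession(configuration: config,
                                                assetDownloadDelegate: self,
                                                delegateQueue: .main)
let asset = AVURLAsset(url: hlsURL)  // hlsURL: the same URL that plays fine
let task = downloadSession.makeAssetDownloadTask(
    asset: asset,
    assetTitle: "My Stream",
    assetArtworkData: nil,
    // Pin a variant known to exist, to avoid 404ing renditions.
    options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: 2_000_000])
task?.resume()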
Posted Last updated
.
Post not yet marked as solved
0 Replies
175 Views
Dear Apple Developer Forum, we have customers complaining about not being able to play live streams (HLS FairPlay) with our application anymore since upgrading their phones to iOS 17.4.1. We can't reproduce this problem in-house, but the error code sent to our analytics platform is CoreMediaErrorDomain error -12852. Would it be possible to get more information on this error, especially its potential cause, and, if the app is not responsible, how we can help our customers? Kind regards, Cédric
Posted Last updated
.
Post not yet marked as solved
2 Replies
279 Views
Hello, I tried to build the AVCam sample application for iOS 17 and run it on a MacBook (Designed for iPad) with macOS 14.3 (Sonoma). https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)". In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect": Thread 19 Queue : com.apple.portrait.effect_init (serial). Is it a known bug? Is there a workaround for this case? Best regards. Also, an external webcam is detected by AVCam, but preview and capture are systematically upside down (it may be the same with the FaceTime HD camera). Is it a known bug? Is there a workaround for this case?
Posted
by ftristani.
Last updated
.
Post not yet marked as solved
0 Replies
99 Views
Is it possible to find an IDR frame (CMSampleBuffer) in an AVAsset H.264 video file?
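An editor's sketch of one approach, assuming an AVAssetReader pass over the file is acceptable: read the compressed samples (nil outputSettings gives passthrough) and inspect each buffer's sample attachments; a sample where kCMSampleAttachmentKey_NotSync is absent or false is a sync sample, which for H.264 is an IDR frame:

import AVFoundation

func findSyncFrames(in asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    // nil outputSettings delivers the original compressed (H.264) samples.
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    reader.startReading()
    while let sample = output.copyNextSampleBuffer() {
        let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: false) as? [[CFString: Any]]
        let notSync = attachments?.first?[kCMSampleAttachmentKey_NotSync] as? Bool ?? false
        if !notSync {
            // Sync sample: an IDR frame in an H.264 track.
            print("IDR at \(CMSampleBufferGetPresentationTimeStamp(sample).seconds)s")
        }
    }
}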
Posted
by tien6b0.
Last updated
.
Post not yet marked as solved
0 Replies
103 Views
I have a camera application which aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is set up with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled, since some features we use are not supported in that mode. At the moment we are manually trying to synchronize frames between the two cameras, but we would like to use AVCaptureDataOutputSynchronizer to improve our results. Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs that I've found show synchronization of video with depth, metadata, or audio, but not two video outputs. From my testing, I've found that the dataOutputSynchronizer fires with either the wide video output or the ultra-wide video output, but never both (at least one is nil), suggesting that they are not being synchronized.

self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)

...

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedWideData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }
    // Either syncedWideData or syncedUltraData is always nil, so the guard condition never passes.
}
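An editor's sketch of a debugging step, under the assumption that both outputs are at least reaching the synchronizer: the collection can contain entries whose sampleBufferWasDropped is true, and dropped frames never form matched pairs. Logging each output independently, instead of bailing out in a combined guard, can show whether frames are being dropped rather than left unsynchronized:

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput collection: AVCaptureSynchronizedDataCollection) {
    let wide = collection.synchronizedData(for: wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData
    let ultra = collection.synchronizedData(for: ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData
    // Dropped frames still appear here, flagged rather than matched.
    print("wide:", wide.map { "dropped=\($0.sampleBufferWasDropped)" } ?? "absent",
          "ultra:", ultra.map { "dropped=\($0.sampleBufferWasDropped)" } ?? "absent")
    guard let wide, let ultra,
          !wide.sampleBufferWasDropped, !ultra.sampleBufferWasDropped else { return }
    // Both buffers present and kept: process the synchronized pair here.
}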
Posted
by nanders.
Last updated
.
Post not yet marked as solved
1 Reply
217 Views
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point && less than some point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

Now I have the frame positions calculated (predetermined before the audio is scheduled; I have already made all the necessary computations). So I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus bufferSize:bufferSize format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];

The problem I'm having is figuring out what point in the full buffer I'm at within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not my entire audio buffer that I scheduled). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid using a timer if possible and use sample time.
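An editor's sketch of the usual conversion (Swift for brevity; triggerStart and triggerEnd stand for the precomputed AVAudioFramePosition values mentioned above). With a tap installed on the player node itself, the when parameter is in node (render) time, and AVAudioPlayerNode can translate it into player time, i.e. frames elapsed since playback started, which is this chunk's offset within the scheduled buffer:

playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { [weak playerNode] buffer, when in
    guard let playerNode,
          let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
    // Frames played since playback began == start of this chunk relative
    // to the scheduled buffer (assuming nothing was played before it).
    let framePosition = playerTime.sampleTime
    if framePosition >= triggerStart && framePosition < triggerEnd {
        // Fire the event associated with this region.
    }
}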
Posted Last updated
.
Post not yet marked as solved
0 Replies
119 Views
We found that crashes occur on some specific devices, but we don't know the root cause. The crash only appears on the user side and cannot be reproduced on our local devices. From the stack, the crash occurs inside AVCapture after calling discoverySessionWithDeviceTypes:

NSArray<AVCaptureDevice*>* GetVideoCaptureDevices() {
    NSArray* captureDeviceType = @[
        AVCaptureDeviceTypeBuiltInWideAngleCamera,
        AVCaptureDeviceTypeExternalUnknown
    ];
    AVCaptureDeviceDiscoverySession* deviceDiscoverySession =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:captureDeviceType
                                                               mediaType:AVMediaTypeVideo
                                                                position:AVCaptureDevicePositionUnspecified];
    return deviceDiscoverySession.devices;
}

The following is the crash call stack:

OS Version: macOS 13.5 (22G74)
Report Version: 104
Crashed Thread: 10301

Application Specific Information:
Fatal Error: EXC_BAD_INSTRUCTION / EXC_I386_INVOP / 0x7ff8194b3522

Thread 10301 Crashed:
0   AppKit                   0x7ff8194b3522 -[NSApplication _crashOnException:]
1   AppKit                   0x7ff8194b32b3 -[NSApplication reportException:]
2   AppKit                   0x7ff819569efa NSApplicationUncaughtExceptionHandler
3   CoreFoundation           0x7ff8161c010a <unknown>
4   libobjc.A.dylib          0x7ff815c597c8 <unknown>
5   libc++abi.dylib          0x7ff815f926da std::__terminate
6   libc++abi.dylib          0x7ff815f92695 std::terminate
7   libobjc.A.dylib          0x7ff815c65929 <unknown>
8   libdispatch.dylib        0x7ff815e38046 _dispatch_client_callout
9   libdispatch.dylib        0x7ff815e39266 _dispatch_once_callout
10  AVFCapture               0x7ff8328cafb6 +[AVCaptureDALDevice devices]
11  AVFCapture               0x7ff832996410 +[AVCaptureDevice_Tundra _devicesWithAllowIOSMacEnvironment:]
12  AVFCapture               0x7ff83299652b +[AVCaptureDevice_Tundra _devicesWithDeviceTypes:mediaType:position:allowIOSMacEnvironment:]
13  AVFCapture               0x7ff83299e8c0 -[AVCaptureDeviceDiscoverySession_Tundra _initWithDeviceTypes:mediaType:position:allowIOSMacEnvironment:prefersUnsuspendedAndAllowsAnyPosition:]
14  AVFCapture               0x7ff83299e7a4 +[AVCaptureDeviceDiscoverySession_Tundra discoverySessionWithDeviceTypes:mediaType:position:]
15  Electron Framework       0x119453784 media::GetVideoCaptureDevices (video_capture_device_avfoundation_helpers.mm:22)

I want to know the root cause of this crash. How should I simulate and fix it? Any suggestions would be highly appreciated. Thank you.
Posted
by Colin1994.
Last updated
.
Post not yet marked as solved
0 Replies
108 Views
For whoever needs to hear this... Say you have an AVURLAsset:

let asset = AVURLAsset(url: URL(string: "https://www.example.com/playlist.m3u8")!)

Then say you load that asset into an AVPlayerItem and would like it to automatically load certain asset keys you're interested in ahead of time:

let playerItem = AVPlayerItem(
    asset: asset,
    automaticallyLoadedAssetKeys: [
        "metadata",
        "commonMetadata",
        "availableMetadataFormats",
        "allMediaSelections",
        "hasProtectedContent",
        "overallDurationHint"])

Among those keys, do not use "tracks", even though it's one of the available options. That will break AirPlay across all platforms (the user chooses an AirPlay destination and the AVPlayerItem's status instantly switches to failed). Took me far too long to track this down; just wanted to get it out there to save anybody else some time if they ever run into it.
Posted
by Suges.
Last updated
.
Post not yet marked as solved
1 Reply
183 Views
I am using LiDAR to measure the distance between a target point and the iPhone Pro. I am getting the correct distance only if I am more than 70 cm away from the target point. I need that value to be accurate for distances below 70 cm as well. Is there a coding-level issue, or is it a limitation of the LiDAR hardware?
Posted
by Ramneet.
Last updated
.
Post not yet marked as solved
1 Reply
275 Views
We are facing weird behaviour when implementing the AirPlay functionality of our iOS app. When we test our app on Apple TV devices, everything works fine. On some smart TVs with a specific AirPlay receiver version (more details below), the stream gets stuck in the buffering state immediately after switching to AirPlay mode. On other smart TVs, with a different AirPlay receiver version, everything works as expected. The interesting part is that other free or DRM-protected streams work fine on all devices.

Smart TVs where AirPlay works fine:
AirPlay Version -> 25.06 (19.9.9)

Smart TVs where AirPlay gets stuck in the buffering state:
AirPlayReceiverSDKVersion -> 3.3.0.54
AirPlayReceiverAppVersion -> 53.122.0

You can reproduce this issue using the following stream URL:
https://tr.vod.cdn.cosmotetvott.gr/v1/310/668/1674288197219/1674288197219.ism/.m3u8?qual=a&ios=1&hdnts=st=1713194669~exp=1713237899~acl=*/310/668/1674288197219/1674288197219.ism/*~id=cab757e3-9922-48a5-988b-3a4f5da368b6~data=de9bbd0100a8926c0311b7dbe5389f7d91e94a199d73b6dc75ea46a4579769d7~hmac=77b648539b8f3a823a7d398d69e5dc7060632c29

If this link expires, notify me and I will send a new one for testing. Could you please provide any specific suggestion as to what causes this issue on those specific streams?
Posted
by gcharita.
Last updated
.
Post not yet marked as solved
12 Replies
2.7k Views
I just upgraded to iOS 17 and it looks like AVSpeechSynthesizer is now broken. I noticed that when feeding certain strings to AVSpeechUtterance, it just flat-out stops after speaking only a portion of the string. For example, I fed it a string of approx. 1,200 words and it speaks up to around 300 words or so and then just stops. The synthesizer delegate method -speechSynthesizer:didFinishSpeechUtterance: is called when this happens, as if this were supposed to be the end, even though it is not even close to being finished. It was working fine on iOS 16. FWIW, I create the AVSpeechUtterance with -initWithString:.
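An editor's note with a hedged workaround, not a fix: splitting long text into sentence-sized utterances and letting the synthesizer's internal queue chain them keeps each utterance well below the length where the cutoff is reported (Swift for brevity):

import AVFoundation

let synthesizer = AVSpeechSynthesizer()  // keep a strong reference

func speakLongText(_ text: String) {
    // Enqueue one utterance per sentence; the synthesizer speaks them in order.
    text.enumerateSubstrings(in: text.startIndex..<text.endIndex, options: .bySentences) { sentence, _, _, _ in
        guard let sentence, !sentence.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: sentence))
    }
}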
Posted Last updated
.
Post not yet marked as solved
0 Replies
190 Views
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video. I am able to use the same AVPlayer in a window and projected on a VideoMaterial, but I can't figure out how to just present the controls, while displaying the video only in the 3D entity, without having a 2D projection in any view. Is this even possible?
Posted Last updated
.
Post not yet marked as solved
2 Replies
202 Views
On iOS, working with a video feed, I am getting a yellow warning that: "'AVCaptureVideoOrientation' was deprecated in iOS 17.0: Use AVCaptureDeviceRotationCoordinator instead". But I haven't been able to figure out how to get AVCaptureDevice.RotationCoordinator to work, and I haven't found any example of its usage in the Developer Forums or on the wider internet (the one mention of it in a WWDC session doesn't provide an illustration of its use). Can anyone offer a working example using Swift?
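An editor's sketch of the shape the replacement API takes (names per the iOS 17 SDK; verify against the headers, and check the connection's isVideoRotationAngleSupported(_:) before assigning): create one coordinator per device, apply its preview angle to the preview layer's connection, and observe the angle via KVO as the device rotates. For captured photos and movies, videoRotationAngleForHorizonLevelCapture would be applied to the output's connection instead.

import AVFoundation

final class RotationHandler {
    private let coordinator: AVCaptureDevice.RotationCoordinator
    private var observation: NSKeyValueObservation?

    init(device: AVCaptureDevice, previewLayer: AVCaptureVideoPreviewLayer) {
        coordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)
        // Apply the current angle, then follow changes via key-value observation.
        previewLayer.connection?.videoRotationAngle = coordinator.videoRotationAngleForHorizonLevelPreview
        observation = coordinator.observe(\.videoRotationAngleForHorizonLevelPreview, options: [.new]) { _, change in
            guard let angle = change.newValue else { return }
            previewLayer.connection?.videoRotationAngle = angle
        }
    }
}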
Posted
by grb2007.
Last updated
.
Post not yet marked as solved
1 Reply
515 Views
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device, so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice, but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone. I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro. I want to capture sound data from the microphone and forward it to a live streaming API. To my mind, from what I've read, I need AVAudioEngine to do this, but I'm having trouble determining from the documentation just how to go about it on macOS. It seems that there is a lot more information for iOS or iPadOS, but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now. Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do using whatever is available in AVFoundation?
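An editor's sketch of the usual macOS route, under the assumption that the mic shows up as a CoreAudio device: you don't hand the AVCaptureDevice to AVAudioEngine directly; instead you translate its uniqueID (a CoreAudio device UID on macOS) into an AudioDeviceID and set that as the current device on the input node's underlying audio unit. Everything below beyond the system APIs is illustrative:

import AVFoundation
import CoreAudio
import AudioToolbox

// Translate a CoreAudio device UID (AVCaptureDevice.uniqueID on macOS)
// into an AudioDeviceID.
func audioDeviceID(forUID uid: String) -> AudioDeviceID? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyTranslateUIDToDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var deviceID = AudioDeviceID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioDeviceID>.size)
    var cfUID = uid as CFString
    let status = withUnsafeMutablePointer(to: &cfUID) { uidPtr in
        AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &address,
                                   UInt32(MemoryLayout<CFString>.size), uidPtr,
                                   &size, &deviceID)
    }
    return (status == noErr && deviceID != kAudioObjectUnknown) ? deviceID : nil
}

let engine = AVAudioEngine()
if let captureDevice = AVCaptureDevice.default(for: .audio),  // your USB mic
   var deviceID = audioDeviceID(forUID: captureDevice.uniqueID),
   let inputUnit = engine.inputNode.audioUnit {
    // Point the engine's input node at the USB microphone.
    AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0,
                         &deviceID, UInt32(MemoryLayout<AudioDeviceID>.size))
    // 1. Monitor through the Mac's speakers (use headphones to avoid feedback).
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: nil)
    // 2. Capture buffers for the live streaming API.
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
        // Forward `buffer` to the streaming API here.
    }
    try? engine.start()
}

If a code-free route is acceptable, selecting the USB mic as the system default input in Audio MIDI Setup makes engine.inputNode pick it up automatically.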
Posted Last updated
.