Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Some problems using AVCaptureControl
I set 3 controls on the AVCaptureSession and then remove them all. The number of controls in the session is indeed 0, but the Camera Control button still shows the previous 3 controls. Going from 3 -> 2 or 3 -> 1 updates normally; 3 -> 0 does not work, while 0 -> 3 does. Relevant code:

if (self.captureControl.zoom) {
    if (self.zoomScaleControl) {
        self.zoomScaleControl.enabled = false;
        [_session removeControl:self.zoomScaleControl];
    }
    AVCaptureSlider *zoomSlider = [self.captureControl.zoom fetchCaptureSlider];
    [zoomSlider setActionQueue:dispatch_get_main_queue() action:^(float zoomFactor) {
        @strongify(self);
        if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeZoomScale:)]) {
            [self.dataOutputDelegate videoCaptureSession:self tryChangeZoomScale:zoomFactor];
        }
    }];
    self.zoomScaleControl = zoomSlider;
} else {
    if (self.zoomScaleControl) {
        self.zoomScaleControl.enabled = false;
        [_session removeControl:self.zoomScaleControl];
    }
    self.zoomScaleControl = nil;
}

if (self.captureControl.exposure) {
    if (self.exposureBiasControl) {
        self.exposureBiasControl.enabled = false;
        [_session removeControl:self.exposureBiasControl];
    }
    AVCaptureSlider *exposureSlider = [self.captureControl.exposure fetchCaptureSlider];
    [exposureSlider setActionQueue:dispatch_get_main_queue() action:^(float bias) {
        @strongify(self);
        if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeExposureBias:)]) {
            [self.dataOutputDelegate videoCaptureSession:self tryChangeExposureBias:bias];
        }
    }];
    self.exposureBiasControl = exposureSlider;
} else {
    if (self.exposureBiasControl) {
        self.exposureBiasControl.enabled = false;
        [_session removeControl:self.exposureBiasControl];
    }
    self.exposureBiasControl = nil;
}

if (self.captureControl.len) {
    if (self.lenControl) {
        self.lenControl.enabled = false;
        [_session removeControl:self.lenControl];
    }
    ORLenCaptureControlCustomModel *len = self.captureControl.len;
    AVCaptureIndexPicker *picker = [len fetchCaptureSlider];
    [picker setActionQueue:dispatch_get_main_queue() action:^(NSInteger selectedIndex) {
        @strongify(self);
        if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:didChangeLenIndex:datas:)]) {
            [self.dataOutputDelegate videoCaptureSession:self didChangeLenIndex:selectedIndex datas:self.captureControl.len.indexDatas];
        }
    }];
    self.lenControl = picker;
} else {
    if (self.lenControl) {
        self.lenControl.enabled = false;
        [_session removeControl:self.lenControl];
    }
    self.lenControl = nil;
}

if ([_session canAddControl:self.zoomScaleControl]) {
    [_session addControl:self.zoomScaleControl];
} else {
    self.zoomScaleControl = nil;
}
if ([_session canAddControl:self.lenControl]) {
    [_session addControl:self.lenControl];
} else {
    self.lenControl = nil;
}
if ([_session canAddControl:self.exposureBiasControl]) {
    [_session addControl:self.exposureBiasControl];
} else {
    self.exposureBiasControl = nil;
}
if (_session.controlsDelegate == nil) {
    [_session setControlsDelegate:self queue:GetCaptureControlQueue()];
}
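For reference, a minimal Swift sketch of the failing 3 -> 0 step described above, assuming an already-configured AVCaptureSession with three controls attached; the function is illustrative and not taken from the post:

import AVFoundation

// Remove every control from the session. The session then reports zero
// controls, yet (per the report above) the Camera Control button may still
// show the three controls that were previously added.
func removeAllCaptureControls(from session: AVCaptureSession) {
    session.beginConfiguration()
    for control in session.controls {
        session.removeControl(control)
    }
    session.commitConfiguration()
    print("controls after removal: \(session.controls.count)") // prints 0
}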
0 replies · 0 boosts · 425 views · Oct ’24
Time Limit
Hi Team, when we select the Screen Time option and a daily limit exists for the screen, we have the option to tap Ignore Limit -> Ignore Limit for Today. After we select this option, if we are watching videos via third-party apps (YouTube), the app hangs; after clicking the play button multiple times the video starts to play, but there is no sound. Only when we click the previous or forward button do the sounds come back. Kindly look into this issue and address it. Thanks, Pemkumar S
0 replies · 0 boosts · 325 views · Sep ’24
Reality Composer file size and iOS 18
Hi folks: I've been creating .reality files out of Reality Composer for over a year. Some of the files are up to 500 MB and, prior to the last month, they opened fine as AR projected experiences on even basic iPhones and iPads. Now, I think since iOS 18, a 64 MB file will open as an AR experience, but files from about 350 MB up don't open. Files just opens a window displaying the name of the file, that it's a .reality file, and the file size, but it no longer opens into either an AR or Object display of the .reality scene. Has there been a new size limit put on .reality files that Files will open, or what else is going on here? I have a client who was about to launch an experience based on the .reality file I can no longer open. Please help.
0 replies · 0 boosts · 502 views · Oct ’24
Media Library Access not working in my App since iOS 18.2
Hi there, I have an app which accesses the media library to save and load files. Since iOS 18.2, access to the media library has stopped working. I've noticed that our app no longer shows in the list of apps with access to Files (Privacy & Security -> Files & Folders). The weird part is that one iPhone with iOS 18.3.1 can access the files but others can't, on the same iOS version 18.3.1. Tests on Simulators (Mac) work fine as well. My Info.plist file has had the keys for media library access for a long time and hasn't changed (at least in the last 4 years), including the key "Privacy - Media Library Usage Description". I've also noticed that the message (popup) requesting access to the media library, shown when using the app for the first time, doesn't show up anymore. We request access to the network (Wi-Fi) and that message still shows up, but not the one for the media library. I'm using Visual Studio with Xamarin on a Mac. I really appreciate any help you can give, because this is very odd behavior and it started with iOS 18.2.
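For context, a minimal sketch of the native call that normally triggers the media-library permission popup; this assumes MPMediaLibrary is the relevant authorization path for the missing prompt, and it is not the poster's Xamarin code:

import MediaPlayer

// Check the current media library authorization and request it if needed.
// Requesting authorization is what normally presents the system popup.
func ensureMediaLibraryAccess(completion: @escaping (Bool) -> Void) {
    switch MPMediaLibrary.authorizationStatus() {
    case .authorized:
        completion(true)
    case .notDetermined:
        MPMediaLibrary.requestAuthorization { status in
            completion(status == .authorized)
        }
    default:
        completion(false) // .denied or .restricted
    }
}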
0 replies · 0 boosts · 356 views · Feb ’25
Distortion corrected images
Hi, I am currently developing a 3D reconstruction project, which requires images to be distortion-free (rectilinear) and with known intrinsics. The session I am developing on uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false to be able to get the intrinsic matrix of the images, isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false to get both images, and isCameraCalibrationDataDeliveryEnabled set to true to actually get the calibration data. The distortion correction parameters such as lensDistortionLookupTable are used: the 42-coefficient mapping array is applied as described in the AVCameraCalibrationData header file, with a simple piecewise linear interpolation. There are two questions I would like to get support on:
1. A way to set the calibration parameters in each image. I have an approach that sets the parameters in the kCGImagePropertyExifDictionary -> "UserComment". Is there a better approach to write calibration parameter data into the images? I feel like this is a bit dirty and there might be a neater approach.
2. For the ultra-wide angle camera's images, the lensDistortionLookupTable contains several zeros at the end of the array. For example (last 10 elements are zero): "LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000". The problem is that when the complete array (including the zeros) is used to correct the image, the result is a wrapped-like-circle image close to its edges, which is completely wrong. In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the new size is accommodated, the image looks better (although not as rectilinear as an image taken from the iPhone's camera app), but definitely less distorted. [Image: including zeros (full array).] [Image: excluding zeros (array size changed).] Am I missing an important point in the usage of the lensDistortionLookupTable where this case (zeros at the end) is addressed? What is the criterion to shrink/exclude elements of the array? Any advice is very much welcome.
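As a reference for the interpolation step described above, a minimal Swift sketch of a piecewise linear lookup into lensDistortionLookupTable; treating the looked-up value as a correction added to a scale factor of 1 is an illustrative assumption, not taken from Apple's header:

import Foundation

// Piecewise linear interpolation into the distortion lookup table for a
// point at `radius` from the distortion center, where `maxRadius` is the
// distance from the center to the farthest image corner.
func distortionCorrection(forRadius radius: Float,
                          maxRadius: Float,
                          lookupTable: [Float]) -> Float {
    guard lookupTable.count > 1, maxRadius > 0 else { return 1.0 }
    // Map the radius onto the table's index range [0, count - 1].
    let position = max(0, min(1, radius / maxRadius)) * Float(lookupTable.count - 1)
    let lower = Int(position.rounded(.down))
    let upper = min(lower + 1, lookupTable.count - 1)
    let fraction = position - Float(lower)
    // Interpolate between the two neighboring table entries.
    let value = lookupTable[lower] + fraction * (lookupTable[upper] - lookupTable[lower])
    return 1.0 + value // assumed mapping to a radial scale factor
}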
0 replies · 0 boosts · 391 views · Feb ’25
AVContentKeySession reuse
Context: We develop an iOS/Apple TV app that plays HLS+FairPlay live streams (custom playback UI), some of which use the same FairPlay content key id. All FairPlay content keys are requested from the same content key server.
Implementation: Despite Apple documentation warning not to reuse AVContentKeySessions, we use only one AVContentKeySession for all channels, which allows the system to reuse the content key when a content key id is met again. As seen in another thread, people seem to think this is OK.
Issue: When reusing the AVContentKeySession and the user quickly tunes channels multiple times (up to 2 or 3 times per second using gestures), an inconsistency may occur where the content key request for a previous stream is delivered to the delegate after a new stream is already being prepared and its AVURLAsset has already been added as the content key session's AVContentKeyRecipient. Note that the previous content key recipient is removed before the new one is added. We have also received crash reports (though I haven't experienced one myself) when performing multiple channel tunings, which makes us think that the AVContentKeySession should definitely not be reused.
Note: On the other hand, if a new AVContentKeySession is used for each stream, the system systematically requests a content key even if previous streams have used the same content key id. In this case, neither the crash nor the inconsistency issue is observed, but it dramatically increases the number of calls to the content key server.
Questions: Should AVContentKeySessions definitely not be reused? Otherwise, how should the inconsistency issue described above be handled?
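A minimal Swift sketch of the shared-session pattern the post describes (one AVContentKeySession reused across channels, swapping the recipient on each tune); the class and method names are illustrative:

import AVFoundation

// One AVContentKeySession shared across all channels; the AVURLAsset
// recipient is swapped on every channel change, as described above.
final class ChannelTuner {
    private let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
    private var currentAsset: AVURLAsset?

    init(delegate: AVContentKeySessionDelegate, queue: DispatchQueue) {
        keySession.setDelegate(delegate, queue: queue)
    }

    func tune(to url: URL) -> AVURLAsset {
        // Remove the previous recipient before attaching the new one; key
        // requests for the old stream may still reach the delegate afterwards.
        if let previous = currentAsset {
            keySession.removeContentKeyRecipient(previous)
        }
        let asset = AVURLAsset(url: url)
        keySession.addContentKeyRecipient(asset)
        currentAsset = asset
        return asset // hand this to an AVPlayerItem for playback
    }
}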
0 replies · 0 boosts · 387 views · Feb ’25
Microphone access from control center
Title: Unable to Access Microphone in Control Center Widget – Is It Possible? Hello everyone, I'm attempting to create a widget in Control Center that accesses the microphone, similar to how Shazam does it. However, I'm running into an issue where the widget always prints "Microphone permission denied." It's worth mentioning that microphone access works fine when I'm using the app itself. Here's the code I'm using in the widget:

func startRecording() async {
    logger.info("Starting recording...")
    print("Starting recording...")
    recognizedText = ""
    isFinishingRecognition = false

    // First, check speech recognition authorization
    let speechAuthStatus = await withCheckedContinuation { continuation in
        SFSpeechRecognizer.requestAuthorization { status in
            continuation.resume(returning: status)
        }
    }
    guard speechAuthStatus == .authorized else {
        logger.error("Speech recognition not authorized")
        return
    }

    // Then, request microphone permission using our manager
    let micPermission = await AudioSessionManager.shared.requestMicrophonePermission()
    guard micPermission else {
        logger.error("Microphone permission denied")
        print("Microphone permission denied")
        return
    }

    // Continue with recording...
}

Issues: The code consistently prints "Microphone permission denied" when run from the widget. Microphone access works without issues when the same code is executed from within the app.
Questions: Is it possible for a Control Center widget to access the microphone? If yes, what might be causing the "Microphone permission denied" error in the widget? Are there additional permissions or configurations required to enable microphone access in a widget? Any insights or suggestions would be greatly appreciated! Thank you.
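For context, a minimal sketch of what a requestMicrophonePermission() helper like the one above might wrap, using AVAudioApplication (iOS 17+); AudioSessionManager itself is the poster's own type and is not shown here:

import AVFAudio

// A plausible implementation of a microphone-permission helper using the
// AVAudioApplication API (iOS 17+); an assumption, not the poster's code.
enum MicrophonePermission {
    static func request() async -> Bool {
        switch AVAudioApplication.shared.recordPermission {
        case .granted:
            return true
        case .denied:
            return false
        case .undetermined:
            return await AVAudioApplication.requestRecordPermission()
        @unknown default:
            return false
        }
    }
}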
0 replies · 0 boosts · 489 views · Nov ’24
AVPlayer handle 403 error
Hello, is there a way to handle a 403 error returned by the server, e.g. when a token has expired? I cannot find any information about this, and everything that I tried wasn't working (addObserver, NotificationCenter with .AVPlayerItemNewErrorLogEntry, AVPlayerItemPlaybackStalled, ...). Thank you very much.
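A minimal sketch of the error-log observation route the post mentions, reading HTTP status codes from the item's error log; whether a particular 403 surfaces there depends on the stream and server, which is an assumption here:

import AVFoundation

// Observe new error log entries on an AVPlayerItem and inspect the HTTP
// status codes they report (e.g. 403 when a token has expired).
func observeErrorLog(for item: AVPlayerItem) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemNewErrorLogEntry,
        object: item,
        queue: .main
    ) { _ in
        guard let events = item.errorLog()?.events else { return }
        for event in events where event.errorStatusCode == 403 {
            print("Server returned 403: \(event.errorComment ?? "no comment")")
            // e.g. refresh the token and rebuild the AVPlayerItem here.
        }
    }
}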
0 replies · 0 boosts · 123 views · Mar ’25
Do we need an MFi authentication chip to send video/commands to iPhone?
Hello, my company is developing a product that will send data to/from the phone over cable and Wi-Fi. I have three questions: Do we need an MFi authentication chip in our product if we plan to send video and commands to the iPhone/iPad over USB or Lightning cable? Likewise, do we need an MFi authentication chip for communication over Wi-Fi? (Informal research suggests that the answer is no to this one.) And do we even still need MFi certification at all for Wi-Fi comms? (We are not using HomeKit.) Thank you!
0 replies · 0 boosts · 408 views · Feb ’25
H.265 Decoding with VideoToolbox
I am creating an app that decodes H.265 elementary streams on iOS. I use VideoToolbox to decode from H.265 to NV12. The decoded data is enqueued in an AVSampleBufferDisplayLayer as a CMSampleBuffer. However, nothing is displayed in the VideoPlayerView; it remains black. The decoding in VideoToolbox is successful. I confirmed this by saving the NV12 data in the CMSampleBuffer to a file and displaying it using a tool. Why is nothing displayed in the VideoPlayerView? I can provide other source code as well.

//
//  VideoPlayerView.swift
//  H265Decoder
//
//  Created by Kohshin Tokunaga on 2025/02/15.
//

import SwiftUI
import AVFoundation

struct VideoPlayerView: UIViewRepresentable {
    // Return an H265Player as the coordinator, and start playback there.
    func makeCoordinator() -> H265Player {
        H265Player()
    }

    func makeUIView(context: Context) -> UIView {
        let uiView = UIView(frame: .zero)

        // Base layer for attaching sublayers
        uiView.backgroundColor = .black // Screen background color (for iOS)

        // Create the display layer and add it to uiView.layer
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.bounds
        displayLayer.backgroundColor = UIColor.clear.cgColor
        uiView.layer.addSublayer(displayLayer)

        // Start playback
        context.coordinator.startPlayback()

        return uiView
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // Reset the frame of the AVSampleBufferDisplayLayer when the view's size changes.
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.layer.bounds

        // Optionally update the layer's background color, etc.
        uiView.backgroundColor = .black
        displayLayer.backgroundColor = UIColor.clear.cgColor

        // Flush transactions if necessary
        CATransaction.flush()
    }
}

//
//  H265Player.swift
//  H265Decoder
//
//  Created by Kohshin Tokunaga on 2025/02/15.
//

import Foundation
import AVFoundation
import CoreMedia

class H265Player: NSObject, VideoDecoderDelegate {

    let displayLayer = AVSampleBufferDisplayLayer()
    private var decoder: H265Decoder?

    override init() {
        super.init()
        // Initial configuration for the display layer
        displayLayer.videoGravity = .resizeAspect

        // Initialize the decoder (delegate = self)
        decoder = H265Decoder(delegate: self)

        // For simple playback, set isBaseline to true
        decoder?.isBaseline = true
    }

    func startPlayback() {
        // Load the file "cars_320x240.h265"
        guard let url = Bundle.main.url(forResource: "temp2", withExtension: "h265") else {
            print("File not found")
            return
        }
        do {
            let data = try Data(contentsOf: url)

            // Set FPS and video size as needed
            let packet = VideoPacket(data: data,
                                     type: .h265,
                                     fps: 30,
                                     videoSize: CGSize(width: 1080, height: 1920))

            // Decode as a single packet
            decoder?.decodeOnePacket(packet)
        } catch {
            print("Failed to load file: \(error)")
        }
    }

    // MARK: - VideoDecoderDelegate
    func decodeOutput(video: CMSampleBuffer) {
        // When decoding is complete, send the output to AVSampleBufferDisplayLayer
        displayLayer.enqueue(video)
    }

    func decodeOutput(error: DecodeError) {
        print("Decoding error: \(error)")
    }
}
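A small diagnostic sketch, added for reference, that surfaces whatever error state the display layer reports after samples have been enqueued; this is an assumption about where to look, not a confirmed cause of the black view:

import AVFoundation

// Log the AVSampleBufferDisplayLayer's status after enqueueing; a .failed
// status with an associated error often explains why nothing is rendered.
func logDisplayLayerState(_ layer: AVSampleBufferDisplayLayer) {
    if layer.status == .failed {
        print("Display layer failed: \(layer.error?.localizedDescription ?? "unknown error")")
        layer.flush() // allow enqueueing to resume after a failure
    } else {
        print("Display layer status: \(layer.status.rawValue)")
    }
}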
0 replies · 0 boosts · 335 views · Feb ’25
[Crash Report] PHAssetResourceManager.writeDataForAssetResource causes app crash on iOS 18
We are encountering a critical, intermittently occurring crash issue when accessing photo data using PHAssetResourceManager.writeDataForAssetResource on iOS 18. The problem does not arise on iOS 17 or earlier versions. We have been unable to identify a consistent reproduction path. Based on user feedback, the issue seems to involve Live Photo and Raw image files. Our investigation has revealed that the crash occurs in the +[PISchema identifier] method of the PhotoImaging framework. When called manually, this method causes a crash on iOS 18 but works without issues on iOS 17.

Reproduction Steps:
1. Fetch PHAsset.
2. Get PHAssetResource by [PHAssetResource assetResourcesForAsset:].
3. Call [PHAssetResourceManager writeDataForAssetResource:toFile:options:completionHandler:].

Crash Log:
Incident Identifier: CFD60092-FDB1-43B4-BA42-3F507F7B8B96
CrashReporter Key:   260b4780989083a54e0cb451930fe9a3bed64862
Hardware Model:      iPhone13,4
AppStoreTools:       16C5031b
AppVariant:          1:iPhone13,4:18
Code Type:           ARM-64 (Native)
Role:                Foreground
Parent Process:      launchd [1]
Date/Time:           2025-02-15 19:07:57.7054 +0800
Launch Time:         2025-02-15 19:07:55.4106 +0800
OS Version:          iPhone OS 18.3.1 (22D72)
Release Type:        User
Baseband Version:    5.20.03
Report Version:      104
Exception Type:      EXC_CRASH (SIGABRT)
Exception Codes:     0x0000000000000000, 0x0000000000000000
Termination Reason:  SIGNAL 6 Abort trap: 6
Terminating Process: mCloud_iPhone [11109]
Triggered by Thread: 11
Application Specific Information: abort() called
Thread 11 name: Dispatch queue: com.apple.NSXPCConnection.m-user.com.apple.photos.service
Thread 11 Crashed:
0   libsystem_kernel.dylib        0x1e850b2d4 __pthread_kill + 8
1   libsystem_pthread.dylib       0x221b4959c pthread_kill + 268
2   libsystem_c.dylib             0x19ec24b08 abort + 128
3   NeutrinoCore                  0x1bdcdbdec -[NUAssertionPolicyAbort notifyAssertion:] + 68
4   NeutrinoCore                  0x1bdcdbbf4 -[NUAssertionPolicyComposite notifyAssertion:] + 160
5   NeutrinoCore                  0x1bdcdc098 -[NUAssertionPolicyUnique notifyAssertion:] + 176
6   NeutrinoCore                  0x1bdcdb524 -[NUAssertionHandler handleFailureInFunction:file:lineNumber:currentlyExecutingJobName:description:arguments:] + 156
7   NeutrinoCore                  0x1bdcdc4bc _NUAssertFailHandler + 176
8   NeutrinoCore                  0x1bdc8ea98 -[NUIdentifier initWithNamespace:name:version:] + 2352
9   NeutrinoCore                  0x1bdc8eba8 -[NUIdentifier initWithName:version:] + 84
10  NeutrinoCore                  0x1bdc8ec10 -[NUIdentifier initWithName:] + 68
11  PhotoImaging                  0x1bda54ce4 +[PISchema identifier] + 36
12  PhotoImaging                  0x1bda550fc +[PISchema registeredPhotosSchemaIdentifier] + 32
13  PhotoImaging                  0x1bd9d7128 +[PIPhotoEditHelper newComposition] + 28
14  PhotoImaging                  0x1bd940798 +[PICompositionSerializer deserializeCompositionFromAdjustments:metadata:formatIdentifier:formatVersion:sidecarData:error:] + 160
15  PhotoImaging                  0x1bd9412ec +[PICompositionSerializer deserializeCompositionFromData:formatIdentifier:formatVersion:sidecarData:error:] + 224
16  PhotoLibraryServices          0x1afabf75c -[PLPhotoEditPersistenceManager loadCompositionFrom:formatIdentifier:formatVersion:sidecarData:error:] + 1856
17  PhotoLibraryServices          0x1afabffe4 +[PLPhotoEditPersistenceManager validateAdjustmentData:formatIdentifier:formatVersion:error:] + 108
18  Photos                        0x1af4ac360 __167+[PHContentEditingInputRequestContext contentEditingInputRequestContextForAsset:requestID:managerID:networkAccessAllowed:downloadIntent:progressHandler:resultHandler:]_block_invoke + 260
19  Photos                        0x1af4ac67c -[PHAdjustmentData(ContentEditingInput) _contentEditing_readableByClientWithVerificationBlock:] + 136
20  Photos                        0x1af4ac4b0 -[PHAdjustmentData(ContentEditingInput) _contentEditing_requiredBaseVersionReadableByClient:verificationBlock:] + 88
21  Photos                        0x1af4abb8c -[PHContentEditingInputRequestContext _adjustmentBaseVersionFromResult:request:canHandleAdjustmentData:] + 404
22  Photos                        0x1af4a911c -[PHContentEditingInputRequestContext produceChildRequestsForRequest:reportingIsLocallyAvailable:isDegraded:result:] + 624
23  Photos                        0x1af2c1d10 -[PHMediaRequestContext _produceChildRequestsForRequest:withResult:] + 88
24  Photos                        0x1af2c11e8 -[PHMediaRequestContext mediaRequest:didFinishWithResult:] + 88
25  Photos                        0x1af505184 -[PHAdjustmentDataRequest _finishFromAsynchronousCallback] + 124
26  Photos                        0x1af5050a0 __39-[PHAdjustmentDataRequest startRequest]_block_invoke + 584
27  PhotoLibraryServicesCore      0x1b001be8c __106-[PLAssetsdResourceClient adjustmentDataForAsset:networkAccessAllowed:trackCPLDownload:completionHandler:]_block_invoke.86 + 864
28  CoreFoundation                0x196dd8e34 __invoking___ + 148
29  CoreFoundation                0x196dd7e7c -[NSInvocation invoke] + 428
30  Foundation                    0x195a64ae0 __NSXPCCONNECTION_IS_CALLING_OUT_TO_EXPORTED_OBJECT__ + 16
31  Foundation                    0x195a63514 -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 532
32  Foundation                    0x195a6653c __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
33  libxpc.dylib                  0x221babb80 _xpc_connection_reply_callout + 116
34  libxpc.dylib                  0x221b9e2d0 _xpc_connection_call_reply_async + 80
35  libdispatch.dylib             0x19eb6b028 _dispatch_client_callout3 + 20
36  libdispatch.dylib             0x19eb88b64 _dispatch_mach_msg_async_reply_invoke + 340
37  libdispatch.dylib             0x19eb7242c _dispatch_lane_serial_drain + 352
38  libdispatch.dylib             0x19eb73158 _dispatch_lane_invoke + 432
39  libdispatch.dylib             0x19eb7e38c _dispatch_root_queue_drain_deferred_wlh + 288
40  libdispatch.dylib             0x19eb7dbd8 _dispatch_workloop_worker_thread + 540
41  libsystem_pthread.dylib       0x221b44680 _pthread_wqthread + 288
42  libsystem_pthread.dylib       0x221b42474 start_wqthread + 8
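A minimal Swift sketch of the three reproduction steps listed above; the destination URL and options are illustrative:

import Photos

// Reproduction sketch: take a fetched asset, grab its resources, and write
// the first resource to a local file with PHAssetResourceManager.
func writeFirstResource(of asset: PHAsset, to destination: URL) {
    let resources = PHAssetResource.assetResources(for: asset)
    guard let resource = resources.first else { return }

    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true

    PHAssetResourceManager.default().writeData(for: resource,
                                               toFile: destination,
                                               options: options) { error in
        // On iOS 18 the reported crash happens inside Photos/PhotoImaging
        // before this completion handler is called.
        print("writeData finished, error: \(String(describing: error))")
    }
}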
0 replies · 1 boost · 341 views · Feb ’25
Call Limit of the Apple Music API
We are planning to develop an application using the Apple Music API. We would like to design our system based on the details of the rate limits mentioned below and have a few questions: https://developer.apple.com/documentation/applemusicapi/generating-developer-tokens#Request-Rate-Limiting Regarding the Catalog API (/v1/catalog/*), we understand that server-side caching is enabled, making it less likely to reach the rate limit. Is this understanding correct? (Excluding the search API) For APIs like the Library API (/v1/me/library/*), where responses vary by user, we assume they are more likely to reach the rate limit. Is this correct? We plan to implement optimizations to minimize unnecessary API calls. Given this, would the current Music API be able to handle a significant increase in users? (Assuming a DAU of around 100,000 to 1,000,000) If the API cannot support this scale, would it be allowed under Apple’s policy to cache responses from the Catalog API (/v1/catalog/*) via our proxy server to avoid hitting the rate limit? The third question is the one we most want to confirm.
0 replies · 0 boosts · 410 views · Feb ’25
Play audio data coming from serial port
Hi. I want to read ADPCM-encoded audio data coming from an external device to my Mac via serial port (/dev/cu.usbserial-0001) in 256-byte chunks and feed it into an audio player. So far I am using Swift and SwiftSerial (GitHub - mredig/SwiftSerial: A Swift Linux and Mac library for reading and writing to serial ports) to get the data via serialPort.asyncBytes() into an AsyncStream, but I am struggling to understand how to feed the stream to an AVAudioPlayer or similar. I am new to Swift and macOS audio development, so any help to get me on the right track is greatly appreciated. Thx
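A minimal sketch of one way to play a stream of decoded chunks with AVAudioEngine; the ADPCM-to-PCM decode step is assumed to happen elsewhere, and the sample rate is illustrative:

import AVFoundation

// Plays mono PCM chunks (already decoded from ADPCM into Float samples)
// by scheduling them on an AVAudioPlayerNode.
final class ChunkPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                       sampleRate: 16_000, // assumed rate
                                       channels: 1,
                                       interleaved: false)!

    func start() throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    // Call this for each decoded chunk of float samples.
    func enqueue(samples: [Float]) {
        let frameCount = AVAudioFrameCount(samples.count)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
        buffer.frameLength = frameCount
        for (index, sample) in samples.enumerated() {
            buffer.floatChannelData![0][index] = sample
        }
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}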
0 replies · 0 boosts · 268 views · Oct ’24
Build a vision capture system for apple vision pro
Hi, I am a newbie here. We have been given a task to build a robotic vision system to capture an immersive video in a hazy environment, which will later be played on Apple Vision Pro. I am thinking of starting with 2 or 4 basic CMOS camera sensors, such as the IMX378, AR0144, or VD66GY, and designing an FPGA-based circuit to synchronously capture and store raw frame-by-frame data. Some initial frame processing, such as demosaicing and filtering, can also be done by the FPGA. Then I would use software for post-processing to convert the data into a video format compatible with Apple Vision Pro. Will this idea work? I can handle the raw data capture, but I'm unsure if this approach is feasible and what post-processing software I should use. Thanks a lot for your suggestions! Charlie
0 replies · 0 boosts · 289 views · Feb ’25
Depth map is always in hdis format instead of hdep. Unable to capture depth map in kCVPixelFormatType_DepthFloat format even after setting the activeDepthDataFormat for AVCapture device
I'm trying to capture the depth map image using the TrueDepth camera on an iPhone 15 Plus. I was able to set up the AVCaptureSession with an AVCaptureDeviceInput for builtInTrueDepthCamera and an AVCapturePhotoOutput with isDepthDataDeliveryEnabled set to true. I also manually set the activeDepthDataFormat of the AVCaptureDevice to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32. Finally, I enabled isDepthDataDeliveryEnabled, embedsDepthDataInPhoto, embedsPortraitEffectsMatteInPhoto and embedsSemanticSegmentationMattesInPhoto in AVCapturePhotoSettings before capturing the photo using the capturePhoto(with: photoSettings, delegate: self) method.

I have checked by manually printing the activeDepthDataFormat of the AVCaptureDevice. Before setting it, by default it is:
Optional('dpth'/'hdis' 640x 480, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
After forcing it to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32, the format is:
Optional('dpth'/'hdep' 160x 120, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
But when I receive the captured photo in func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?), the depth map is:
Optional(hdis 640x480 (high/abs) calibration:{intrinsicMatrix: [2723.07 0.00 2016.00 | 0.00 2723.07 1512.00 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2016.00,1512.00}, ref:{4032x3024}})
Here it shows hdis instead of hdep. Why is it capturing a disparity map instead of a true depth map? The depth quality is high and the depth data accuracy is absolute. Here is my code:

import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var resultLbl: UILabel!

    private var session = AVCaptureSession()
    private var captureDevice: AVCaptureDevice?
    private var inputDevice: AVCaptureDeviceInput?
    private var photoOutput: AVCapturePhotoOutput?
    private var photoSettings: AVCapturePhotoSettings?
    private var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        self.setupCaptureSession()
    }

    func setupCaptureSession() {
        captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .unspecified)
        guard let captureDevice else {
            print("ERROR:: UNABLE TO SET TRUE DEPTH CAMERA")
            return
        }

        session.beginConfiguration()
        do {
            inputDevice = try AVCaptureDeviceInput(device: captureDevice)
            guard let inputDevice else {
                print("ERROR: UNABLE TO SET UP INPUT DEVICE")
                return
            }
            if session.canAddInput(inputDevice) {
                session.addInput(inputDevice)
            }
        } catch {
            print(error)
        }

        photoOutput = AVCapturePhotoOutput()
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        if session.canAddOutput(photoOutput) {
            session.addOutput(photoOutput)
        }
        session.sessionPreset = .photo
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        print("IS DEPTH ENABLED:: \(photoOutput.isDepthDataDeliveryEnabled)")
        session.commitConfiguration()

        let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
        let depthFormat = availableFormats.filter { format in
            let pixelFormatType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
            return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
                    pixelFormatType == kCVPixelFormatType_DepthFloat32)
        }.first

        session.beginConfiguration()
        try! captureDevice.lockForConfiguration()
        captureDevice.activeDepthDataFormat = depthFormat
        captureDevice.unlockForConfiguration()
        session.commitConfiguration()

        self.setupPreviewLayer()
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        cameraPreviewLayer?.videoGravity = .resizeAspectFill
        if let cameraPreviewLayer {
            self.previewView.layer.addSublayer(cameraPreviewLayer)
            cameraPreviewLayer.frame = self.previewView.bounds
        }
        DispatchQueue.global(qos: .userInteractive).async {
            self.session.startRunning()
        }
    }

    @IBAction func captureBtnPressed(_ sender: Any) {
        photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        guard let photoSettings else {
            print("ERROR: UNABLE TO SET UP PHOTO SETTINGS")
            return
        }
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        photoSettings.embedsDepthDataInPhoto = true
        photoSettings.embedsPortraitEffectsMatteInPhoto = true
        photoSettings.embedsSemanticSegmentationMattesInPhoto = true
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        print(photo.depthData)
        switch photo.depthData?.depthDataQuality {
        case .low: print("Depth quality is low")
        case .high: print("Depth quality is high")
        case nil: print("Depth quality is nil")
        }
        switch photo.depthData?.depthDataAccuracy {
        case .relative: print("Depth accuracy is relative")
        case .absolute: print("Depth accuracy is absolute")
        case nil: print("Depth accuracy is nil")
        }
        if let imageData = photo.fileDataRepresentation() {
            if let image = UIImage(data: imageData) {
                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
            }
        }
    }
}
0 replies · 0 boosts · 493 views · Nov ’24
Best Approach for Reliable Background Audio Playback with Audio Ducking on Command from Server
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.

Current Attempt & Observations: I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer. This works consistently when the app is active, but in the background, it only works about 60% of the time. In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio. Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time from my limited testing, even in the background. The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.

Best Approach to Achieve This? I'd like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
- Ensuring the app remains active in the background – Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
- Alternative triggering mechanisms – Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
- Built-in iOS speech synthesis (AVSpeechSynthesizer) – If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
- Streaming audio instead of sending a URL – Could continuous streaming from the server keep the app active and allow playback at the right moment?

I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated. Thank you for your time and guidance.
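For reference, a minimal sketch of the ducking behavior described above: activate a .duckOthers audio session, play the fetched URL, then deactivate so the other app's audio returns to full volume; the trigger and URL handling are assumed to come from the existing push path:

import AVFoundation

// Plays a short spoken-audio URL while ducking other apps' audio, then
// restores the ducked audio when playback finishes.
final class AnnouncementPlayer {
    private var player: AVPlayer?

    func play(url: URL) throws {
        let session = AVAudioSession.sharedInstance()
        // .duckOthers lowers background music instead of interrupting it.
        try session.setCategory(.playback, options: [.duckOthers])
        try session.setActive(true)

        let item = AVPlayerItem(url: url)
        player = AVPlayer(playerItem: item)

        NotificationCenter.default.addObserver(
            forName: .AVPlayerItemDidPlayToEndTime,
            object: item,
            queue: .main
        ) { _ in
            // Deactivating with this option lets other audio return to full volume.
            try? session.setActive(false, options: [.notifyOthersOnDeactivation])
        }
        player?.play()
    }
}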
0 replies · 1 boost · 333 views · Feb ’25
Torch Freezes Ultra-Wide Camera When Switching Between Wide & Ultra-Wide Lenses (AVFoundation Bug?)
I'm developing an iOS app using AVFoundation for real-time video capture and object detection. While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.

Issue:
- If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes.
- If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes.
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.

Code snippet:

DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }

    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"

    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }

    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()

        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }

        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }

        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()

        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }

        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}

Expected Behavior:
- The torch should work when Ultra-Wide is active, just like in the iPhone Camera app.
- The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
- AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.

Questions & Help Needed:
- Is this a known issue with AVFoundation?
- How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
- What's the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?

Info:
- Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
- iOS Version: iOS 17.3 / iOS 18.0
- Xcode Version: 16.2
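For reference, a minimal sketch of a standard torch toggle on whichever AVCaptureDevice is currently active (device lookup and threading omitted); this is included for comparison, not a confirmed fix for the freeze:

import AVFoundation

// Toggle the torch on the given capture device, guarding against devices
// that report no torch or whose torch is temporarily unavailable.
func setTorch(_ on: Bool, for device: AVCaptureDevice) {
    guard device.hasTorch, device.isTorchAvailable else {
        print("Torch not available on \(device.localizedName)")
        return
    }
    do {
        try device.lockForConfiguration()
        if on {
            try device.setTorchModeOn(level: 1.0)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}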
0 replies · 0 boosts · 438 views · Feb ’25
CGImageMetadataTagCreate error when creating a gain map tag
I extracted the gain map info from an image using:

let url = Bundle.main.url(forResource: "IMG_1181", withExtension: "HEIC")
let source = CGImageSourceCreateWithURL(url! as CFURL, nil)
let portraitData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source!, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as! [AnyHashable: Any]
let metaData = portraitData[kCGImageAuxiliaryDataInfoMetadata] as! CGImageMetadata

Then I printed all the metadata tags:

func printMetadataProperties(from metadata: CGImageMetadata) {
    guard let tags = CGImageMetadataCopyTags(metadata) as? [CGImageMetadataTag] else {
        return
    }
    for tag in tags {
        if let prefix = CGImageMetadataTagCopyPrefix(tag) as String?,
           let namespace = CGImageMetadataTagCopyNamespace(tag) as String?,
           let key = CGImageMetadataTagCopyName(tag) as String?,
           let value = CGImageMetadataTagCopyValue(tag) {
            print("Namespace: \(namespace), Key: \(key), Prefix: \(prefix), value: \(value)")
        }
    }
}

// Namespace: http://ns.apple.com/ImageIO/1.0/, Key: hasXMP, Prefix: iio, value: True
// Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapVersion, Prefix: HDRGainMap, value: 131072
// Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapHeadroom, Prefix: HDRGainMap, value: 3.586325

I want to create a new CGImageMetadata and tags, but when it comes to the HDR tags it always fails to add them to the metadata:

let tag = CGImageMetadataTagCreate(
    "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
    "HDRGainMap" as CFString,
    "HDRGainMapHeadroom" as CFString,
    .default,
    3.56 as CFNumber
)
let path = "\(HDRGainMap):\(HDRGainMapHeadroom)" as CFString
let success = CGImageMetadataSetTagWithPath(metadata, nil, path, tag) // always false

The hasXMP tag works fine. Is the HDR namespace a private dict for Apple?
0 replies · 0 boosts · 559 views · Oct ’24
Why do I get .ts and .aac requests to the server when playing a downloaded HLS?
I am developing an app to stream and download DRM-protected HLS videos based on the official "FairPlay Streaming Server SDK". When I play a downloaded video, it asks the server for .ts or .aac segments, even though I have passed the path of the downloaded video to AVURLAsset. As a result, playback fails when the device is offline, such as in airplane mode. This behavior depends on the playback duration of the video and occurs when trying to download and play a video 19 hours or longer; it did not occur for videos with a playback time of 18 hours. The environment we checked is iOS 18.3. Our workaround for now is to limit the video playback time to 18 hours, but if possible we would like to allow download playback of videos longer than 19 hours. Does anyone have any information or a solution to this problem, for example if you have experienced this kind of behavior, or if you know that content longer than 19 hours cannot be played offline?

// load
let path = ".../***.movpkg" // Path of the downloaded file
videoAsset = AVURLAsset(url: path)
playerItem = AVPlayerItem(asset: videoAsset!)
player.replaceCurrentItem(with: playerItem)

// isPlayableOffline
print("videoAsset.assetCache.isPlayableOffline = \(videoAsset.assetCache.isPlayableOffline)") // true
0 replies · 0 boosts · 383 views · Feb ’25