AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

71 posts under the AVFoundation tag


Inconsistent FPS (20 FPS Issue) While Recording Video Using AVCaptureSession
Hi, I am recording video in my app and setting the FPS with the code below, but sometimes the video is recorded at 20 FPS. Can someone please let me know what I am doing wrong?

private func eightBitVariantOfFormat() -> AVCaptureDevice.Format? {
    let activeFormat = self.videoDeviceInput.device.activeFormat
    let fpsToBeSupported: Int = 60
    debugPrint("fpsToBeSupported - \(fpsToBeSupported)" as AnyObject)

    let allSupportedFormats = self.videoDeviceInput.device.formats
    debugPrint("all formats - \(allSupportedFormats)" as AnyObject)

    let activeDimensions = CMVideoFormatDescriptionGetDimensions(activeFormat.formatDescription)
    debugPrint("activeDimensions - \(activeDimensions)" as AnyObject)

    let filterBasedOnDimensions = allSupportedFormats.filter({
        (CMVideoFormatDescriptionGetDimensions($0.formatDescription).width == activeDimensions.width) &&
        (CMVideoFormatDescriptionGetDimensions($0.formatDescription).height == activeDimensions.height)
    })
    if filterBasedOnDimensions.isEmpty {
        // Dimension not found. Required format not found to handle.
        debugPrint("Dimension not found" as AnyObject)
        return activeFormat
    }
    debugPrint("filterBasedOnDimensions - \(filterBasedOnDimensions)" as AnyObject)

    let filterBasedOnMaxFrameRate = filterBasedOnDimensions.compactMap({ format -> AVCaptureDevice.Format? in
        let videoSupportedFrameRateRanges = format.videoSupportedFrameRateRanges
        if !videoSupportedFrameRateRanges.isEmpty {
            let contains = videoSupportedFrameRateRanges.contains(where: { Int($0.maxFrameRate) >= fpsToBeSupported })
            if contains {
                return format
            } else {
                return nil
            }
        } else {
            return nil
        }
    })
    debugPrint("allFormatsToBeSupported - \(filterBasedOnMaxFrameRate)" as AnyObject)

    guard !filterBasedOnMaxFrameRate.isEmpty else {
        debugPrint("Taking default active format as nothing found when filtered using desired FPS" as AnyObject)
        return activeFormat
    }

    var formatToBeUsed: AVCaptureDevice.Format!
    if let four_two_zero_v = filterBasedOnMaxFrameRate.first(where: {
        CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange // 'vide'/'420v'
    }) {
        formatToBeUsed = four_two_zero_v
    } else {
        // Take the first one from the array above.
        formatToBeUsed = filterBasedOnMaxFrameRate.first
    }

    do {
        try self.videoDeviceInput.device.lockForConfiguration()
        self.videoDeviceInput.device.activeFormat = formatToBeUsed
        self.videoDeviceInput.device.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: Int32(fpsToBeSupported))
        self.videoDeviceInput.device.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: Int32(fpsToBeSupported))
        if videoDeviceInput.device.isFocusModeSupported(.continuousAutoFocus) {
            self.videoDeviceInput.device.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
        }
        self.videoDeviceInput.device.unlockForConfiguration()
    } catch let error {
        debugPrint("\(error)" as AnyObject)
    }
    return formatToBeUsed
}
1 reply · 0 boosts · 410 views · Mar ’25
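
For the FPS question above, one thing worth checking (a hedged suggestion, not a confirmed diagnosis): assigning a sessionPreset after setting activeFormat silently reverts the format, and the format and both frame durations should be pinned inside the same lockForConfiguration() block. A minimal Swift sketch of that ordering, assuming device is the active capture device:

import AVFoundation

// Hedged sketch: pick a format supporting the target FPS and pin both frame
// durations in one lock. Do not assign session.sessionPreset afterwards —
// setting activeFormat already switches the session to .inputPriority.
func lockFrameRate(on device: AVCaptureDevice, fps: Int32) throws {
    guard let format = device.formats.first(where: { candidate in
        candidate.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= Double(fps) }
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: fps)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: fps)
    device.unlockForConfiguration()
}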
SDAVAssetExportSession or AVAssetWriter fails to process iPhone 16 video with spatial audio tracks
I have been using SDAVAssetExportSession to compress videos in an app I am building. Everything went smoothly until I got my new iPhone 16: on that device, the Spatial Audio camera setting is turned on by default, and SDAVAssetExportSession starts to fail. I know it has something to do with the audio settings. The current setting is something like this:

exportSession.audioSettings = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128000
]

This is also passed to the underlying AVAssetReader and AVAssetWriter objects. I am not experienced in this area and have had a hard time figuring it out. Does anyone know how to set up an AVAssetReader or AVAssetWriter to process video with spatial audio tracks? Thanks in advance.
1 reply · 0 boosts · 302 views · Mar ’25
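
A hedged sketch for the spatial-audio question above, untested on actual iPhone 16 recordings: since SDAVAssetExportSession drives an AVAssetReader/AVAssetWriter pair, one option is to downmix on the reader side so the AAC writer input only ever sees two channels. The function name and the split of settings are illustrative assumptions:

import AVFoundation

// Hedged sketch: decode (and downmix) the possibly-spatial audio track to
// stereo PCM when reading, then encode stereo AAC when writing.
func makeAudioPipeline(for asset: AVAsset) async throws -> (AVAssetReaderTrackOutput, AVAssetWriterInput)? {
    guard let audioTrack = try await asset.loadTracks(withMediaType: .audio).first else { return nil }

    // Reader output: linear PCM, forced to 2 channels.
    let readerSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVNumberOfChannelsKey: 2
    ]
    let readerOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: readerSettings)

    // Writer input: stereo AAC, matching the settings from the post.
    let writerSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100,
        AVEncoderBitRateKey: 128000
    ]
    let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: writerSettings)
    return (readerOutput, writerInput)
}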
WebM audio playback
Is it possible to play WebM audio on iOS, either with AVPlayer, AVAudioEngine, or some other API? Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen). Our use case is making it possible for iOS users to play back audio recorded in our web app (desktop versions of Chrome & Firefox only support WebM as a destination format for MediaRecorder).
1 reply · 0 boosts · 458 views · Mar ’25
CoreMediaErrorDomain -42709
Hello, I am developing a video streaming service that uses FairPlay. Since around February 20th, we have started receiving reports of CoreMediaErrorDomain -42709 errors. Unfortunately, there is no documentation from Apple that explains what this error means, so we are not sure how to address or fix the issue. Most of the users who reported this error are using iOS 18.2.1 and iOS 18.3.1. Could you please advise on what we should check or how we might resolve this error? Thank you very much for your help!
1 reply · 0 boosts · 487 views · Mar ’25
Torch Strobe not working in light (ambient light) environments on iOS 18.1
Since iOS 18.1 was released, users of our app, which relies on strobing the device torch, have been experiencing issues. We have narrowed this down to devices with an adaptive True Tone flash and have submitted a radar: FB15787160.

The issue seems to be caused by ambient light levels. In a dark room, the torch strobes exactly as effectively as in previous iOS versions. In a light room, outdoors, or near a window, the strobe runs for ~1s, then the torch gets stuck on for about half a second (less frequently it gets stuck off), then it strobes again for ~1s, and this behaviour repeats indefinitely. If we move to a darker environment and background and then foreground the app (this is required), the issue is resolved until moving to an area with higher ambient light again.

We have done a lot of debugging and discovered that turning off "Auto-Brightness" in Settings -> Accessibility -> Display & Text Size resolves the issue. We have also viewed logs in Console.app at the time the issue occurs, and the ambient light readings are quite sporadic at that moment: they transition from ~100 Lux to ~8000 Lux at the point the issue starts (seemingly caused by the rear sensor being affected by the torch). With "Auto-Brightness" turned off, these readings stay at lower levels.

This is rendering the primary use case of our app essentially useless, so it would be great to get to the bottom of it! We can't even really detect it in-app, as I believe SensorKit is restricted to research applications and requires a review process with Apple.

Edit: It's worth noting this also affects other apps with strobe functionality in exactly the same way.
8 replies · 5 boosts · 984 views · Mar ’25
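
The post above includes no code; for readers unfamiliar with the mechanism it describes, a hypothetical minimal strobe loop using the torch API looks roughly like this (illustration only, not the poster's implementation):

import AVFoundation

// Hypothetical strobe loop: toggle the torch on a background thread.
// On affected devices the post reports the 'on' phase getting stuck
// when ambient light is high.
func strobe(device: AVCaptureDevice, cycles: Int, interval: TimeInterval) {
    guard device.hasTorch else { return }
    for _ in 0..<cycles {
        try? device.lockForConfiguration()
        try? device.setTorchModeOn(level: AVCaptureDevice.maxAvailableTorchLevel)
        device.unlockForConfiguration()
        Thread.sleep(forTimeInterval: interval)

        try? device.lockForConfiguration()
        device.torchMode = .off
        device.unlockForConfiguration()
        Thread.sleep(forTimeInterval: interval)
    }
}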
AVPlayer playing protected HLS stalls briefly when the session expires and a new session is created
I am playing protected HLS streams whose authorization token expires every 3 minutes. I am trying to handle this with AVAssetResourceLoaderDelegate. I can refresh the token and keep playing, but in between sessions the player stalls briefly, for about 1 second. Here's my code:

class APLCustomAVARLDelegate: NSObject, AVAssetResourceLoaderDelegate {
    static let httpsScheme = "https"
    static let redirectErrorCode = 302
    static let badRequestErrorCode = 400

    private var token: String?
    private var retryDictionary = [String: Int]()
    private let maxRetries = 3

    private func schemeSupported(_ scheme: String) -> Bool {
        let supported = isHttpsSchemeValid(scheme)
        print("Scheme '\(scheme)' supported: \(supported)")
        return supported
    }

    private func reportError(loadingRequest: AVAssetResourceLoadingRequest, error: Int) {
        let nsError = NSError(domain: NSURLErrorDomain, code: error, userInfo: nil)
        print("Reporting error: \(nsError)")
        loadingRequest.finishLoading(with: nsError)
    }

    // Handle token renewal requests to prevent playback stalls
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForRenewalOfRequestedResource renewalRequest: AVAssetResourceRenewalRequest) -> Bool {
        print("Resource renewal requested for URL: \(renewalRequest.request.url?.absoluteString ?? "unknown URL")")
        // Handle renewal the same way we handle initial requests
        guard let scheme = renewalRequest.request.url?.scheme else {
            print("No scheme found in the renewal URL.")
            return false
        }
        if isHttpsSchemeValid(scheme) {
            return handleHttpsRequest(renewalRequest)
        }
        print("Scheme not supported for renewal.")
        return false
    }

    private func isHttpsSchemeValid(_ scheme: String) -> Bool {
        let isValid = scheme == APLCustomAVARLDelegate.httpsScheme
        print("https scheme '\(scheme)' valid: \(isValid)")
        return isValid
    }

    private func generateHttpsURL(sourceURL: URL) -> URL? {
        // If you need to modify the URL, do it here.
        // Currently this just returns the same URL.
        let urlString = sourceURL.absoluteString
        print("Generated HTTPS URL: \(urlString)")
        return URL(string: urlString)
    }

    private func handleHttpsRequest(_ loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        print("Handling HTTPS request.")
        guard let sourceURL = loadingRequest.request.url,
              var redirectURL = generateHttpsURL(sourceURL: sourceURL) else {
            print("Failed to generate HTTPS URL.")
            reportError(loadingRequest: loadingRequest, error: APLCustomAVARLDelegate.badRequestErrorCode)
            return true
        }

        // Track retry attempts with a dictionary keyed by request URL
        let urlString = sourceURL.absoluteString
        let currentRetries = retryDictionary[urlString] ?? 0
        if currentRetries < maxRetries {
            retryDictionary[urlString] = currentRetries + 1
        } else {
            // Too many retries, report a more specific error
            reportError(loadingRequest: loadingRequest, error: NSURLErrorTimedOut)
            retryDictionary.removeValue(forKey: urlString)
            return true
        }

        if var urlComponents = URLComponents(url: redirectURL, resolvingAgainstBaseURL: false) {
            var queryItems = urlComponents.queryItems ?? []
            // Generate a fresh token each time
            let freshToken = AESTimeBaseEncription.secureEncryptSecretText()
            // Check if the token already exists
            if let existingTokenIndex = queryItems.firstIndex(where: { $0.name == "token" }) {
                // Update the existing token
                queryItems[existingTokenIndex].value = freshToken
            } else {
                // Add the token if it doesn't exist
                queryItems.append(URLQueryItem(name: "token", value: freshToken))
            }
            urlComponents.queryItems = queryItems
            redirectURL = urlComponents.url!
        }

        let redirectRequest = URLRequest(url: redirectURL)
        let response = HTTPURLResponse(url: redirectURL,
                                       statusCode: APLCustomAVARLDelegate.redirectErrorCode,
                                       httpVersion: nil, headerFields: nil)
        print("Redirecting HTTPS to URL: \(redirectURL)")
        loadingRequest.redirect = redirectRequest
        loadingRequest.response = response
        loadingRequest.finishLoading()

        // If successful, reset the retry counter
        if retryDictionary[urlString] == maxRetries {
            retryDictionary.removeValue(forKey: urlString)
        }
        return true
    }
}
0 replies · 0 boosts · 489 views · Mar ’25
HDR video metadata
On an iOS 18 phone, I use AVCaptureSession to capture HDR video in the x420 format. The output CMSampleBuffer is in the HLG color space, and the propagated attachments contain kCVImageBufferAmbientViewingEnvironmentKey and kCVImageBufferSceneIlluminationKey. I then use CAMetalLayer to render the CVPixelBuffer to the screen, but the result is brighter than with AVSampleBufferDisplayLayer. Here is my code:

- (void)_updateColorSpaceIfNeed:(CVPixelBufferRef)pixelBuffer {
    CAMetalLayer *layer = (CAMetalLayer *)_mtkView.layer;
    if (![layer isKindOfClass:CAMetalLayer.class]) return;
    layer.wantsExtendedDynamicRangeContent = YES;

    CFDataRef ambientViewingEnvironment =
        (CFDataRef)CVBufferCopyAttachment(pixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, NULL);
    // __bridge_transfer hands ownership to ARC so the data stays valid below
    // (the original code CFReleased it before use).
    NSData *data = (__bridge_transfer NSData *)ambientViewingEnvironment;

    CAEDRMetadata *metadata = [CAEDRMetadata HLGMetadataWithAmbientViewingEnvironment:data];
    // CAEDRMetadata *metadata = [CAEDRMetadata HLGMetadata];
    layer.EDRMetadata = metadata;
    layer.pixelFormat = MTLPixelFormatRGBA16Float;

    CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_2100_HLG);
    layer.colorspace = colorspace;
    if (colorspace) CGColorSpaceRelease(colorspace);
}

Why does the CAEDRMetadata class have "HLGMetadataWithAmbientViewingEnvironment:" and "HLGMetadata" methods, but not an "HLGMetadataWithAmbientViewingEnvironment:sceneIllumination:" variant? I want to know how kCVImageBufferAmbientViewingEnvironmentKey and kCVImageBufferSceneIlluminationKey affect tone mapping. Is there any documentation I can refer to?
1 reply · 0 boosts · 456 views · Mar ’25
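
As a companion to the post above, a Swift sketch for inspecting the two attachments it mentions; how the system weighs them during tone mapping is exactly the open question, so this only shows how to read them:

import CoreVideo

// Hedged sketch: copy both HDR-related attachments off a CVPixelBuffer.
func logHDRAttachments(of pixelBuffer: CVPixelBuffer) {
    if let environment = CVBufferCopyAttachment(pixelBuffer,
                                                kCVImageBufferAmbientViewingEnvironmentKey,
                                                nil) as? Data {
        print("ambientViewingEnvironment: \(environment.count) bytes")
    }
    if let illumination = CVBufferCopyAttachment(pixelBuffer,
                                                 kCVImageBufferSceneIlluminationKey,
                                                 nil) {
        print("sceneIllumination: \(illumination)")
    }
}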
CoreMediaErrorDomain -12035 error when playing a Fairplay-protected HLS stream on iOS 18+ through the Apple lightning AV Adapter
Our iOS/Apple TV video content playback app uses AVPlayer to play HLS video streams and supports both custom and system playback UIs. The FairPlay content key is retrieved using AVContentKeySession. AirPlay is supported too. When the iPhone is connected to a TV through the Lightning Apple Digital AV Adapter (A1438), the app is mirrored as expected.

Problem: when using an iPhone or iPad on iOS 18.1.1, FairPlay-protected HLS streams are not played and a CoreMediaErrorDomain -12035 error is received by the AVPlayerItem. Also, once the issue has occurred, the mirroring freezes (the TV indefinitely displays the app playback screen), although the app works fine on the iOS device. The content key retrieval works as expected (I can see that 2 content key requests are made by the system, probably one for local playback and one for the adapter, as when AirPlaying), and the error is thrown after providing the AVContentKeyResponse. Unfortunately, and as far as I know, there is no documentation on CoreMediaErrorDomain errors, so I don't know what -12035 means.

The issue does not occur:
- on an iPhone on iOS 17.7 (even with FairPlay-protected HLS streams)
- when playing DRM-free video content (whatever the iOS version)
- when using the USB-C AV Adapter (whatever the iOS version)

Also worth noting: the issue does not occur with other video playback apps such as Apple TV or Netflix, although I don't have any details on the kind of streams those apps play or the way the FairPlay content key is retrieved (if any), so I don't know if that is relevant.
5 replies · 2 boosts · 1.4k views · Feb ’25
SwiftUI infinite loop issue with @Environment(\.verticalSizeClass)
I see the SwiftUI body being called repeatedly in an infinite loop in the presence of Environment variables like horizontalSizeClass or verticalSizeClass. This happens after the device is rotated from portrait to landscape and then back to portrait. The deinit method of TestPlayerVM is called repeatedly. Minimally reproducible sample code is pasted below. The infinite loop is not seen if I remove the size class environment references, OR if I skip the addPlayerObservers call in the TestPlayerVM initialiser.

import SwiftUI
import AVKit
import Combine

struct InfiniteLoopView: View {
    @Environment(\.verticalSizeClass) var verticalSizeClass
    @Environment(\.horizontalSizeClass) var horizontalSizeClass
    @State private var openPlayer = false
    @State var playerURL: URL = URL(fileURLWithPath: Bundle.main.path(forResource: "Test_Video", ofType: ".mov")!)

    var body: some View {
        PlayerView(playerURL: playerURL)
            .ignoresSafeArea()
    }
}

struct PlayerView: View {
    @Environment(\.dismiss) var dismiss
    var playerURL: URL
    @State var playerVM = TestPlayerVM()

    var body: some View {
        VideoPlayer(player: playerVM.player)
            .ignoresSafeArea()
            .background { Color.black }
            .task {
                let playerItem = AVPlayerItem(url: playerURL)
                playerVM.playerItem = playerItem
            }
    }
}

@Observable
class TestPlayerVM {
    private(set) public var player: AVPlayer = AVPlayer()

    var playerItem: AVPlayerItem? {
        didSet {
            player.replaceCurrentItem(with: playerItem)
        }
    }

    private var cancellable = Set<AnyCancellable>()

    init() {
        addPlayerObservers()
    }

    deinit {
        print("Deinit Video player manager")
        removeAllObservers()
    }

    private func removeAllObservers() {
        cancellable.removeAll()
    }

    private func addPlayerObservers() {
        player.publisher(for: \.timeControlStatus, options: [.initial, .new])
            .receive(on: DispatchQueue.main)
            .sink { timeControlStatus in
                print("Player time control status \(timeControlStatus)")
            }
            .store(in: &cancellable)
    }
}
2 replies · 0 boosts · 450 views · Feb ’25
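
A workaround sketch for the loop described above, assuming the cause is the @State initializer expression re-running (and allocating a throwaway TestPlayerVM with observer side effects) on every size-class-driven re-init of PlayerView; creating the model lazily avoids side effects during View construction. Hypothetical, not a confirmed fix:

import SwiftUI
import AVKit

struct LazyPlayerView: View {
    let playerURL: URL
    @State private var playerVM: TestPlayerVM?   // no TestPlayerVM() in the property initializer

    var body: some View {
        VideoPlayer(player: playerVM?.player)
            .ignoresSafeArea()
            .task {
                // Runs once when the view appears; rotation does not re-run it
                // for an existing view identity.
                guard playerVM == nil else { return }
                let vm = TestPlayerVM()
                vm.playerItem = AVPlayerItem(url: playerURL)
                playerVM = vm
            }
    }
}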
Accessing AV External Storage
Is it possible to use AVExternalStorageDevice to access external storage from a connected camera or USB drive (via USB-C or Lightning connector) on an iPad/iPhone? I have tested the following code on an iPhone 14 (iOS 18.1.1) and an iPad 10th gen (18.3.1), and both return false:

// returns false on iPhone 14, iPad gen 10
print(AVExternalStorageDeviceDiscoverySession.isSupported)

The following returns nil when I try to access the external storage discovery session:

// returns nil on iOS devices
print(AVExternalStorageDeviceDiscoverySession.shared)

The following returns false, without displaying a permission dialog:

AVExternalStorageDevice.requestAccess(completionHandler: { (granted: Bool) in
    // returns false with no permission dialog
    print(granted)
})

What types of iOS devices are supported by AVExternalStorageDeviceDiscoverySession? What situations has it been used for (e.g., connecting to a camera via the external storage protocol, accessing photos from an SD card with an adapter, accessing photos from a USB drive)? Is there any sample code for using the AVExternalStorage API?
0 replies · 0 boosts · 429 views · Feb ’25
AVContentKeySession reuse
Context: We develop an iOS/Apple TV app that plays HLS+FP live streams (custom playback UI), some of which use the same FairPlay content key id. All FairPlay content keys are requested from the same content key server.

Implementation: Despite Apple documentation warning not to reuse AVContentKeySessions, we use a single AVContentKeySession for all channels, which allows the system to reuse a content key when a content key id is met again. As seen in another thread, people seem to think this is OK.

Issue: When reusing the AVContentKeySession and the user quickly tunes channels multiple times (up to 2 or 3 times per second using gestures), an inconsistency may occur where the content key request for a previous stream is sent to the delegate after a new stream is already being prepared and its AVURLAsset already assigned as the content key session's AVContentKeyRecipient. Note that the previous content key recipient is removed before the new one is added. We have also received crash reports (though I haven't experienced a crash myself) when performing multiple channel tunings, which makes us think the AVContentKeySession should definitely not be reused.

Note: On the other hand, if a new AVContentKeySession is used for each stream, the system systematically requests a content key even if previous streams have used the same content key id. In this case, neither the crash nor the inconsistency is observed, but it dramatically increases the number of calls to the content key server.

Questions:
- Should AVContentKeySessions definitely not be reused?
- Otherwise, how should the inconsistency described above be handled?
0 replies · 0 boosts · 437 views · Feb ’25
Distortion corrected images
Hi, I am currently developing a 3D reconstruction project that requires images to be distortion-free (rectilinear) and with known intrinsics. The capture session uses a builtInDualWideCamera, with:
- isGeometricDistortionCorrectionEnabled = false, to be able to get the intrinsic matrix of the images
- isVirtualDeviceConstituentPhotoDeliveryEnabled = true and isAutoVirtualDeviceFusionEnabled = false, to get both images
- isCameraCalibrationDataDeliveryEnabled = true, to actually get the calibration data

The distortion correction parameters such as lensDistortionLookupTable are used. The 42-coefficient mapping array is used as described in the AVCameraCalibrationData header file, with simple piecewise linear interpolation.

There are two questions I would like to get support on:

1. A way to store the calibration parameters in each image. My current approach writes them into kCGImagePropertyExifDictionary -> "UserComment". Is there a better approach to write calibration parameter data into the images? This feels a bit dirty, and there might be a neater approach.

2. For the ultra-wide camera's images, the lensDistortionLookupTable contains several zeros at the end of the array. For example (last 10 elements are zero):

"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"

The problem is that when the complete array (including the zeros) is used to correct the image, the result is wrapped in a circle-like way near the edges, which is completely wrong. In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the new size is accommodated, the image looks better (although not as rectilinear as a photo from the iPhone camera app), but definitely less distorted.

[Screenshot: result including zeros (full array)]
[Screenshot: result excluding zeros (array size changed)]

Am I missing an important point in the usage of the lensDistortionLookupTable where this case (zeros at the end) is addressed? What is the criterion to shrink/exclude elements of the array? Any advice is very much welcome.
0 replies · 0 boosts · 431 views · Feb ’25
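
For reference on the post above, a sketch of the piecewise linear lookup described in the AVCameraCalibrationData header, with the trailing zeros trimmed before interpolation. Whether trimming is the intended handling is exactly the poster's open question, so treat the trimming step as an assumption:

// Hedged sketch: evaluate the lens distortion lookup table at radius r,
// trimming trailing zeros first (assumption, not documented behavior).
func magnification(forRadius r: Float, maxRadius: Float, table: [Float]) -> Float {
    var values = table
    while let last = values.last, last == 0, values.count > 2 {
        values.removeLast()
    }
    // Map r into table index space and interpolate between neighbors.
    let position = (r / maxRadius) * Float(values.count - 1)
    let index = min(Int(position), values.count - 2)
    let fraction = position - Float(index)
    return values[index] + fraction * (values[index + 1] - values[index])
}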
Bundle Display Name Swift Playgrounds
I am attempting to use AVSpeechSynthesizer to include text-to-speech in my Swift Playgrounds project, but when I attempt to use it on my iPhone I get the following errors: [screenshot of errors not included]. It works just fine in the simulator, but no audio is produced when I run it on-device. Is there a way to set this "bundle display name" within Swift Playgrounds, or are there any workarounds? My code:

import AVFoundation

class Speaker {
    let synthesizer = AVSpeechSynthesizer()

    func speak() {
        let utterance = AVSpeechUtterance(string: "Test, I am the speaker")
        synthesizer.speak(utterance)
    }
}
1 reply · 1 boost · 475 views · Feb ’25
Distorted Audio When Recording External Mics With AVCaptureSession and AVAssetWriter
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB. For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).

The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app). The problem: noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:

Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets

It is easy to reproduce. The problem occurs even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings using a preset from an AVOutputSettingsAssistant. I’m running on macOS 15.0 (24A335). I’ve filed a feedback including a demo project → FB15333298 🎟️

I would greatly appreciate any help! Have a great day, Martin
6 replies · 0 boosts · 896 views · Feb ’25
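
For readers of the post above, a hedged sketch of the pipeline it describes (capture-session audio into an AVAssetWriter); this is not the poster's project and does not reproduce the bug by itself:

import AVFoundation
import CoreMedia

// Hedged sketch: record one audio device to an .m4a file.
// Stopping the session and writer.finishWriting are omitted for brevity.
final class AudioRecorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let audioOutput = AVCaptureAudioDataOutput()
    var writer: AVAssetWriter?
    var writerInput: AVAssetWriterInput?

    func start(device: AVCaptureDevice, to url: URL) throws {
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input), session.canAddOutput(audioOutput) else { return }
        session.addInput(input)
        audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture"))
        session.addOutput(audioOutput)

        let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .m4a)
        let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        writerInput.expectsMediaDataInRealTime = true
        writer.add(writerInput)
        self.writer = writer
        self.writerInput = writerInput
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let writer, let writerInput else { return }
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }
}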
New playback error on iOS/tvOS 18.x "CoreMediaErrorDomain Code=-15486"
Hello, Our users have started to see a new fatal AVPlayer error during playback starting with iOS/tvOS 18.0. The error is defined as "CoreMediaErrorDomain Code=-15486". We have not been able to reproduce this issue locally within our development team. Is there any documentation on the cause of this error or steps to recover from this error? Thank you, Howard
0 replies · 2 boosts · 693 views · Feb ’25
Question about how to access the camera in Swift Playgrounds
When I was working on my project for the Swift Student Challenge, I found an interesting fact regarding camera access. If I create an App project in Xcode, camera capture works well. However, if I copy and paste the same code into an App Playground project in Xcode, it crashes and outputs several errors. I am wondering why this is happening.
1 reply · 0 boosts · 487 views · Feb ’25
On iOS 18, Mandarin is read aloud as Cantonese
Please include the line below in follow-up emails for this request.
Case-ID: 11089799

When using AVSpeechUtterance and setting it to play in Mandarin, if Siri is set to Cantonese on iOS 18, it will be played in Cantonese. There is no such issue on iOS 17 and 16.

1. Code:

let utterance = AVSpeechUtterance(string: textView.text)
let voice = AVSpeechSynthesisVoice(language: "zh-CN")
utterance.voice = voice

2. In the phone settings, Siri is set to Cantonese.
3 replies · 1 boost · 554 views · Feb ’25
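
A hedged workaround sketch for the post above: pin an explicit Mandarin voice from the installed list instead of relying on the "zh-CN" language shortcut, which the post suggests iOS 18 resolves against the Siri language setting:

import AVFoundation

let synthesizer = AVSpeechSynthesizer()   // keep a strong reference while speaking
let utterance = AVSpeechUtterance(string: "你好")
// Prefer an explicitly Mandarin (zh-CN) voice if one is installed.
let mandarin = AVSpeechSynthesisVoice.speechVoices().first { $0.language == "zh-CN" }
utterance.voice = mandarin ?? AVSpeechSynthesisVoice(language: "zh-CN")
synthesizer.speak(utterance)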
How to receive AVMetricEvent performance data?
I would like to look at AVMetricEvent data during video playback, so I have added this code to a test video player app:

let playerItem: AVPlayerItem = ...
let allMetrics = playerItem.allMetrics()

Task.init {
    print("metrics task started")
    do {
        for try await metricEvent in allMetrics {
            print("metric event: \(metricEvent.description)")
        }
    } catch {
        print("unexpected metric iterator error \(error)")
    }
}

Running this in the Simulator on iPhone 16 Pro (18.0) does not result in any "metric event" diagnostic messages being printed while the video associated with this AVPlayerItem is playing. Only the "metrics task started" diagnostic message is seen. What am I doing wrong that prevents metric data from being received?
2 replies · 0 boosts · 396 views · Feb ’25