Hello,
I work on a video streaming app, and I have been working on this crash that we are seeing quite frequently (it is our #2 crasher at the moment). The stack trace indicates that an AVPictureInPictureController is being deallocated on a background thread. This leads to dangling AutoLayout constraints getting cleaned up, further resulting in an exception being thrown from the framework about the layout engine being accessed from a background thread.
Our internal analytics indicate that the crash occurs when the user returns to the app from the background and switches playback from Picture in Picture mode back to the app's regular playback UI.
What has me puzzled here is that, as I'm sure you know, we have no control over the PIP UI. It is entirely system-provided, and there is none of our app's code in the stack trace. The whole process is even initiated by a KVO on an AVPlayerController property, and our app doesn't use that class directly anywhere. So how did we manage to cause this process to happen on a background thread?
Add to this the fact that the bug only appeared once we switched to compiling with Xcode 16, and is overwhelmingly present only on devices running iOS 18.
These factors lead me to believe that this is probably an OS issue. But before I file a Feedback report, I thought I would post here in case anyone has any ideas. I have attached an instance of the crash log to this post.
2025-01-08_17-32-45.7003_-0800-ea8d5c3323e0f1fc059cf83f6ec86377bdae1788.crash
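In the meantime, this is the kind of mitigation we are considering — a minimal sketch (names are ours, not from the crash) that forces the last strong reference to the PiP controller to be released on the main thread, on the assumption that the dealloc-driven AutoLayout teardown would then happen there too:

import AVKit

// Sketch: hand the final strong reference to the main queue before letting go
// of it, so dealloc (and any constraint cleanup) runs on the main thread.
final class PiPControllerBox {
    var controller: AVPictureInPictureController?

    deinit {
        if let controller, !Thread.isMainThread {
            DispatchQueue.main.async {
                _ = controller // last reference released on the main thread
            }
            self.controller = nil
        }
    }
}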
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app works, and Picture in Picture works, except in one situation: if I pause the video in the application and then switch to the background, PiP does not activate. What am I doing wrong?
import UIKit
import AVKit
import AVFoundation
class ViewControllerSec: UIViewController, AVPictureInPictureControllerDelegate {
var pipPlayer: AVPlayer!
var avCanvas : UIView!
var pipCanvas: AVPlayerLayer?
var pipController: AVPictureInPictureController!
var mainViewControler : UIViewController!
var playerItem : AVPlayerItem!
var videoAvasset : AVAsset!
public func link(to parentViewController : UIViewController) {
mainViewControler = parentViewController
setup()
}
@objc func appWillResignActiveNotification(application: UIApplication) {
guard let pipController = pipController else {
print("PiP not supported")
return
}
print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
print("PIP isPossible: \(pipController.isPictureInPicturePossible)"
if playerItem.status == .readyToPlay {
if pipPlayer.rate == 0 {
pipPlayer.play()
}
pipController.startPictureInPicture() // Error in log: "Failed to start picture in picture."
} else {
print("Player not ready for PiP.")
}
}
private func setupAudio() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .moviePlayback)
try session.setActive(true)
} catch {
print("Audio session setup failed: \(error.localizedDescription)")
}
}
@objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
print("Failed to play to end: \(error.localizedDescription)")
}
}
func setup() {
setupAudio()
guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
videoAvasset = AVAsset(url: videoURL)
playerItem = AVPlayerItem(asset: videoAvasset)
addPlayerObservers()
pipPlayer = AVPlayer(playerItem: playerItem)
avCanvas = UIView(frame: view.bounds)
pipCanvas = AVPlayerLayer(player: pipPlayer)
guard let pipCanvas else { return }
pipCanvas.frame = avCanvas.bounds
//pipCanvas.videoGravity = .resizeAspectFill
mainViewControler.view.addSubview(avCanvas)
avCanvas.layer.addSublayer(pipCanvas)
if AVPictureInPictureController.isPictureInPictureSupported() {
pipController = AVPictureInPictureController(playerLayer: pipCanvas)
pipController?.delegate = self
pipController?.canStartPictureInPictureAutomaticallyFromInline = true
}
let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
playButton.setTitle("Play", for: .normal)
playButton.backgroundColor = .blue
playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
mainViewControler.view.addSubview(playButton)
let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
pauseButton.setTitle("Pause", for: .normal)
pauseButton.backgroundColor = .red
pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
mainViewControler.view.addSubview(pauseButton)
let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
pipButton.setTitle("Start PiP", for: .normal)
pipButton.backgroundColor = .green
pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
mainViewControler.view.addSubview(pipButton)
print("Error:\(String(describing: pipPlayer.error?.localizedDescription))")
NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: nil) { [weak self] _ in
guard let self = self else { return }
if self.pipPlayer.rate == 0 {
self.pipPlayer.play()
self.pipController?.startPictureInPicture()
}
}
}
func addPlayerObservers() {
playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)), name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "status" {
if let statusNumber = change?[.newKey] as? NSNumber {
let status = AVPlayerItem.Status(rawValue: statusNumber.intValue)! // observing the item's "status", so use AVPlayerItem.Status
switch status {
case .readyToPlay:
print("Player is ready to play")
case .failed:
print("Player failed: \(String(describing: playerItem?.error))")
case .unknown:
print("Player status is unknown")
@unknown default:
fatalError()
}
}
}
}
@objc func playerDidFinishPlaying(_ notification: Notification) {
print("Video finished playing.")
}
deinit {
playerItem?.removeObserver(self, forKeyPath: "status")
NotificationCenter.default.removeObserver(self)
}
@objc func playTapped() {
pipPlayer.play()
}
@objc func pauseTapped() {
pipPlayer.pause()
}
@objc func startPictureInPicture() {
if let pipController = pipController, !pipController.isPictureInPictureActive {
pipController.startPictureInPicture()
}
}
@objc func stopPictureInPicture() {
if let pipController = pipController, pipController.isPictureInPictureActive {
pipController.stopPictureInPicture()
}
}
func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: Error) {
print("Failed to start PiP: \(error.localizedDescription)")
if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
print("Underlying error: \(underlyingError)")
}
}
}
In my project, I want to add an emoji overlay to my videos, but the emoji image becomes dark when it is added to an HDR video. I have tried converting the emoji image to an HDR format, and creating it in an HDR format in the first place, but neither seems to work. I hope someone with experience in this area can help me.
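One hedged idea, assuming the dimming comes from SDR white being tone-mapped well below the HDR peak: brighten the overlay before compositing. A minimal Core Image sketch — the gain value is a placeholder you would tune or derive from the content's headroom:

import CoreImage

// Scale the overlay's RGB by a gain so SDR white approaches the HDR peak.
// `emojiImage` and `gain` are illustrative, not from a specific API.
func brightenedOverlay(_ emojiImage: CIImage, gain: CGFloat) -> CIImage {
    emojiImage.applyingFilter("CIColorMatrix", parameters: [
        "inputRVector": CIVector(x: gain, y: 0, z: 0, w: 0),
        "inputGVector": CIVector(x: 0, y: gain, z: 0, w: 0),
        "inputBVector": CIVector(x: 0, y: 0, z: gain, w: 0),
    ])
}

This scales in the working color space, so it is only an approximation; a float/extended-range CIContext would be needed for the result to survive compositing into HDR.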
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.'
*** First throw call stack:
(0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39)
libc++abi: terminating due to uncaught exception of type NSException
My previous code worked fine, but it's crashing with Swift 6.
Does anyone know a solution for this?
## Previous Code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler = exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
// Periodically check the progress of the export session
let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
audioExportProgressPublisher.send(exportSession.progress)
}
// Export the audio track asynchronously
exportSession.exportAsynchronously {
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
timer.invalidate()
}
}
## New Code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) async {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
if #available(iOS 18.0, *) {
do {
try await exportSession.export(to: outputURL, as: .m4a)
let states = exportSession.states(updateInterval: 0.1)
for await state in states {
switch state {
case .pending, .waiting:
break
case .exporting(progress: let progress):
print("Exporting: \(progress.fractionCompleted)")
if progress.isFinished {
completion(.success(outputURL))
} else if progress.isCancelled {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
} else {
audioExportProgressPublisher.send(Float(progress.fractionCompleted))
}
}
}
} catch let error {
print(error.localizedDescription)
}
} else {
// Periodically check the progress of the export session
let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common)
.autoconnect()
.sink { [weak exportSession] _ in
guard let exportSession else { return }
audioExportProgressPublisher.send(exportSession.progress)
}
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
await exportSession.export()
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
publishTimer.cancel()
}
}
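Update: I now suspect that in the iOS 18 branch the states sequence is only iterated after export(to:as:) has already finished, so the progress updates and the success callback can never fire. A sketch of the pattern I believe the new API expects — monitor states in a parallel task, then await the export itself (unverified against my full pipeline):

// Monitor progress concurrently; export(to:as:) itself decides success/failure.
let monitor = Task {
    for await state in exportSession.states(updateInterval: 0.1) {
        if case .exporting(let progress) = state {
            audioExportProgressPublisher.send(Float(progress.fractionCompleted))
        }
    }
}
do {
    try await exportSession.export(to: outputURL, as: .m4a)
    completion(.success(outputURL))
} catch {
    completion(.failure(error))
}
monitor.cancel()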
Hi all, we are trying to migrate our project to Swift 6.
The project uses AVPlayer on the MainActor.
Selecting audio and subtitle tracks no longer compiles.
Task { @MainActor in
let group = try await item.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible)
}
and we get the error: Non-sendable type 'AVMediaSelectionGroup?' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary
And a second example:
if #available(iOS 15.0, *) {
player?.currentItem?.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible, completionHandler: { group, error in
if error != nil {
return
}
if let groupWrp = group {
DispatchQueue.main.async {
self.setupAudio(groupWrp, audio: audioLang)
}
}
})
}
and we get the error: Sending 'groupWrp' risks causing data races
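The workaround we are experimenting with (not sure it is the blessed approach) is to keep the non-Sendable AVMediaSelectionGroup inside a single isolation domain: load the group and apply the selection in one function that is not main-actor isolated, and only hop back to the main actor for UI work. A sketch with hypothetical names; whether the player item can be passed across cleanly under strict concurrency may itself need attention:

import AVFoundation

// Load and apply the audio selection in one place so the non-Sendable
// AVMediaSelectionGroup never crosses an actor boundary.
func selectAudioTrack(language: String, item: AVPlayerItem) async throws {
    guard let group = try await item.asset.loadMediaSelectionGroup(for: .audible) else { return }
    let matches = AVMediaSelectionGroup.mediaSelectionOptions(
        from: group.options,
        with: Locale(identifier: language)
    )
    if let option = matches.first {
        item.select(option, in: group)
    }
}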
We have a Push To Talk application which allows users to record video and audio.
When a user is recording video using AVCaptureSession and receives a Push To Talk call, the audio in the video being captured stops from the moment the call is received, while the video capture continues.
After the PTT call is completed, we have tried restarting the audio session; no errors are printed, but the audio still does not resume in the video capture.
We have also tried adding a new input to the AVCaptureSession, but we receive an error that results in the video capture stopping. The error is below:
[OS-PLT] [CameraManager] Movie file finished with error: Error Domain=AVFoundationErrorDomain Code=-11818 "Recording Stopped" UserInfo={AVErrorRecordingSuccessfullyFinishedKey=true, NSLocalizedDescription=Recording Stopped, NSLocalizedRecoverySuggestion=Stop any other actions using the recording device and try again., AVErrorRecordingFailureDomainKey=1, NSUnderlyingError=0x3026bff60 {Error Domain=NSOSStatusErrorDomain Code=-16414 "(null)"}}, success
We have also raised a Feedback Ticket on same: https://feedbackassistant.apple.com/feedback/16050598
I have seen demos that convert HDR video to SDR pixel buffers using AVAssetReader, AVVideoComposition, AVComposition, and related AVFoundation APIs.
But in some cases, I want to render HDR pixel buffers directly and record video:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([videoDevice isVideoHDRSupported]) {
NSError *error = nil;
if ([videoDevice lockForConfiguration:&error]) {
videoDevice.automaticallyAdjustsVideoHDREnabled = NO;
videoDevice.videoHDREnabled = YES; // Enable HDR
[videoDevice unlockForConfiguration];
} else {
NSLog(@"Error: %@", error.localizedDescription);
}
}
Real-time processing of the HDR data requires operating on the video frame data (for example, applying filters) while ensuring that the processing chain supports 10-bit color depth and HDR metadata, and also using the image buffers for object tracking and similar tasks.
How can I solve this problem?
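To keep the pipeline in HDR end to end, I believe the capture side has to be put into a 10-bit HDR format explicitly. A hedged Swift sketch (untested; depending on the session configuration you may also need to disable the session's automatic wide-color configuration) of selecting a 10-bit format that supports HLG:

import AVFoundation

// Pick a 10-bit ('x420') format that supports the HLG BT.2020 color space,
// then opt the device into it so the delivered pixel buffers stay HDR.
func configureHDRCapture(device: AVCaptureDevice) throws {
    let tenBit = kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
    guard let format = device.formats.first(where: {
        CMFormatDescriptionGetMediaSubType($0.formatDescription) == tenBit &&
        $0.supportedColorSpaces.contains(.HLG_BT2020)
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeColorSpace = .HLG_BT2020
    device.unlockForConfiguration()
}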
Hi,
I am recording video in my app, and I set the FPS using the code below. But sometimes the video is recorded at 20 FPS. Can someone please let me know what I am doing wrong?
private func eightBitVariantOfFormat() -> AVCaptureDevice.Format? {
let activeFormat = self.videoDeviceInput.device.activeFormat
let fpsToBeSupported: Int = 60
debugPrint("fpsToBeSupported - \(fpsToBeSupported)" as AnyObject)
let allSupportedFormats = self.videoDeviceInput.device.formats
debugPrint("all formats - \(allSupportedFormats)" as AnyObject)
let activeDimensions = CMVideoFormatDescriptionGetDimensions(activeFormat.formatDescription)
debugPrint("activeDimensions - \(activeDimensions)" as AnyObject)
let filterBasedOnDimensions = allSupportedFormats.filter({ (CMVideoFormatDescriptionGetDimensions($0.formatDescription).width == activeDimensions.width) && (CMVideoFormatDescriptionGetDimensions($0.formatDescription).height == activeDimensions.height) })
if filterBasedOnDimensions.isEmpty {
// Dimension not found. Required format not found to handle.
debugPrint("Dimension not found" as AnyObject)
return activeFormat
}
debugPrint("filterBasedOnDimensions - \(filterBasedOnDimensions)" as AnyObject)
let filterBasedOnMaxFrameRate = filterBasedOnDimensions.compactMap({ format in
let videoSupportedFrameRateRanges = format.videoSupportedFrameRateRanges
if !videoSupportedFrameRateRanges.isEmpty {
let contains = videoSupportedFrameRateRanges.contains(where: { Int($0.maxFrameRate) >= fpsToBeSupported })
if contains {
return format
} else {
return nil
}
} else {
return nil
}
})
debugPrint("allFormatsToBeSupported - \(filterBasedOnMaxFrameRate)" as AnyObject)
guard !filterBasedOnMaxFrameRate.isEmpty else {
debugPrint("Taking default active format as nothing found when filtered using desired FPS" as AnyObject)
return activeFormat
}
var formatToBeUsed: AVCaptureDevice.Format!
if let four_two_zero_v = filterBasedOnMaxFrameRate.first(where: { CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange}) {
// 'vide'/'420v'
formatToBeUsed = four_two_zero_v
} else {
// Take the first one from above array.
formatToBeUsed = filterBasedOnMaxFrameRate.first
}
do {
try self.videoDeviceInput.device.lockForConfiguration()
self.videoDeviceInput.device.activeFormat = formatToBeUsed
self.videoDeviceInput.device.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: Int32(fpsToBeSupported))
self.videoDeviceInput.device.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: Int32(fpsToBeSupported))
if videoDeviceInput.device.isFocusModeSupported(.continuousAutoFocus) {
self.videoDeviceInput.device.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
}
self.videoDeviceInput.device.unlockForConfiguration()
} catch let error {
debugPrint("\(error)" as AnyObject)
}
return formatToBeUsed
}
Hi,
I am recording a video at 240 FPS within my application and saving it to the Photos app. The recorded video retains 240 FPS in the Photos app. However, after trimming the video using the Photos app and importing it back into my app, the FPS is reduced to 30 FPS.
Steps to Reproduce:
Record a video inside the application at 240 FPS.
Save the recorded video to the Photos app.
Verify that the video retains 240 FPS in the Photos app.
Trim the video using the built-in Photos app editor.
Import the trimmed video back into the application.
The FPS of the imported video is now reduced to 30 FPS.
Code Used for Importing Video:
I am using the following code to fetch the video from the Photos app:
let options: PHVideoRequestOptions = PHVideoRequestOptions()
options.version = .current // Using `.original` preserves FPS, but I need `.current` for other changes
options.deliveryMode = .highQualityFormat
options.isNetworkAccessAllowed = true
PHImageManager.default().requestAVAsset(forVideo: self, options: options) { (avAsset, audioMix, info) in
if let urlAsset = avAsset as? AVURLAsset {
completionHandler(urlAsset.url, self)
} else {
self.askForOriginal(completionHandler: completionHandler)
}
}
Observations:
The original video retains 240 FPS until it is trimmed in the Photos app.
After trimming, the FPS automatically drops to 30 FPS when imported back into the app.
If I use options.version = .original, the FPS is preserved, but I need .current to apply other modifications.
Questions:
Is this an expected behavior of PHImageManager when requesting a video with options.version = .current?
Is there a way to preserve the original FPS while still using .current?
Are there any workarounds to extract the trimmed video without FPS reduction?
Any insights or solutions would be greatly appreciated. Thanks in advance!
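For anyone wanting to reproduce or diagnose this, the check I use is roughly the following (iOS 15+ async APIs) to confirm the frame rate of what PHImageManager hands back:

import AVFoundation

// Quick diagnostic: print the nominal frame rate of the returned asset.
func logFrameRate(of asset: AVAsset) async throws {
    if let track = try await asset.loadTracks(withMediaType: .video).first {
        let fps = try await track.load(.nominalFrameRate)
        print("Video track frame rate: \(fps) fps")
    }
}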
We are using AVAssetDownloadURLSession to download content with multiple audio tracks, but we are facing an issue where only one audio language is downloaded despite explicitly requesting multiple audio languages. All subtitle variants, however, are downloaded successfully.
Issue Details:
Observed Behaviour: When initiating a download using AVAssetDownloadURLSession, only one audio track (Hindi, in this case) is downloaded, even though the content contains multiple audio tracks.
Expected Behaviour: All requested audio tracks should be downloaded, similar to how subtitle variants are successfully downloaded.
Please find sample app implementation details: https://drive.google.com/file/d/1DLcBGNnuWFYsY0cipzxpIHqZYUDJujmN/view?usp=sharing
Manifest file for the asset looks something like below
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="A1",NAME="Hindi",LANGUAGE="hi",URI="indexHindi/Hindi.m3u8",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="A2",NAME="Bengali",LANGUAGE="bn",URI="indexBengali/Bengali.m3u8",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="A3",NAME="Kannada",LANGUAGE="kn",URI="indexKannada/Kannada.m3u8",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="A4",NAME="Malayalam",LANGUAGE="ml",URI="indexMalayalam/Malayalam.m3u8",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="A5",NAME="Tamil",LANGUAGE="ta",URI="indexTamil/Tamil.m3u8",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="A6",NAME="Telugu",LANGUAGE="te",URI="indexTelugu/Telugu.m3u8",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-STREAM-INF:BANDWIDTH=832196,AVERAGE-BANDWIDTH=432950,RESOLUTION=640x360,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A1",SUBTITLES="subs"
index-4k360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=832196,AVERAGE-BANDWIDTH=432950,RESOLUTION=640x360,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A2",SUBTITLES="subs"
index-4k360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=832196,AVERAGE-BANDWIDTH=432950,RESOLUTION=640x360,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A3",SUBTITLES="subs"
index-4k360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=832196,AVERAGE-BANDWIDTH=432950,RESOLUTION=640x360,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A4",SUBTITLES="subs"
index-4k360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=832196,AVERAGE-BANDWIDTH=432950,RESOLUTION=640x360,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A5",SUBTITLES="subs"
index-4k360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=832196,AVERAGE-BANDWIDTH=432950,RESOLUTION=640x360,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A6",SUBTITLES="subs"
index-4k360p/360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1317051,AVERAGE-BANDWIDTH=607343,RESOLUTION=854x480,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A1",SUBTITLES="subs"
index-4k480p/480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1317051,AVERAGE-BANDWIDTH=607343,RESOLUTION=854x480,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A2",SUBTITLES="subs"
index-4k480p/480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1317051,AVERAGE-BANDWIDTH=607343,RESOLUTION=854x480,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A3",SUBTITLES="subs"
index-4k480p/480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1317051,AVERAGE-BANDWIDTH=607343,RESOLUTION=854x480,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A4",SUBTITLES="subs"
index-4k480p/480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1317051,AVERAGE-BANDWIDTH=607343,RESOLUTION=854x480,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A5",SUBTITLES="subs"
index-4k480p/480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1317051,AVERAGE-BANDWIDTH=607343,RESOLUTION=854x480,FRAME-RATE=25.0,CODECS="hvc1.2.4.L93.b0,mp4a.40.2",AUDIO="A6",SUBTITLES="subs"
index-4k480p/480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1715498,AVERAGE-BANDWIDTH=717018,RESOLUTION=1024x576,FRAME-RATE=25.0,CODECS="hvc1.2.4.L123.b0,mp4a.40.2",AUDIO="A1",SUBTITLES="subs"
index-4k576p/576p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1715498,AVERAGE-BANDWIDTH=717018,RESOLUTION=1024x576,FRAME-RATE=25.0,CODECS="hvc1.2.4.L123.b0,mp4a.40.2",AUDIO="A2",SUBTITLES="subs"
index-4k576p/576p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1715498,AVERAGE-BANDWIDTH=717018,RESOLUTION=1024x576,FRAME-RATE=25.0,CODECS="hvc1.2.4.L123.b0,mp4a.40.2",AUDIO="A3",SUBTITLES="subs"
index-4k576p/576p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1715498,AVERAGE-BANDWIDTH=717018,RESOLUTION=1024x576,FRAME-RATE=25.0,CODECS="hvc1.2.4.L123.b0,mp4a.40.2",AUDIO="A4",SUBTITLES="subs"
index-4k576p/576p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1715498,AVERAGE-BANDWIDTH=717018,RESOLUTION=1024x576,FRAME-RATE=25.0,CODECS="hvc1.2.4.L123.b0,mp4a.40.2",AUDIO="A5",SUBTITLES="subs"
index-4k576p/576p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1715498,AVERAGE-BANDWIDTH=717018,RESOLUTION=1024x576,FRAME-RATE=25.0,CODECS="hvc1.2.4.L123.b0,mp4a.40.2",AUDIO="A6",SUBTITLES="subs"
index-4k576p/576p.m3u8
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",DEFAULT=YES,AUTOSELECT=YES,FORCED=NO,LANGUAGE="en",URI="subtitle_en/sub_en_vtt.m3u8"
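For reference, the approach we understood to be required for multiple audio renditions is an aggregate download task with one AVMediaSelection per language — a simplified sketch of what our sample app does (names and error handling reduced for brevity):

import AVFoundation

// Build one AVMediaSelection per audio option and hand them all to an
// aggregate download task, which should download every requested rendition.
func downloadAllAudio(session: AVAssetDownloadURLSession, asset: AVURLAsset) async throws {
    guard let group = try await asset.loadMediaSelectionGroup(for: .audible) else { return }
    var selections: [AVMediaSelection] = []
    for option in group.options {
        let selection = asset.preferredMediaSelection.mutableCopy() as! AVMutableMediaSelection
        selection.select(option, in: group)
        selections.append(selection)
    }
    let task = session.aggregateAssetDownloadTask(
        with: asset,
        mediaSelections: selections,
        assetTitle: "Sample",
        assetArtworkData: nil,
        options: nil
    )
    task?.resume()
}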
No video is playing, and the same error keeps appearing; I have tried everything, including resetting. This error started after the update and has been occurring for the last 10 days. Please correct it as soon as possible.
Error: The operation couldn't be completed. (CoreMediaErrorDomain error -42709.)
When we use AVPlayer to play DRM-encrypted streams on iOS 17.6.1, playback does not work normally, and there is a high probability of a system restart. These are the relevant core error logs:
error 14:47:53.323369+0800 audiomxd [AirPlayError] carManager_copyProperty_block_invoke:499: got error -12784/0xFFFFCE10 kCMBaseObjectError_PropertyNotFound
error 14:47:53.323414+0800 audiomxd [SPEndpointManagerFactory] SidePlay Endpoint Manager creation failed with -72390/0xFFFEE53A
error 14:47:53.364949+0800 audiomxd [APBrowserCarSessionHelper] [0xF6AA] [Bonjour/WiFi] Unrecognized ConnectivityHelper event 101
error 14:47:53.375313+0800 audiomxd AddInstanceForFactory: No factory registered for id <CFUUID 0xa5c5118c0> F8BB1C28-BAE8-11D6-9C31-00039315CD46
I have been using SDAVAssetExportSession to compress videos in an app I am building. Everything went very smoothly until I got my new iPhone 16: on that device, the Spatial Audio camera setting is turned on by default, and SDAVAssetExportSession starts to fail. I know it has something to do with the audio settings; the current setting is something like this:
exportSession.audioSettings = [
AVFormatIDKey: kAudioFormatMPEG4AAC,
AVNumberOfChannelsKey: 2,
AVSampleRateKey: 44100,
AVEncoderBitRateKey: 128000
]
These settings are also passed to the underlying AVAssetReader and AVAssetWriter objects. I am not experienced in this area, and I have had a hard time trying to figure this out.
Does anyone know how to set up AVAssetReader or AVAssetWriter to process video with spatial audio tracks? Thanks in advance.
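In case it helps others debug the same thing, my starting point was to check how many channels the source track actually carries, since my guess (not confirmed) is that hard-coding AVNumberOfChannelsKey: 2 against a multi-channel spatial track is what trips the reader/writer:

import AVFoundation

// Inspect the source audio format before choosing reader/writer settings.
func logAudioChannels(of asset: AVAsset) async throws {
    guard let track = try await asset.loadTracks(withMediaType: .audio).first else { return }
    for description in try await track.load(.formatDescriptions) {
        if let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description) {
            print("Channels per frame: \(asbd.pointee.mChannelsPerFrame)")
        }
    }
}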
I'm using an iPhone 15 Pro, which has switched from Lightning to USB Type-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types.
🔗 Apple's Official Documentation:
https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external
The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests:
On iPhone, discoverySession does not detect any external devices.
On iPad, discoverySession can detect external devices without any issues.
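For reference, the discovery call in question is essentially this (reconstructed, not my exact code):

import AVFoundation

// Look for externally connected video devices (e.g., a UVC camera over USB-C).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)
print("External devices found: \(discovery.devices)")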
My Question:
Does iPhone USB-C actually support external devices (e.g., UVC cameras)?
If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
It's 2025, and I see that trends in video storage and streaming have changed significantly. Nowadays, a CDN combined with domain-based video protection is the most popular solution.
Does anyone have more insights into this technology or real-world experience with it?
We're experiencing significant issues with AVPlayer when attempting to play partially downloaded HLS content in offline mode. Our app downloads HLS video content for offline viewing, but users encounter the following problems:
Excessive Loading Delay: When offline, AVPlayer attempts to load resources for up to 60 seconds before playing the locally available segments
Asset Loss: Sometimes AVPlayer completely loses the asset reference and fails to play the video on subsequent attempts
Inconsistent Behavior: The same partially downloaded asset might play immediately in one session but take 30+ seconds in another
Network Activity Despite Offline Settings: Despite configuring options to prevent network usage, AVPlayer still appears to be attempting network connections
These issues severely impact our offline user experience, especially for users with intermittent connectivity.
Technical Details
Implementation Context
Our app downloads HLS videos for offline viewing using AVAssetDownloadTask. We store the downloaded content locally and maintain a dictionary mapping of file identifiers to local paths. When attempting to play these videos offline, we experience the described issues.
Current Implementation
Here's our current implementation for playing the videos:
- (void)presentNativeAvplayerForVideo:(Video *)video navContext:(NavContext *)context {
NSString *localPath = video.localHlsPath;
if (localPath) {
// A plain filesystem path needs a file URL; -URLWithString: on a bare path produces a schemeless URL that AVFoundation cannot resolve.
NSURL *videoURL = [localPath hasPrefix:@"file://"] ? [NSURL URLWithString:localPath] : [NSURL fileURLWithPath:localPath];
NSDictionary *options = @{
AVURLAssetPreferPreciseDurationAndTimingKey: @YES,
AVURLAssetAllowsCellularAccessKey: @NO,
AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
AVURLAssetAllowsConstrainedNetworkAccessKey: @NO
};
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:options];
AVPlayerViewController *playerViewController = [[AVPlayerViewController alloc] init];
NSArray *keys = @[@"duration", @"tracks"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
playerViewController.player = player;
[player play];
});
}];
playerViewController.modalPresentationStyle = UIModalPresentationFullScreen;
[context presentViewController:playerViewController animated:YES completion:nil];
}
}
Attempted Solutions
We've tried several approaches to mitigate these issues:
Modified Asset Options:
NSDictionary *options = @{
AVURLAssetPreferPreciseDurationAndTimingKey: @NO, // Changed to NO
AVURLAssetAllowsCellularAccessKey: @NO,
AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
AVURLAssetAllowsConstrainedNetworkAccessKey: @NO,
AVAssetReferenceRestrictionsKey: @(AVAssetReferenceRestrictionForbidRemoteReferenceToLocal)
};
Skipped Asynchronous Key Loading:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset automaticallyLoadedAssetKeys:nil];
Modified Player Settings:
player.automaticallyWaitsToMinimizeStalling = NO;
[playerItem setPreferredForwardBufferDuration:2.0];
Added Network Resource Restrictions:
playerItem.canUseNetworkResourcesForLiveStreamingWhilePaused = NO;
Used File URLs Instead of HTTP URLs where possible
Despite these attempts, the issues persist.
Expected vs. Actual Behavior
Expected Behavior:
AVPlayer should immediately begin playback of locally available HLS segments
When offline, it should not attempt to load from network for more than a few seconds
Once an asset is successfully played, it should be reliably available for future playback
Actual Behavior:
AVPlayer waits 10-60 seconds before playing locally available segments
Network activity is observed despite all network-restricting options
Sometimes the player fails completely to play a previously available asset
Behavior is inconsistent between playback attempts with the same asset
Questions:
What is the recommended approach for playing partially downloaded HLS content offline with minimal delay?
Is there a way to force AVPlayer to immediately use available local segments without attempting to load from the network?
Are there any known issues with AVPlayer losing references to locally stored HLS assets?
What diagnostic steps would you recommend to track down the specific cause of these delays?
Does AVFoundation have specific timeouts for offline HLS playback that could be configured?
Any guidance would be greatly appreciated as this issue is significantly impacting our user experience.
Device Information
iOS Versions Tested: 14.5 - 18.1
Device Models: iPhone 12, iPhone 13, iPhone 14, iPhone 15
Xcode Version: 15.3-16.2.1
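One more detail on how we resolve paths, since absolute container paths go stale between launches: we believe the supported pattern is to persist the relative path that the download delegate provides and rebuild a file URL against the current home directory at playback time, roughly like this:

import AVFoundation

// Persist location.relativePath from
// urlSession(_:assetDownloadTask:didFinishDownloadingTo:), then rebuild a
// file URL at playback time; the app container path changes between launches.
func offlineAsset(relativePath: String) -> AVURLAsset {
    let baseURL = URL(fileURLWithPath: NSHomeDirectory())
    let assetURL = baseURL.appendingPathComponent(relativePath)
    return AVURLAsset(url: assetURL)
}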
I am trying to achieve an animated gradient effect that changes over time based on the current seconds. I am using AVPlayer and AVMutableVideoComposition along with a custom compositor instruction and class to generate the effect. I didn't want to load any video file, but rather generate a custom 20-second video driven entirely by my own instructions, using Metal compute shaders to render the frames.
However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video, I get this warning in the console: Visual isTranslatable: NO; reason: observation failure: noObservations
Here is the screenshot:
My entire code:
import AVFoundation
import Metal
class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
var sourcePixelBufferAttributes: [String: Any]? = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
private var renderContext: AVVideoCompositionRenderContext?
private var metalDevice: MTLDevice!
private var metalCommandQueue: MTLCommandQueue!
private var metalLibrary: MTLLibrary!
private var metalPipeline: MTLComputePipelineState!
override init() {
super.init()
setupMetal()
}
func setupMetal() {
guard let device = MTLCreateSystemDefaultDevice(),
let queue = device.makeCommandQueue(),
let library = device.makeDefaultLibrary(),
let function = library.makeFunction(name: "gradientShader") else {
fatalError("Metal setup failed")
}
self.metalDevice = device
self.metalCommandQueue = queue
self.metalLibrary = library
self.metalPipeline = try? device.makeComputePipelineState(function: function)
}
func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
renderContext = newRenderContext
}
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
return
}
let time = Float(request.compositionTime.seconds)
renderGradient(to: metalTexture, time: time)
request.finish(withComposedVideoFrame: outputPixelBuffer)
}
private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
var texture: MTLTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
pixelFormat: .bgra8Unorm,
width: width,
height: height,
mipmapped: false
)
textureDescriptor.usage = [.shaderWrite, .shaderRead]
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
if let textureCache = createTextureCache(), let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
texture = CVMetalTextureGetTexture(cvTexture)
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
return texture
}
private func renderGradient(to texture: MTLTexture, time: Float) {
guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
commandEncoder.setComputePipelineState(metalPipeline)
commandEncoder.setTexture(texture, index: 0)
var mutableTime = time
commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)
let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
let threadGroups = MTLSize(
width: (texture.width + 15) / 16,
height: (texture.height + 15) / 16,
depth: 1
)
commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
commandEncoder.endEncoding()
commandBuffer.commit()
}
private func createTextureCache() -> CVMetalTextureCache? {
var cache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
return cache
}
private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
var cvTexture: CVMetalTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
cache,
pixelBuffer,
nil,
.bgra8Unorm,
width,
height,
0,
&cvTexture
)
return cvTexture
}
}
class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
var timeRange: CMTimeRange
var enablePostProcessing: Bool = true
var containsTweening: Bool = true
var requiredSourceTrackIDs: [NSValue]? = nil
var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
init(timeRange: CMTimeRange) {
self.timeRange = timeRange
}
}
func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
let composition = AVMutableComposition()
let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
videoComposition.renderSize = size
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
videoComposition.instructions = [instruction]
return videoComposition
}
#include <metal_stdlib>
using namespace metal;
kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
constant float &time [[buffer(0)]],
uint2 id [[thread_position_in_grid]]) {
float2 uv = float2(id) / float2(output.get_width(), output.get_height());
// Animated colors based on time
float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);
// Linear interpolation for gradient
float3 gradientColor = mix(color1, color2, uv.y);
output.write(float4(gradientColor, 1.0), id);
}
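Update: my current suspicion is that the freeze is not the isTranslatable warning (that appears to be Live Text noise) but the fact that the AVMutableComposition above is created and never given any duration, so the player item has a zero-length timeline and the compositor is never driven forward in time. I have not verified this yet; the sketch I am testing inserts an empty time range so the composition actually spans the 20 seconds:

import AVFoundation

// Unverified hypothesis: give the otherwise-empty composition a real duration
// so the player item's timeline spans the full 20 seconds.
let duration = CMTime(value: 20, timescale: 1)
let composition = AVMutableComposition()
composition.insertEmptyTimeRange(CMTimeRange(start: .zero, duration: duration))
let playerItem = AVPlayerItem(asset: composition)
playerItem.videoComposition = createGradientVideoComposition(
    duration: duration,
    size: CGSize(width: 1280, height: 720)
)
let player = AVPlayer(playerItem: playerItem)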
I was advised to post here by a Code-Level Support representative. Below is a copy of my initial issue report, and my minimally reproducible test project can be found at the following GitHub repository URL:
https://github.com/PierceLBrooks/vtUudSeiNalCmake
DESCRIPTION OF PROBLEM
When encoding H264 video codec data using the VTCompressionSession API available through the VideoToolbox framework on macOS, the resultant bitstream will invariably include Unregistered User Data SEI NAL units that carry the UUID "47564adc-5c4c-433f-94ef-c5113cd143a8".
The proprietary decoders we are working with currently struggle with filtering out these NAL units.
Can you explain what purpose this serves, what the meaning of the byte-wise unit payloads are, and which configuration settings the VideoToolbox encoder instance specifically depends upon for triggering the insertion of them?
STEPS TO REPRODUCE
1. Invoke the instantiation of a new VideoToolbox H264 encoder object by calling VTCompressionSessionCreate with appropriate configuration flags.
2. Push frames through the encoder, receiving their encoded byte buffer counterparts through an asynchronous callback.
3. Write that encoded data to some buffer which will contain the totality of the encoder's output.
4. Inspect the NAL units of the initial portion of this output bitstream buffer.
5. Observe the presence of at least one Unregistered User Data SEI NAL unit carrying the "47564adc-5c4c-433f-94ef-c5113cd143a8" UUID near the beginning of the output segment.
Dear Developers and DTS team,
I am writing to seek your expert guidance on a persistent memory-leak issue I have discovered while implementing video playback in a SwiftUI application.
Environment Details:
iOS 17+, Swift (SwiftUI, AVKit), Xcode 16.2
Target Devices:
iPhone 15 Pro (iOS 18.3.2)
iPhone 16 Plus (iOS 18.3.2)
Detailed Issue Description:
I am experiencing consistent memory leaks during video playback termination, both when using UIViewControllerRepresentable with AVPlayerViewController (FullscreenVideoPlayer) and when using the native VideoPlayer.
Code Context:
I have implemented the following approaches:
Added static func dismantleUIViewController(_:coordinator:)
Included deinit in Coordinator
Utilized both UIViewControllerRepresentable and native VideoPlayer
/// A custom AVPlayer integrated with AVPlayerViewController for fullscreen video playback.
///
/// - Parameters:
/// - videoURL: The URL of the video to be played.
struct FullscreenVideoPlayer: UIViewControllerRepresentable {
// @Binding something for controlling fullscreen
let videoURL: URL?
func makeUIViewController(context: Context) -> AVPlayerViewController {
let controller = AVPlayerViewController()
controller.delegate = context.coordinator
print("AVPlayerViewController created: \(String(describing: controller))")
return controller
}
/// Updates the `AVPlayerViewController` with the provided video URL and playback state.
///
/// - Parameters:
/// - uiViewController: The `AVPlayerViewController` instance to update.
/// - context: The SwiftUI context for updates.
func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
guard let videoURL else {
print("Invalid videoURL")
return
}
// Initialize AVPlayer if it's not already set
if uiViewController.player == nil || uiViewController.player?.currentItem == nil {
uiViewController.player = AVPlayer(url: videoURL)
print("AVPlayer updated: \(String(describing: uiViewController.player))")
}
// Handle playback state
}
func makeCoordinator() -> Coordinator {
Coordinator(parent: self)
}
static func dismantleUIViewController(_ uiViewController: AVPlayerViewController, coordinator: Coordinator) {
uiViewController.player?.pause()
uiViewController.player?.replaceCurrentItem(with: nil)
uiViewController.player = nil
print("dismantleUIViewController called for \(String(describing: uiViewController))")
}
}
extension FullscreenVideoPlayer {
class Coordinator: NSObject, AVPlayerViewControllerDelegate {
var parent: FullscreenVideoPlayer
init(parent: FullscreenVideoPlayer) {
self.parent = parent
}
deinit {
print("Coordinator deinitialized")
}
}
}
struct ContentView: View {
private let videoURL: URL? = URL(string: "https://interactive-examples.mdn.mozilla.net/media/cc0-videos/flower.mp4")
var body: some View {
NavigationStack {
Text("My Userful View")
List {
Section("VideoPlayer") {
NavigationLink("FullscreenVideoPlayer") {
FullscreenVideoPlayer(videoURL: videoURL)
.frame(height: 500)
}
NavigationLink("Native VideoPlayer") {
VideoPlayer(player: .init(url: videoURL!))
.frame(height: 500)
}
}
}
}
}
}
Reproducibility Steps:
Run application on target devices
Scenario A - FullscreenVideoPlayer:
Tap FullscreenVideoPlayer
Play video to completion
Repeat process 5 times
Scenario B - VideoPlayer:
Navigate back to main screen
Tap Video Player
Play video to completion
Repeat process 5 times
Observed Memory Leak Characteristics:
Per Iteration (Debug Memory Graph):
4 instances of NSMutableDictionary (Storage) leaked
4 instances of __NSDictionaryM leaked
4 × 112-byte malloc blocks leaked
Cumulative Effects:
Debug console prints "dismantleUIViewController called for <AVPlayerViewController: 0x{String}>" and "Coordinator deinitialized" when navigating back to the main screen
After multiple iterations, leak instances double
Specific Questions:
What underlying mechanisms are causing these memory leaks in UIViewControllerRepresentable and VideoPlayer?
What are the recommended strategies to comprehensively prevent and resolve these memory management issues?
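For completeness, here is the variant I am also evaluating. The `VideoPlayer(player: .init(url: videoURL!))` line above constructs a fresh AVPlayer on every body evaluation, which I suspect contributes to the accumulation; this sketch keeps a single player in view state and tears it down on disappear (a starting point, not a confirmed fix):

import SwiftUI
import AVKit

// Keep one AVPlayer alive in view state instead of building one in `body`,
// and release its item when the view goes away.
struct StatefulVideoPlayer: View {
    @State private var player: AVPlayer

    init(url: URL) {
        _player = State(initialValue: AVPlayer(url: url))
    }

    var body: some View {
        VideoPlayer(player: player)
            .frame(height: 500)
            .onDisappear {
                player.pause()
                player.replaceCurrentItem(with: nil)
            }
    }
}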