Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

iPhone 15 Pro Has USB-C, but AVCaptureDevice Doesn't Support External Devices?
I'm using an iPhone 15 Pro, which has switched from Lightning to USB Type-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types. 🔗 Apple's Official Documentation: https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests: On iPhone, discoverySession does not detect any external devices. On iPad, discoverySession can detect external devices without any issues. My Question: Does iPhone USB-C actually support external devices (e.g., UVC cameras)? If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
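For reference, a minimal sketch of the kind of external-device discovery the post describes; the device type and discovery-session API are the public AVFoundation symbols, everything else is illustrative:

    import AVFoundation

    // Requires iOS/iPadOS 17 or later; .external is the device type documented to cover
    // external cameras such as UVC devices attached over USB-C.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )

    for device in discovery.devices {
        print("Found external device:", device.localizedName)
    }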
1
0
371
Mar ’25
Configuring CaptureVideoDelegate to avoid gamma/transfer function
I'm working on an application that uses the iPhone camera for scientific purposes - and, as a result, would like to receive video in as unprocessed a format as possible. In particular, I'm interested in getting pixel buffers that contain pretty much the Bayer data as the sensor sees it - with the minimum color processing possible. Currently we configure the AVCaptureDevice to fix the focus and exposure, use a low ISO with no gain, and set the white balance gains to 1. AVCaptureVideoDataOutput is using 32BGRA. What I'd like to do is remove any additional color and brightness processing such that the data is effectively processed with a linear transfer function (i.e. a gamma of 1). I thought that this might be down to the AVCaptureDevice activeColorSpace - we currently use P3_D65 for this. But there only seem to be a few choices (e.g. sRGB, HLG_BT2020), all of which I think affect the gamma. So: is it possible to control or specify the gamma / transfer function when using CaptureVideoDelegate? If not, does one of the color space settings have a defined gamma function so that I can effectively reverse it from the pixel data without losing too much information? Or is there a better way to capture video-ish speed images (15-30fps) from the camera sensor that skips processing like this? Many thanks for any suggestions.
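For reference, a minimal configuration sketch along the lines described in the post (mode-support checks are omitted for brevity, and whether this actually removes the tone/gamma processing is exactly the open question):

    import AVFoundation

    // Illustrative setup: fixed focus, custom exposure at minimum ISO, unity white-balance
    // gains, P3_D65 color space, BGRA data output. Names are placeholders; support checks
    // (isFocusModeSupported, isExposureModeSupported, etc.) should be added in real code.
    func configure(device: AVCaptureDevice, output: AVCaptureVideoDataOutput) throws {
        try device.lockForConfiguration()
        device.focusMode = .locked
        device.setExposureModeCustom(duration: AVCaptureDevice.currentExposureDuration,
                                     iso: device.activeFormat.minISO,
                                     completionHandler: nil)
        let unityGains = AVCaptureDevice.WhiteBalanceGains(redGain: 1, greenGain: 1, blueGain: 1)
        device.setWhiteBalanceModeLocked(with: unityGains, completionHandler: nil)
        device.activeColorSpace = .P3_D65
        device.unlockForConfiguration()

        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                kCVPixelFormatType_32BGRA]
    }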
3
0
75
Mar ’25
Case-ID: 12759603: Memory Leak in UIViewControllerRepresentable and VideoPlayer
Dear Developers and DTS team, I am writing to seek your expert guidance on a persistent memory leak issue I've discovered while implementing video playback in a SwiftUI application.

Environment Details: iOS 17+, Swift (SwiftUI, AVKit), Xcode 16.2. Target Devices: iPhone 15 Pro (iOS 18.3.2), iPhone 16 Plus (iOS 18.3.2).

Detailed Issue Description: I am experiencing consistent memory leaks when using UIViewControllerRepresentable with AVPlayerViewController for FullscreenVideoPlayer and the native VideoPlayer during video playback termination.

Code Context: I have implemented the following approaches: added static func dismantleUIViewController(_:coordinator:), included deinit in the Coordinator, and utilized both UIViewControllerRepresentable and the native VideoPlayer.

/// A custom AVPlayer integrated with AVPlayerViewController for fullscreen video playback.
///
/// - Parameters:
///   - videoURL: The URL of the video to be played.
struct FullscreenVideoPlayer: UIViewControllerRepresentable {
    // @Binding something for controlling fullscreen
    let videoURL: URL?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.delegate = context.coordinator
        print("AVPlayerViewController created: \(String(describing: controller))")
        return controller
    }

    /// Updates the `AVPlayerViewController` with the provided video URL and playback state.
    ///
    /// - Parameters:
    ///   - uiViewController: The `AVPlayerViewController` instance to update.
    ///   - context: The SwiftUI context for updates.
    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        guard let videoURL else {
            print("Invalid videoURL")
            return
        }
        // Initialize AVPlayer if it's not already set
        if uiViewController.player == nil || uiViewController.player?.currentItem == nil {
            uiViewController.player = AVPlayer(url: videoURL)
            print("AVPlayer updated: \(String(describing: uiViewController.player))")
        }
        // Handle playback state
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }

    static func dismantleUIViewController(_ uiViewController: AVPlayerViewController, coordinator: Coordinator) {
        uiViewController.player?.pause()
        uiViewController.player?.replaceCurrentItem(with: nil)
        uiViewController.player = nil
        print("dismantleUIViewController called for \(String(describing: uiViewController))")
    }
}

extension FullscreenVideoPlayer {
    class Coordinator: NSObject, AVPlayerViewControllerDelegate {
        var parent: FullscreenVideoPlayer

        init(parent: FullscreenVideoPlayer) {
            self.parent = parent
        }

        deinit {
            print("Coordinator deinitialized")
        }
    }
}

struct ContentView: View {
    private let videoURL: URL? = URL(string: "https://interactive-examples.mdn.mozilla.net/media/cc0-videos/flower.mp4")

    var body: some View {
        NavigationStack {
            Text("My Useful View")
            List {
                Section("VideoPlayer") {
                    NavigationLink("FullscreenVideoPlayer") {
                        FullscreenVideoPlayer(videoURL: videoURL)
                            .frame(height: 500)
                    }
                    NavigationLink("Native VideoPlayer") {
                        VideoPlayer(player: .init(url: videoURL!))
                            .frame(height: 500)
                    }
                }
            }
        }
    }
}

Reproducibility Steps: Run the application on the target devices. Scenario A - FullscreenVideoPlayer: tap FullscreenVideoPlayer, play the video to completion, repeat the process 5 times. Scenario B - VideoPlayer: navigate back to the main screen, tap Native VideoPlayer, play the video to completion, repeat the process 5 times.

Observed Memory Leak Characteristics: per iteration (Debug Memory Graph), 4 instances of NSMutableDictionary (Storage) leaked, 4 instances of __NSDictionaryM leaked, and 4 × 112-byte malloc blocks leaked. Cumulative effects: the debug console prints "dismantleUIViewController called for <AVPlayerViewController: 0x{String}>" and "Coordinator deinitialized" when navigating back to the main screen; after multiple iterations, the number of leaked instances doubles.

Specific Questions: What underlying mechanisms are causing these memory leaks in UIViewControllerRepresentable and VideoPlayer? What are the recommended strategies to comprehensively prevent and resolve these memory management issues?
1
0
121
Mar ’25
Converting a CVPixelBuffer in 2vuy format to a Metal Texture
I am working on a project for macOS where I am taking an AVCaptureSession's CVPixelBuffer and I need to convert it into a MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be a direct pixel-format match when converting to a Metal texture. I have been able to convert it to a texture, but the color space seems to be off, as it is rendering distorted colors with a double image. I believe 2vuy is a single-plane format and I have tried to account for that, but I am unaware of what is off. I have attached the CVPixelBuffer, the distorted MTLTexture, and a laundry list of errors. On iOS my conversions are fine; it is only the macOS 2vuy pixel format that seems to have issues. My code for the conversion is also attached. If there are any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to a MTLTexture I would greatly appreciate it. Many thanks. Conversion_Logs.txt ConversionCode.swift
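A sketch of one zero-copy route via CVMetalTextureCache, assuming the buffer is IOSurface-backed; whether .gbgr422 or .bgrg422 matches the '2vuy' byte order is worth verifying, and the sampled values are still YCbCr, so the shader must do the YCbCr-to-RGB conversion (possibly related to the distorted colors described):

    import CoreVideo
    import Metal

    // Sketch: wrap a '2vuy' (kCVPixelFormatType_422YpCbCr8) CVPixelBuffer as a Metal texture
    // through a CVMetalTextureCache instead of copying bytes. The .gbgr422 pixel format is an
    // assumption about the matching packed 4:2:2 layout; verify against .bgrg422.
    final class PixelBufferConverter {
        private var textureCache: CVMetalTextureCache?

        init?(device: MTLDevice) {
            guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
                return nil
            }
        }

        func makeTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
            guard let cache = textureCache else { return nil }
            let width = CVPixelBufferGetWidth(pixelBuffer)
            let height = CVPixelBufferGetHeight(pixelBuffer)
            var cvTexture: CVMetalTexture?
            let status = CVMetalTextureCacheCreateTextureFromImage(
                kCFAllocatorDefault, cache, pixelBuffer, nil,
                .gbgr422,   // packed 4:2:2; the shader still needs a YCbCr-to-RGB conversion
                width, height, 0, &cvTexture)
            guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
            return CVMetalTextureGetTexture(cvTexture)
        }
    }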
3
0
139
Apr ’25
AVCaptureSession video and audio out of sync
I'm using an AVCaptureSession to send video and audio samples to an AVAssetWriter. When I play back the resulting video, sometimes there is a significant lag between the audio and the video, so they're just not in sync. But sometimes they are, with the same code. If I look at the very first presentation timestamps of the buffers being sent to the delegate, via func captureOutput(_: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection), I see something like this: Adding audio samples for pts time 227711.0855328798, Adding video samples for pts time 227710.778785374. That is, the clock for audio vs video is behind: the first audio sample I receive is at 11.08 something, while the first video sample is earlier in time, at 10.778 something. The times are the presentation timestamps of the buffers, and the outputPresentationTimeStamp is the exact same number. It feels like the "video" and "audio" clocks are just mismatched. This doesn't always happen: sometimes they're synced, sometimes they're not. Any ideas? The device I'm recording from is a webcam on iPadOS, connected via the USB-C port.
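For context, a sketch of the capture-to-writer hookup being described, with one common arrangement in which the writer's session starts at the first video timestamp; all names are placeholders and this is a sketch, not a confirmed fix for the reported drift:

    import AVFoundation

    // Sketch: feed AVCaptureSession sample buffers into an AVAssetWriter. Assumes
    // writer.startWriting() has already been called and both outputs deliver on the
    // same serial queue.
    final class CaptureRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate,
                                 AVCaptureAudioDataOutputSampleBufferDelegate {
        let writer: AVAssetWriter
        let videoInput: AVAssetWriterInput
        let audioInput: AVAssetWriterInput
        private var sessionStarted = false

        init(writer: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
            self.writer = writer
            self.videoInput = videoInput
            self.audioInput = audioInput
            super.init()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

            if !sessionStarted {
                // Start the timeline only once the video stream has produced its first frame.
                guard output is AVCaptureVideoDataOutput else { return }
                writer.startSession(atSourceTime: pts)
                sessionStarted = true
            }

            let input = (output is AVCaptureVideoDataOutput) ? videoInput : audioInput
            if input.isReadyForMoreMediaData {
                input.append(sampleBuffer)
            }
        }
    }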
3
0
80
Apr ’25
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixel buffer using CVPixelBufferCreate with the following attributes: kCVPixelBufferIOSurfacePropertiesKey as String: true, kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true. When I copy the data from the vImage buffer "rotatedImageBuffer", I get the following error: Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000). I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error. However, CVPixelBufferCreateWithBytes does not let you include attributes (see the linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.

// Copy to pixel buffer
let attributes: NSDictionary = [
    true : kCVPixelBufferIOSurfacePropertiesKey,
    true : kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, attributes, &colorBuffer)
//let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer) // does NOT produce the error, but also does not have attributes

guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
    print("Failed to create buffer")
    return
}

let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
    print("Failed to lock base address")
    return
}

let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
//memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error

CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
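For comparison, a sketch of the same create-and-copy flow with the attribute dictionary written in key-to-value order and the copy bounded by the destination buffer's own bytes-per-row; names other than the CoreVideo/vImage APIs are placeholders:

    import Accelerate
    import CoreVideo

    // Sketch: create an IOSurface-backed BGRA pixel buffer and copy a vImage_Buffer into it,
    // copying row by row because the destination's bytesPerRow may be padded and can differ
    // from the source's rowBytes.
    func makePixelBuffer(from source: vImage_Buffer) -> CVPixelBuffer? {
        let attributes: [CFString: Any] = [
            kCVPixelBufferIOSurfacePropertiesKey: [String: Any]()
        ]
        var buffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(source.width), Int(source.height),
                                         kCVPixelFormatType_32BGRA,
                                         attributes as CFDictionary, &buffer)
        guard status == kCVReturnSuccess, let buffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let dstBase = CVPixelBufferGetBaseAddress(buffer) else { return nil }
        let dstRowBytes = CVPixelBufferGetBytesPerRow(buffer)
        let copyBytesPerRow = min(dstRowBytes, source.rowBytes)
        for row in 0..<Int(source.height) {
            memcpy(dstBase.advanced(by: row * dstRowBytes),
                   source.data.advanced(by: row * source.rowBytes),
                   copyBytesPerRow)
        }
        return buffer
    }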
1
0
93
Apr ’25
Save MPEG-TS (h264 or HEVC) video stream using AVAssetWriter.
I'm capturing a video stream from a GoPro camera (I demux UDP MPEG-TS packets) and create CMSampleBuffers from them; this works fine when I display them using CMSampleBufferLayer. However, when I dump them to disk using AVAssetWriter and then play it back with AVPlayer, AVPlayer has problems with scrubbing; it also cannot render previous frames, it needs to go back to key frames. Also, thumbnails generated with AVAssetImageGenerator are mostly distorted and green, even though I set the requestedTimeToleranceAfter longer than the keyframe interval. When I re-encode the saved video once again with AVAssetExportSession and play it back, I can scrub the video just fine. Is it because re-transcoding adds additional metadata that enables generating frames when rewinding and scrubbing the video? If so, is there a way to achieve it with AVAssetWriter without much of a time penalty? I need the dump/save operation to be very fast. I also considered the following: instead of demuxing the video and creating CMSampleBuffers, maybe I could directly dump the stream to disk and somehow add moov atoms with timing information. Would this approach work? If so, where can I find information on how to do it? Thank you!
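For reference, a minimal sketch of the dump path being described: an AVAssetWriter input in passthrough mode (outputSettings: nil) that appends the demuxed CMSampleBuffers as they arrive. The class name and the QuickTime file-type choice are assumptions:

    import AVFoundation

    // Sketch: write already-encoded (H.264/HEVC) sample buffers straight to disk without
    // re-encoding. Assumes each CMSampleBuffer carries a valid format description and
    // increasing presentation timestamps.
    final class StreamDumper {
        private let writer: AVAssetWriter
        private let input: AVAssetWriterInput
        private var started = false

        init(outputURL: URL, formatDescription: CMFormatDescription) throws {
            writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            // outputSettings: nil means passthrough; the writer will not transcode.
            input = AVAssetWriterInput(mediaType: .video,
                                       outputSettings: nil,
                                       sourceFormatHint: formatDescription)
            input.expectsMediaDataInRealTime = true
            writer.add(input)
        }

        func append(_ sampleBuffer: CMSampleBuffer) {
            if !started {
                guard writer.startWriting() else { return }
                writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                started = true
            }
            if input.isReadyForMoreMediaData {
                input.append(sampleBuffer)
            }
        }

        func finish(completion: @escaping () -> Void) {
            input.markAsFinished()
            writer.finishWriting(completionHandler: completion)
        }
    }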
3
0
100
Apr ’25
videoCaptureQueue makes the app crash on iOS 18.4.1
Hi all, I have a problem on iOS 18.4.1. I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1. I tried the following sample code. However, after the app runs for around 30 seconds to 1 minute, it crashes. When I use another iPad on iOS 17, it does not have the same problem. https://developer.apple.com/documentation/createml/creating-an-action-classifier-model https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview
6
0
106
May ’25
FxPlug SDK 4.3.2 causes dyld errors when loaded on versions of macOS prior to 14.6
FxPlug is one of Apple’s official SDKs, recently updated to version 4.3.2. In theory the SDK should guarantee third-parties can build plug-ins that are backward compatible with older versions of Final Cut Pro, Motion and Compressor. FxPlug SDK includes two frameworks that third-party developers like me end up bundling inside our third-party plugins: FxPlug.framework and PlugInManager.framework. Behind the scenes, the SDK relies on PlugInKit, but the FxPlug.framework provides abstractions so that third-parties don't have to handle the intricacies of XPC directly. The most recent version of FxPlug.framework included with the SDK was possibly built with an error: the Info.plist shows a LSMinimumSystemVersion entry of 14.6, suggesting the binary may have been compiled and linked with MACOSX_DEPLOYMENT_TARGET set to 14.6 by accident. The problem: when older versions of Final Cut Pro or Motion load a third-party plugin (itself built with the appropriate deployment target, macOS 11 or 12, for example) on pre-macOS 14.6, the dynamic linker immediately loads Apple’s own FxPlug.framework, but this causes the process to crash immediately:

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   libobjc.A.dylib     0x7ff81e065955 map_images_nolock + 5399
1   libobjc.A.dylib     0x7ff81e0643d6 map_images + 67
2   dyld                0x10bd551fb invocation function for block in dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 275
3   dyld                0x10bd506c9 dyld4::RuntimeState::withLoadersReadLock(void () block_pointer) + 41
4   dyld                0x10bd550e2 dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 82
5   dyld                0x10bd68d45 dyld4::APIs::_dyld_objc_notify_register(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 79
6   libobjc.A.dylib     0x7ff81e064244 _objc_init + 1279
7   libdispatch.dylib   0x7ff81e01d993 _os_object_init + 13
8   libdispatch.dylib   0x7ff81e02b1b8 libdispatch_init + 311
9   libSystem.B.dylib   0x7ff828fd585f libSystem_initializer + 238
10  dyld                0x10bd5ae4f invocation function for block in dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 182
11  dyld                0x10bd81aad invocation function for block in dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 242
12  dyld                0x10bd78e26 invocation function for block in dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 557
13  dyld                0x10bd47db3 dyld3::MachOFile::forEachLoadCommand(Diagnostics&, void (load_command const*, bool&) block_pointer) const + 129
14  dyld                0x10bd78bb7 dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 179
15  dyld                0x10bd81604 dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 466
16  dyld                0x10bd5ad82 dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 144
17  dyld                0x10bd6165a dyld4::PrebuiltLoader::runInitializers(dyld4::RuntimeState&) const + 30
18  dyld                0x10bd6e76e dyld4::APIs::runAllInitializersForMain() + 38
19  dyld                0x10bd4c38d dyld4::prepare(dyld4::APIs&, dyld3::MachOAnalyzer const*) + 3443
20  dyld                0x10bd4b4e4 start + 388

Can someone at Apple with the right domain expertise confirm that this is the type of crash you would see because the framework was built assuming it would run on macOS 14.6 and later, and when facing an older environment (e.g. ObjC runtime) it lacks extra code that would ensure backward compatibility with the earlier ObjC runtime found on macOS 12.x?
2
0
91
Jun ’25
Importing pictures with non-QT metadata
Movies taken with Android phones store their location metadata (and probably others) in ways that are ignored by Apple's ecosystem (QuickTime Player, Photos.app). I am considering creating a Spotlight importer so that this metadata is available to the system. But I have a couple of questions: Can a Spotlight importer add new data (like location) to the data that the standard importer already captured? Or would the new importer need to take over the whole data gathering? If so, would macOS allow that? Would that Spotlight importer be somehow used by e.g. Photos.app and QT Player to capture the location? Or would this end up in Spotlight "knowing" the location but Photos.app ignoring it? If so, maybe there is something more broadly useful than a Spotlight importer?
2
0
59
May ’25
AVPlayer freezes after ReplaceCurrentItemWithPlayerItem + immediate seek on iOS 18.4 (streaming only)
Starting in iOS 18.4 (and still in the iOS 18.5 beta), the AVPlayer seems to freeze when we replace the current AVPlayerItem (replaceCurrentItemWithPlayerItem) and then call seek very shortly afterwards (seekToTime:toleranceBefore:toleranceAfter: / seek(to:)). Subsequent calls to play then have no effect. However, scrubbing to seek afterwards does seem to work, and changing the playback rate (i.e. fast forward) also tends to clear up the frozen state. Our primary workflow involves video playback, replacing video to show new clips, and in some cases seeking to specific frames. This appears to occur only while streaming video; all reports indicate that locally downloaded video playback remains fine. This same code path has worked without issue on 17.x and 18.3.2, and for years before that. What is particularly strange is that time observers log that video is still playing or feeding frames. The reported status is readyToPlay, isPlaybackLikelyToKeepUp is true, and there are no indications of stalling or buffering. A similar issue affects our web application in Safari. On Sonoma with Safari 17.x there is no issue; after updating to macOS Sequoia 15.4.1 and Safari 18.4, we observe similar freezing. The same does not occur in Chrome or other tested browsers. The release notes for Safari 18.4 contain several "fix" notes that seem related to what we are now experiencing: https://developer.apple.com/documentation/safari-release-notes/safari-18_4-release-notes
"Fixed an issue where playback doesn't always resume after a seek. (140097993)"
"Fixed playing video generating non-monotonic 'timeupdate' events. (142275184) (FB16222910)"
"Fixed websites calling play() during a seek() is allowed by the specification so that the play event is fired even if the seek hasn't completed. (142517488)"
"Fixed seek not completing for WebM under some circumstances. (143372794)"
"Fixed MediaRecorderPrivateEncoder writing frames out of order. (143956063)"
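A minimal sketch of the replace-then-seek sequence described above (the URL and times are placeholders):

    import AVFoundation

    // Repro sketch: replace the current item, then seek almost immediately with zero tolerance.
    // On iOS 18.4+ the poster reports the player can freeze after this, even though its status
    // is readyToPlay and isPlaybackLikelyToKeepUp is true.
    let player = AVPlayer()
    let item = AVPlayerItem(url: URL(string: "https://example.com/stream.m3u8")!)

    player.replaceCurrentItem(with: item)
    player.seek(to: CMTime(seconds: 12.5, preferredTimescale: 600),
                toleranceBefore: .zero,
                toleranceAfter: .zero) { _ in
        player.play()   // reported to have no effect when the freeze occurs
    }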
2
0
142
May ’25
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:

let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds

to get the “capture” timestamp, and I also extract the mach-absolute display time:

let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]]
let displayMach = attachments?.first?[.displayTime] as? UInt64
// convert mach ticks to seconds...

Then I compare both against the current time:

let now = CACurrentMediaTime()
let latencyFromPTS = now - pts
let latencyFromDisplay = now - displayTimeSeconds

But I consistently see negative values for both calculations, i.e. the PTS or displayTime often end up numerically larger than now. This suggests that the “presentation timestamp” and the mach-absolute display time are coming from a different epoch or clock domain than CACurrentMediaTime(). Questions: Which clocks/epochs does ScreenCaptureKit use for PTS and for .displayTime? How can I align these timestamps with CACurrentMediaTime() so that now - pts and now - displayTime reliably yield non-negative real-world latencies? Any pointers on the correct clock conversions or APIs to use would be greatly appreciated.
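For the "convert mach ticks to seconds" step, a sketch of the usual mach_timebase_info conversion; treating the .displayTime attachment as mach_absolute_time ticks is an assumption worth verifying:

    import Darwin

    // Convert mach-absolute ticks to seconds. CACurrentMediaTime() is mach_absolute_time()
    // expressed in seconds, so values on the same clock should be directly comparable once
    // converted.
    func machTicksToSeconds(_ ticks: UInt64) -> Double {
        var timebase = mach_timebase_info_data_t()
        mach_timebase_info(&timebase)
        let nanos = Double(ticks) * Double(timebase.numer) / Double(timebase.denom)
        return nanos / 1_000_000_000
    }

    // Hypothetical usage with the attachment value from the post:
    // let displayTimeSeconds = machTicksToSeconds(displayMach ?? 0)
    // let latencyFromDisplay = CACurrentMediaTime() - displayTimeSeconds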
1
0
113
May ’25
WideFOV - APMP - Stereo
Does anyone have a template of an Apple Projected Media Profile format description, or a file of a stereo wideFOV video? Use case: I have 2 compatible cameras that I stereo-sync, and I want to move the projection information from the compatible video to the spatial video that combines them. Every version I can come up with crashes the AVP, and when viewing as Spatial in Tahoe I just get a black screen.
4
0
142
Jun ’25
Obtain the screen rotation direction in the background
I use ReplayKit for system-level screen recording. I want to determine whether the screen is in landscape mode from the CMSampleBuffer delivered in the callback, but the CMSampleBuffer does not carry this information. The other APIs related to obtaining the screen orientation are also restricted in the background. I want to know whether the screen rotation direction can be obtained in real time while running in the background.
1
0
58
Jun ’25
WideCamera consumes more CPU than telePhotoCamera
I have been taking images from the iOS video camera feed and have encountered an issue. When you take images from the wide camera, this consumes about half the phone's CPU. The same is not the case when you take images from the telephoto camera's video stream. Is there a way of disabling the extra processing that is being done?
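For context, a sketch of the two devices being compared, plus one processing knob (video stabilization) that can be turned off on a data output's connection; whether stabilization explains the CPU difference here is only an assumption, and the connection only exists once the output has been added to a configured session:

    import AVFoundation

    // The two capture devices the post compares.
    let wide = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
    let tele = AVCaptureDevice.default(.builtInTelephotoCamera, for: .video, position: .back)
    print("wide:", wide?.localizedName ?? "none", "tele:", tele?.localizedName ?? "none")

    // One optional processing stage that can be disabled per connection (sketch only).
    let output = AVCaptureVideoDataOutput()
    if let connection = output.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }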
1
0
46
Jun ’25
AVFoundation — MJPEG Custom-Resolution UVC Stream Not Working on macOS
Hello, I'm Soonwon. We’re currently developing a UVC camera device and trying to stream MJPEG video via AVFoundation on macOS. However, we’re running into a problem with custom resolutions. When we try to use AVFoundation on macOS to capture MJPEG video at 1000x6000, the stream is not accepted or simply doesn’t work. Lower resolutions work fine. (Interestingly, using the same device on iPadOS, we can capture the 1000x6000 MJPEG stream successfully by using AVCaptureSessionPresetInputPriority.) Is there any way to receive custom-resolution MJPEG streams (like 1000x6000) from a UVC device using AVFoundation on macOS? Are there specific session presets, entitlements, or known limitations that affect MJPEG handling at custom resolutions on macOS? Does macOS handle MJPEG differently from iPadOS in AVFoundation? Any insight or guidance would be greatly appreciated. Thank you!

NSError *error = nil;
if ([selectedDevice lockForConfiguration:&error]) {
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetHigh;

    bool foundFormat = false;
    for (AVCaptureDeviceFormat *format in selectedDevice.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        FourCharCode pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription);
        foundFormat = true;
        if (dims.width == 1000 && dims.height == 6000) {
            selectedDevice.activeFormat = format;
            foundFormat = true;
            break;
        }
    }

    if (foundFormat == false) {
        NSLog(@"Failed to foundFormat : ");
        [session commitConfiguration];
        return false;
    }

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:selectedDevice error:&error];
    if (error || ![session canAddInput:input]) {
        NSLog(@"Failed to add video input: %@", error.localizedDescription);
        [session commitConfiguration];
        return false;
    }
    [session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
    [output setSampleBufferDelegate:delegate queue:queue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }

    [session commitConfiguration];
    [selectedDevice unlockForConfiguration];
} else {
    NSLog(@"Failed to lock device for configuration: %@", error.localizedDescription);
}
// start~
1
0
361
Jul ’25
Play videos in WebM format on iOS devices
I would like to play videos in webM format on my iPhone. I understand that it is basically impossible to play videos in webM format on an iPhone, but is there any way to display videos in webM format? I would like to know if there is an official Swift SDK or development kit released by Apple. Or if there are any third-party products, please let me know.
1
0
266
Jul ’25
AVAssetReaderOutput.Provider Missing symbols
Recurring crash on install of any app that uses the new sourceVideoTrackProvider.next():

dyld[41966]: Symbol not found: _$sSo19AVAssetReaderOutputC12AVFoundationE8ProviderC4nextxSgyYaKFTjTu
  Referenced from: <79AA2BE0-A6B4-32F5-A804-E84BBE5D1AEA> /Users/<username>/Library/Developer/Xcode/DerivedData/TrackProviderCrash-bbbhjptcxnmfdcackxtpucnunxyc/Build/Products/Debug-maccatalyst/TrackProviderCrash.app/Contents/MacOS/TrackProviderCrash.debug.dylib
  Expected in: <1B847AF9-7973-3B28-95C2-09E73F6DD50B> /usr/lib/swift/libswiftAVFoundation.dylib

Can be reproduced with the current Xcode Beta 4 by running the following sample on Mac Catalyst and macOS: https://developer.apple.com/documentation/AVFoundation/converting-projected-video-to-apple-projected-media-profile The crash goes away if you comment out lines 154-158 and 164-170, which are:

while let sampleBuffer = try await sourceVideoTrackProvider.next() { /*other code*/ }

Can also be reproduced if you add the code below to a Mac Catalyst project:

import AVKit

let asset: AVURLAsset = .init(url: Bundle.main.url(forResource: "SomeVideo.mp4", withExtension: nil)!)
let videoReader = try! AVAssetReader(asset: asset)
let videoTracks = try! await asset.loadTracks(withMediaCharacteristic: .visual)

// Get the side-by-side video track.
let videoTrack = videoTracks.first!
let videoInputTrack = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)

let sourceVideoTrackProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>> = videoReader.outputProvider(for: videoInputTrack)

//Comment out this
while let sb = try! await sourceVideoTrackProvider.next() {
}
1
0
586
Jul ’25
AVPlayerViewController `customInfoViewControllers` crash/workaround on tvOS 26
One thing I've noticed on tvOS 26 is that if you try to set the AVPlayerViewController customInfoViewControllers property while the Content Tabs are on screen, your app will crash.

*** Terminating app due to uncaught exception 'UIViewControllerHierarchyInconsistency', reason: 'trying to add child view controller that is already presented: <AVInfoPanelViewController: 0x1030cdc00>'
*** First throw call stack:
(0x18a7167bc 0x189a77510 0x18a7166a8 0x1ab425658 0x1b2ee9d54 0x1b2efcd60 0x1b2eaf3f0 0x1080f744c 0x107e021a8 0x107e01b3c 0x18de41c14 0x18de41ba8 0x18de48d28 0x18ad9e358 0x101fac5f0 0x101fc6228 0x101fe7278 0x101fbc6fc 0x101fbc63c 0x18a67a2e0 0x18a679418 0x18a673b34 0x1937e4d5c 0x1abb36588 0x1abb3ae80 0x1aae9dec4 0x108610174 0x1086100e4 0x108615140 0x189abd4d0)

I've logged a feedback (FB19554461) but it's getting awfully late in the dev cycle. So I've been trying to think of a workaround. The problem is that customInfoViewControllers is pretty declarative in nature. There are no properties or delegate methods I am aware of that let me know when they are displaying or not. One trick I came up with was seeing if my custom info view controller's view was "visible" or not - I put that in quotes because it turns out it can be visible even when I think it's not: when the transport bar is scrolled to the top, my custom VC still has its top pixels showing, so it gets a viewDidAppear call. So I tried to see if my view controller's view is completely visible, i.e. based on the results of the CGRect contains method. And that works! But the problem is it only accounts for my own custom info view controllers, and not the standard one that Apple provides. I can't think of a way at all to know whether that is showing. Any ideas?
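A sketch of the containment check the post describes, i.e. only treating the custom info panel as fully on screen when its view sits entirely inside the player view controller's bounds; names other than the AVKit property are placeholders, and as noted this still cannot detect Apple's own standard info panel:

    import AVKit
    import UIKit

    // Heuristic from the post: consider the content tabs "on screen" while the custom
    // info view controller's frame is fully contained in the player's view bounds.
    func isCustomInfoPanelFullyVisible(_ infoVC: UIViewController,
                                       in playerVC: AVPlayerViewController) -> Bool {
        guard let container = playerVC.view,
              let infoView = infoVC.viewIfLoaded else { return false }
        let frameInContainer = infoView.convert(infoView.bounds, to: container)
        return container.bounds.contains(frameInContainer)
    }

    // Hypothetical guard before mutating customInfoViewControllers:
    // if !playerVC.customInfoViewControllers.contains(where: { isCustomInfoPanelFullyVisible($0, in: playerVC) }) {
    //     playerVC.customInfoViewControllers = newPanels
    // }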
1
0
94
Aug ’25