Core Media

Efficiently process media samples and manage queues of media data using Core Media.

Posts under Core Media tag

22 Posts

Post · Replies · Boosts · Views · Activity

How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes video input from a UVC device and processes it using Metal, eventually producing an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only gets 7 fps... Is there any way I can make it faster?
2
0
351
4d
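
One way to skip the CGImage round trip entirely is RealityKit's TextureResource.DrawableQueue, which hands out Metal textures you can write into directly on the GPU. A minimal sketch, assuming `textureResource` already backs the material and `sourceTexture`/`commandQueue` come from the existing Metal pipeline (names are illustrative):

```swift
import RealityKit
import Metal

// Sketch: replace the texture's contents by blitting the processed
// MTLTexture into a DrawableQueue drawable each frame, keeping the
// whole transfer on the GPU.
func makeQueue(width: Int, height: Int,
               for textureResource: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: queue)  // material now samples from the queue
    return queue
}

func present(_ sourceTexture: MTLTexture,
             on queue: TextureResource.DrawableQueue,
             commandQueue: MTLCommandQueue) throws {
    let drawable = try queue.nextDrawable()
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    // Dimensions and pixel format of source and drawable must match.
    blit.copy(from: sourceTexture, to: drawable.texture)  // GPU-side copy, no CPU readback
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}
```
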
[AVFCore] iOS 26.0 EXC_BAD_ACCESS from _customCompositorShouldCancelPendingFrames
Hi, I'm working on video editing software that lets you composite and export videos. I use a custom compositor to apply my effects, etc. In my crash dashboard, I am seeing reports of an EXC_BAD_ACCESS crash from objc_msgSend. Below is the stack trace.

```
libobjc.A.dylib          objc_msgSend
libdispatch.dylib        _dispatch_sync_invoke_and_complete_recurse
libdispatch.dylib        _dispatch_sync_f_slow
[symbolication failed]
libdispatch.dylib        _dispatch_client_callout
libdispatch.dylib        _dispatch_lane_barrier_sync_invoke_and_complete
AVFCore                  -[AVCustomVideoCompositorSession(AVCustomVideoCompositorSession_FigCallbackHandling) _customCompositorShouldCancelPendingFrames]
AVFCore                  _customCompositorShouldCancelPendingFramesCallback
MediaToolbox             remoteVideoCompositor_HandleVideoCompositorClientMessage
CoreMedia                __figXPCConnection_CallClientMessageHandlers_block_invoke
libdispatch.dylib        _dispatch_call_block_and_release
libdispatch.dylib        _dispatch_client_callout
libdispatch.dylib        _dispatch_lane_serial_drain
libdispatch.dylib        _dispatch_lane_invoke
libdispatch.dylib        _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib        _dispatch_workloop_worker_thread
libsystem_pthread.dylib  _pthread_wqthread
libsystem_pthread.dylib  start_wqthread
```

What stood out to me is that this is only being reported from iOS 26.0+ devices, and part of the stack trace failed to be symbolicated ([symbolication failed] above). I'm 90% confident that this is Apple code, not my app's code, and I cannot reproduce this locally. Is this a known issue? What are the possible root causes, and how can I verify/eliminate them? Thanks,
0
0
34
1w
Why does a DriverKit extension need a CMIO extension?
I developed a DriverKit extension based on overriding-the-default-usb-video-class-extension, but the link didn't give the details of the implementation. I asked DTS, who gave two tips:

1. Do you also have a CMIO extension to load in place of the default one from overriding-the-default-usb-video-class-extension?
2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID.

I want to know why a DriverKit extension needs a CMIO extension, and what the data and control flow between them is.
2
0
94
Jun ’25
WideFOV - APMP - Stereo
Does anyone have a template of an Apple Projected Media Profile format description, or a file of a stereo wide-FOV video? Use case: I have two compatible cameras that I stereo-sync, and I want to move the projection information from the compatible video to the Spatial video that combines them. Every version I can come up with crashes the AVP, and when viewing as Spatial in Tahoe I just get a black screen.
4
0
146
Jun ’25
How to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
0
0
75
May ’25
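
For the CoreMediaIO side of the question above, enumeration goes through the C property API on the system object. A minimal sketch of listing CMIO device IDs (this only shows the API shape; whether it is safe in a daemon is exactly the open question in the post):

```swift
import CoreMediaIO

// Sketch: enumerate CoreMediaIO device object IDs from the system
// object. kCMIOObjectPropertyElementMain is the macOS 12+ name
// (older SDKs use kCMIOObjectPropertyElementMaster).
func copyCMIODeviceIDs() -> [CMIOObjectID] {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMain)
    )
    let systemObject = CMIOObjectID(kCMIOObjectSystemObject)

    var dataSize: UInt32 = 0
    guard CMIOObjectGetPropertyDataSize(systemObject, &address, 0, nil, &dataSize) == 0 else {
        return []
    }
    let count = Int(dataSize) / MemoryLayout<CMIOObjectID>.size
    var devices = [CMIOObjectID](repeating: 0, count: count)
    var dataUsed: UInt32 = 0
    guard CMIOObjectGetPropertyData(systemObject, &address, 0, nil,
                                    dataSize, &dataUsed, &devices) == 0 else {
        return []
    }
    return devices
}
```
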
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixelBuffer using CVPixelBufferCreate with the following attributes: kCVPixelBufferIOSurfacePropertiesKey as String: true, kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true. When I copy the data from the vImagePixelBuffer "rotatedImageBuffer", I get the following error: Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000). I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error. However, CVPixelBufferCreateWithBytes does not let you include attributes (see linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.

```swift
// Copy to pixel buffer
let attributes: NSDictionary = [
    true : kCVPixelBufferIOSurfacePropertiesKey,
    true : kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 Int(rotatedImageBuffer.width),
                                 Int(rotatedImageBuffer.height),
                                 kCVPixelFormatType_32BGRA,
                                 attributes,
                                 &colorBuffer)
// let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer)
// ^ does NOT produce the error, but also does not have attributes

guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
    print("Failed to create buffer")
    return
}

let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
    print("Failed to lock base address")
    return
}

let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
// memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error

CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
```
1
0
96
Apr ’25
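
A likely cause (an educated guess, not confirmed in the thread): the dictionary literal above puts `true` in the key position and the `kCVPixelBuffer…` constants in the value position, so CVPixelBufferCreate never sees the IOSurface attributes; additionally, a bulk copy of `rowBytes * height` bytes overruns the destination whenever the new buffer's own bytes-per-row differs from the vImage buffer's. A corrected sketch with the keys in the right place and a stride-aware copy (`rotatedImageBuffer` is the vImage buffer from the post):

```swift
import CoreVideo

// Sketch: keys and values in the right order; IOSurface backing is
// requested with an (empty) dictionary value, per the CV docs.
let attributes: [String: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey as String: [:],
    kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 Int(rotatedImageBuffer.width),
                                 Int(rotatedImageBuffer.height),
                                 kCVPixelFormatType_32BGRA,
                                 attributes as CFDictionary,
                                 &colorBuffer)
if status == kCVReturnSuccess, let colorBuffer {
    CVPixelBufferLockBaseAddress(colorBuffer, [])
    let dst = CVPixelBufferGetBaseAddress(colorBuffer)!
    let dstStride = CVPixelBufferGetBytesPerRow(colorBuffer)  // may include padding
    let srcStride = rotatedImageBuffer.rowBytes
    let rowLength = min(srcStride, dstStride)
    // Copy row by row so neither buffer's stride is overrun.
    for row in 0..<Int(rotatedImageBuffer.height) {
        memcpy(dst + row * dstStride,
               rotatedImageBuffer.data + row * srcStride,
               rowLength)
    }
    CVPixelBufferUnlockBaseAddress(colorBuffer, [])
}
```
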
Audio / Video sync issue on iOS using AVSampleBufferRenderSynchronizer
My current app implements a custom video player based on an AVSampleBufferRenderSynchronizer synchronizing two renderers: an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers, and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers. The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time set to the presentation timestamp of the first decoded image.

Presentation timestamps of video and audio sample buffers are consistent, and on most streams the audio and video are correctly synchronized. However, on some network streams on iOS, the audio and video aren't synchronized, with a time difference that seems to increase over time. On the other hand, with the same player code and network streams on macOS, the synchronization always works fine. This reminds me of something I've read about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again. So, any help / hints on this sync problem will be greatly appreciated! :)
2
0
1.1k
Apr ’25
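
For readers unfamiliar with the arrangement the post describes, here is a minimal sketch of the two-renderer setup (names are illustrative; the drift question itself remains open):

```swift
import AVFoundation

// Sketch: one synchronizer driving a video layer and an audio
// renderer, started at the first video frame's PTS as in the post.
let synchronizer = AVSampleBufferRenderSynchronizer()
let videoLayer = AVSampleBufferDisplayLayer()
let audioRenderer = AVSampleBufferAudioRenderer()

synchronizer.addRenderer(videoLayer)
synchronizer.addRenderer(audioRenderer)

// Decoded CMSampleBuffers are enqueued on each renderer as they
// arrive, e.g.:
//   videoLayer.enqueue(videoSampleBuffer)
//   audioRenderer.enqueue(audioSampleBuffer)

func start(atFirstFramePTS pts: CMTime) {
    // Anchors the shared timebase: media time `pts` maps to "now".
    synchronizer.setRate(1.0, time: pts)
}
```
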
Assistance Needed: CoreMediaErrorDomain Error -12971
Hello Apple Developer Community, I am trying to play an HLS stream using the React Native Video player (underneath it uses AVPlayer). I am able to play the stream smoothly, but in some cases the player cannot play the stream properly.

Behaviour with react-native-video: I am getting the error below.

Error Code: -12971
Domain: CoreMediaErrorDomain
Localised Description: The operation couldn’t be completed. (CoreMediaErrorDomain error -12971.)
Target: 2457

The error does not provide a specific failure reason or recovery suggestion, which makes troubleshooting challenging.

Behaviour with AVPlayer in a native iOS project: video playback stops after playing a few seconds. AVPlayer configuration:

```swift
player.currentItem?.preferredForwardBufferDuration = 1
player.automaticallyWaitsToMinimizeStalling = true
```

N.B.: The same buffer duration is working perfectly for other streams.

Stream properties: video resolution 1280 x 720. I have attached an overview report generated from MediaStreamValidator. I would appreciate any insights or suggestions on how to address this error. Has anyone in the community experienced a similar issue, or does anyone have advice on potential solutions? Thank you for your help!
0
1
112
Apr ’25
Alternative for crashing API MPMediaItemArtwork
When setting the now-playing info for playing media in MPNowPlayingInfoCenter, we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734). On iOS 17 this gave the warning that the completion handler was not run on the main thread. I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231 but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.

```swift
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()
        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
```

I'm wondering if there is an alternative method to set the now-playing artwork?
4
0
872
Feb ’25
Macro mode in AVCaptureDevice (custom camera)
Hi, I would like to use macro mode with a custom camera using AVCaptureDevice in my project. This feature would automatically adjust and switch between lenses to get a clear close-up image. It looks like this feature is not available, and there are no public APIs from Apple to achieve macro mode. Is there a way to get this functionality in a custom camera without losing image quality? Please let me know if this is possible. Thank you, Adil Thamarasseri
7
1
1.2k
Feb ’25
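
One avenue worth noting (an assumption about what might fit here, not a documented "macro mode" API): automatic lens switching on close subjects comes from the virtual multi-camera devices, so capturing from a virtual device such as .builtInTripleCamera, rather than a single physical lens, lets AVFoundation switch constituent cameras for you. A sketch:

```swift
import AVFoundation

// Sketch: prefer a virtual device so the system can switch between
// constituent lenses (including the ultra-wide used for close-ups).
// Actual close-up behavior is hardware- and OS-dependent.
func makeCloseUpCapableDevice() -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back
    )
    // DiscoverySession returns devices in deviceTypes order, so the
    // first result is the most capable option available.
    return discovery.devices.first
}
```
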
Debug MediaExtension plugin in system executable?
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation. My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e. the process hosting my plugin instance that is managing the MESampleBuffer protocol calls, for example). Is there a method to configure Xcode to attach to this background process for interactive debugging? Right now I have to use Console + print, which is not fun or productive. Does Apple have a working example of a MediaExtension anywhere? This is an exciting API that is very under-documented. I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused. Any assistance is highly appreciated!
0
0
497
Dec ’24
Error Domain=NSOSStatusErrorDomain Code=-16384, -16155, -16512
I’ve built a custom media player using AVSampleBufferAudioRenderer and AVSampleBufferRenderSynchronizer, and overall it works great! However, I’ve noticed some unusual logs popping up:

Domain: NSOSStatusErrorDomain
Error codes: -16384, -16155, -16512

The error -16512 keeps happening repeatedly for one of our users, preventing them from playing any media at all. I’ve searched around but can’t find any documentation explaining what these errors mean. Has anyone run into this issue or have any suggestions? Any help would be hugely appreciated! Thanks!
1
0
967
Dec ’24
Selecting an appropriate AVCaptureDeviceFormat
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior such that video can be recorded at a range of different resolutions. There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step.

However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented?

Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator? If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property.

Are there any guarantees about what types of formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so, I could filter the formats down to only those with this setting without risking filtering out all of the supported formats.

I suspect I may be missing something obvious. Any help would be greatly appreciated!
3
0
728
Dec ’24
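
The filtering strategy the post describes (smallest format that still covers the target, restricted to one pixel format) might look like this. A sketch only; the '420v' restriction is the poster's assumption, not a documented guarantee:

```swift
import AVFoundation

// Sketch: pick the smallest '420v' format whose dimensions cover the
// requested size; returns nil if nothing covers it.
func selectFormat(for device: AVCaptureDevice,
                  targetWidth: Int32, targetHeight: Int32) -> AVCaptureDevice.Format? {
    device.formats
        .filter { format in
            let desc = format.formatDescription
            guard CMFormatDescriptionGetMediaSubType(desc) == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange else {
                return false  // '420v' only, per the strategy in the post
            }
            let dims = CMVideoFormatDescriptionGetDimensions(desc)
            return dims.width >= targetWidth && dims.height >= targetHeight
        }
        .min { a, b in
            let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
            let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
            return da.width * da.height < db.width * db.height
        }
}

// Applying the chosen format requires locking the device:
//   try device.lockForConfiguration()
//   device.activeFormat = chosenFormat
//   device.unlockForConfiguration()
```
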
AVAssetReader and AVAssetWriter: ideal settings
Hello, To create a test project, I want to understand how the video and audio settings would look for a destination video whose content comes from a source video. I obtained the output from the source video's audio and video tracks as follows:

```swift
// Audio
let audioSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
] as [String: Any]
var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!, outputSettings: audioSettings)

// Video
let videoSettings = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
    kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
] as [String: Any]
var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)
```

With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer. How can I add it to the destination video? Should I use a while loop, considering I already have the AVAssetWriter set up? Something like this:

```swift
while let buffer = videoOutput.copyNextSampleBuffer() {
    if let imgBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let frame = imgBuffer as CVPixelBuffer
        let time = CMSampleBufferGetPresentationTimeStamp(buffer)
        adaptor.append(frame, withPresentationTime: time)
    }
}
```

Lastly, regarding the destination video: how should the AVAssetWriterInput for audio and the PixelBuffer of the destination video be set up? Please provide an example, something like:

```swift
let audioSettings = […] as [String: Any]
```

Looking forward to your response.
5
0
592
Dec ’24
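
Since the question asks what the writer side could look like, here is a minimal sketch of one common configuration (H.264 + AAC; the parameter values are illustrative choices, not "ideal" settings from Apple, and `outputURL` is assumed):

```swift
import AVFoundation

// Sketch: writer inputs matching the reader outputs above.
// Assumes a throwing context for `try`.
let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,    // match the source track's naturalSize
    AVVideoHeightKey: 1080,
])
videoInput.expectsMediaDataInRealTime = false

let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: videoInput,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])

let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000,
])
audioInput.expectsMediaDataInRealTime = false

writer.add(videoInput)
writer.add(audioInput)
writer.startWriting()
writer.startSession(atSourceTime: .zero)  // or the first sample's PTS
// ...then drain the reader outputs, appending via `adaptor` (video)
// and `audioInput.append(_:)` (audio), and end with
// writer.finishWriting { ... }
```
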
AV1 Hardware Decoding
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:

```swift
VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1) // YES
```

Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:

```
{
    mediaType: 'vide'
    mediaSubType: 'av01'
    mediaSpecific: {
        codecType: 'av01'
        dimensions: 394 x 852
    }
    extensions: {{
        CVFieldCount = 1;
        CVImageBufferChromaLocationBottomField = Left;
        CVImageBufferChromaLocationTopField = Left;
        CVPixelAspectRatio = {
            HorizontalSpacing = 1;
            VerticalSpacing = 1;
        };
        FullRangeVideo = 0;
    }}
}
```

but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume). So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿. VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1. As of today I am using Xcode 15.0 with the iOS 17.0.0 SDK.
8
3
8.9k
Nov ’24
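
codecExtensionNotFoundErr and the missing-extensions suspicion point at the sample description atom. The AVC/HEVC helpers add an 'avcC'/'hvcC' atom under kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms; by analogy (an assumption, not something confirmed by Apple documentation for AV1), an AV1 description would need the 'av1C' AV1CodecConfigurationBox from the stream's ISO-BMFF binding. A sketch:

```swift
import CoreMedia

// Sketch: build an AV1 CMFormatDescription carrying an 'av1C'
// configuration box. `av1CBoxPayload` must be the binary
// AV1CodecConfigurationRecord extracted from the stream; building
// that record is out of scope here.
func makeAV1FormatDescription(width: Int32, height: Int32,
                              av1CBoxPayload: Data) throws -> CMFormatDescription {
    let extensions = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms as String:
            ["av1C": av1CBoxPayload]
    ] as CFDictionary

    var formatDescription: CMFormatDescription?
    let status = CMVideoFormatDescriptionCreate(
        allocator: kCFAllocatorDefault,
        codecType: kCMVideoCodecType_AV1,
        width: width,
        height: height,
        extensions: extensions,
        formatDescriptionOut: &formatDescription)
    guard status == 0, let formatDescription else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return formatDescription
}
```
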
IOSurface with System Extensions
Hi All, I'm working on a camera system extension where the main app is supposed to transfer a video stream to the camera extension using IOSurface memory sharing. I have built a sample app that contains all the logic, but without a camera extension. So I'm essentially using IOSurface to render a video in one SwiftUI view and show the result in another SwiftUI view, just for testing purposes. And everything works fine so far.

Now, when moving the receiver code to the camera extension, I'm having problems accessing the IOSurface via its ID. I am sharing the IOSurface ID via UserDefaults, and I know from the logs that the ID is correctly transferred. Here is the code that uses IOSurfaceLookup to get the IOSurface; it fails with the given message. The error message prints the surface ID, which matches the one I get and print in the main app.

```swift
private var surfaceId: Int = -1 {
    didSet {
        logger.info("surfaceId has changed")
        if surfaceId == -1 {
            stopReceivingFrames()
            ioSurface = nil
        } else {
            guard let surface = IOSurfaceLookup(IOSurfaceID(surfaceId)) else {
                logger.error("failed to lookup IOSurface with ID: \(self.surfaceId)")
                return
            }
            self.ioSurface = surface
            logger.info("surface set, now starting receiving frames")
            startReceivingFrames()
        }
    }
}
```

My gut feeling says this issue might be related to some missing entitlement or sandboxing. In general, I have a working camera extension; I'm just not able to render a video in the main app and send it over to the camera extension to overlay another webcam. Both the main app and the camera extension are in the same Xcode workspace and share the same App Group.

In short, my actual questions are:

1. Is there any entitlement required for using IOSurface between an app and a camera system extension?
2. Is using IOSurface actually possible in system extensions?
3. Is there any specific setting/requirement that I need to handle to make this work?
0
1
584
Nov ’24
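
One commonly suggested alternative to global ID lookup (an assumption about the fix, not verified for CMIO extensions specifically) is to hand the surface across the process boundary as a mach port or XPC object, which carries the access right with it instead of relying on IOSurfaceLookup succeeding across sandbox boundaries. A sketch of the mach-port variant:

```swift
import IOSurface

// Sender (main app): wrap the surface in a mach send right. How the
// port travels to the extension (XPC, etc.) is up to the app's
// architecture and is not shown here.
func makePort(for surface: IOSurfaceRef) -> mach_port_t {
    IOSurfaceCreateMachPort(surface)
}

// Receiver (extension): reconstruct the surface from the port.
func lookupSurface(from port: mach_port_t) -> IOSurfaceRef? {
    guard let surface = IOSurfaceLookupFromMachPort(port) else { return nil }
    // The send right is no longer needed once the surface is retained.
    mach_port_deallocate(mach_task_self_, port)
    return surface
}
```
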
Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size in our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:

```swift
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
```

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be much better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
0
1
580
Nov ’24
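
For the export path (this does not help live playback, and it is a different technique than the composition handler above), AVAssetReader's track output can be asked for downscaled frames by putting width/height keys into its video settings, letting the decoder perform the scale. A sketch, under the assumption that the decoder honors the requested size:

```swift
import AVFoundation

// Sketch: read decoded frames at a reduced size instead of the
// track's native resolution.
func makeScaledOutput(for track: AVAssetTrack,
                      maxWidth: Int, maxHeight: Int) -> AVAssetReaderTrackOutput {
    let settings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: maxWidth,
        kCVPixelBufferHeightKey as String: maxHeight,
    ]
    return AVAssetReaderTrackOutput(track: track, outputSettings: settings)
}
```
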
Managing Excessive Memory Usage with AVAssetReader and AVAssetWriter
Hello, I'm a deaf-blind programmer. I'm experiencing memory issues in my app. Essentially, I'm writing a video. In this output video, I get content from two sources. The first source is an already-recorded video of 18 seconds (just for testing); it will be shown at the beginning of the output video. The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after the 18-second video. So, in the end, the output video should be: the 18-second video + the array of photos as video images and, for audio, the buffers from AVSpeechSynthesizer.write().

However, my app crashes as soon as I start the first process. I'm using AVAssetWriter to write the video and AVAssetReader to read it. Below, I'll show the code where I get the CMSampleBuffers. I'd like an example of how to add the 18-second video to the beginning of the output video; it doesn't need to be a big piece of code. Here it is:

```swift
// Variables
var audioReaderBuffers = [CMSampleBuffer]()
var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()

// Get CMSampleBuffers of a video asset
if let videoURL = videoURL {
    let videoAsset = AVAsset(url: videoURL)
    Task {
        let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
        let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!

        let reader = try AVAssetReader(asset: videoAsset)
        let videoSettings = [
            kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
            kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
        ] as [String: Any]
        let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)

        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2
        ] as [String: Any]
        let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)

        reader.add(readerVideoOutput)
        reader.add(readerAudioOutput)
        reader.startReading()

        // Video CMSampleBuffers
        while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
            autoreleasepool {
                if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                    let pixBuf = imgBuffer as CVPixelBuffer
                    let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                    videoReaderBuffers.append((frame: pixBuf, time: pTime))
                }
            }
        }
    }
}
```
1
0
527
Nov ’24
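
The crash pattern above (accumulating every decoded CVPixelBuffer in an array) is usually the memory problem itself: even a few hundred 32BGRA frames of a 1080p video amount to gigabytes. A sketch of the usual fix, streaming samples straight from the reader into the writer instead of buffering them (`writer`, `videoInput`, and the reader objects are assumed to be set up elsewhere; `nextPhotoSampleBuffer()` is a hypothetical helper that wraps each photo in a timed CMSampleBuffer continuing after the intro's duration):

```swift
import AVFoundation

// Sketch: pull-model writing. The writer asks for data whenever its
// internal queue has room, so memory stays bounded; no frame arrays.
func writeIntroThenPhotos(readerVideoOutput: AVAssetReaderTrackOutput,
                          videoInput: AVAssetWriterInput,
                          nextPhotoSampleBuffer: @escaping () -> CMSampleBuffer?,
                          completion: @escaping () -> Void) {
    let queue = DispatchQueue(label: "video.writing")
    videoInput.requestMediaDataWhenReady(on: queue) {
        while videoInput.isReadyForMoreMediaData {
            if let sample = readerVideoOutput.copyNextSampleBuffer() {
                // 18-second intro: decoded frame goes straight through.
                if !videoInput.append(sample) { completion(); return }
            } else if let photoSample = nextPhotoSampleBuffer() {
                // Then the photo frames, timed after the intro.
                if !videoInput.append(photoSample) { completion(); return }
            } else {
                videoInput.markAsFinished()
                completion()
                return
            }
        }
    }
}
```
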