Core Media

Efficiently process media samples and manage queues of media data using Core Media.

Core Media Documentation

Posts under Core Media tag

29 Posts
Post not yet marked as solved
1 Reply
364 Views
Given an AVAsset, I'm performing a Vision trajectory request on it and would like to write out a video asset that contains only the frames with trajectories (to filter out downtime in sports footage where no ball is moving). I'm unsure what a good approach would be, but as a starting point I tried the following pipeline:

1. Copy a sample buffer from the source AVAssetReaderOutput.
2. Perform the trajectory request on a Vision handler parameterized by the sample buffer.
3. For each resulting VNTrajectoryObservation (trajectory detected), use its associated CMTimeRange to configure a new AVAssetReader set to that time range.
4. Append the time-range-constrained sample buffer to one AVAssetWriterInput until the forEach is complete.

In code:

```swift
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
                                           to writerInput: AVAssetWriterInput,
                                           onQueue queue: DispatchQueue,
                                           sampleBufferProcessor: SampleBufferProcessor,
                                           completionHandler: @escaping () -> Void) {
    /* The writerInput continuously invokes this closure until finished or cancelled.
       It throws an NSInternalInconsistencyException if called more than once for the
       same writer. */
    writerInput.requestMediaDataWhenReady(on: queue) {
        var isDone = false
        /* While the writerInput accepts more data, process the sampleBuffer and then
           transfer the processed sample to the writerInput. */
        while writerInput.isReadyForMoreMediaData {
            if self.isCancelled {
                isDone = true
                break
            }
            // Get the next sample from the asset reader output.
            guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
                // The asset reader output has no more samples to vend.
                isDone = true
                break
            }
            let visionHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer,
                                                      orientation: self.orientation,
                                                      options: [:])
            do {
                try visionHandler.perform([self.detectTrajectoryRequest])
                if let results = self.detectTrajectoryRequest.results {
                    try results.forEach { result in
                        let assetReader = try AVAssetReader(asset: self.asset)
                        assetReader.timeRange = result.timeRange
                        let trackOutput = AVTrackOutputs.firstTrackOutput(
                            ofType: .video,
                            fromTracks: self.asset.tracks,
                            withOutputSettings: nil)
                        assetReader.add(trackOutput)
                        assetReader.startReading()
                        guard let sampleBuffer = trackOutput.copyNextSampleBuffer() else {
                            // The asset reader output has no more samples to vend.
                            isDone = true
                            return
                        }
                        // Append the sample to the asset writer input.
                        guard writerInput.append(sampleBuffer) else {
                            /* The writer could not append the sample buffer. The
                               `readingAndWritingDidFinish()` function handles any error
                               information from the asset writer. */
                            isDone = true
                            return
                        }
                    }
                }
            } catch {
                print(error)
            }
        }
        if isDone {
            /* Calling `markAsFinished()` on the asset writer input does the following:
               1. Unblocks any other inputs needing more samples.
               2. Cancels further invocations of this "request media data" callback block. */
            writerInput.markAsFinished()
            /* Tell the caller the reader output and writer input finished
               transferring samples. */
            completionHandler()
        }
    }
}

private func readingAndWritingDidFinish(assetReaderWriter: AVAssetReaderWriter,
                                        completionHandler: @escaping FinishHandler) {
    if isCancelled {
        completionHandler(.success(.cancelled))
        return
    }
    // Handle any error during processing of the video.
    guard sampleTransferError == nil else {
        assetReaderWriter.cancel()
        completionHandler(.failure(sampleTransferError!))
        return
    }
    // Evaluate the result of reading the samples.
    let result = assetReaderWriter.readingCompleted()
    if case .failure = result {
        completionHandler(result)
        return
    }
    /* Finish writing, and asynchronously evaluate the results from writing
       the samples. */
    assetReaderWriter.writingCompleted { result in
        completionHandler(result)
    }
}
```

When run, no error is caught in the first catch clause, none are caught in `readingAndWritingDidFinish(assetReaderWriter:completionHandler:)`, and the completion handler is called. Help with any of the following questions would be appreciated:

1. What is causing what appears to be indefinite loading? How might I isolate the problem further?
2. Am I misusing or misunderstanding how to selectively read from time ranges of AVAssetReader objects?
3. Should I forgo the AVAssetReader / AVAssetWriter route entirely and use the time ranges with AVAssetExportSession instead? I don't know how the two approaches compare, or what to consider when choosing between the two.
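Regarding question 3, here is a minimal sketch of the AVAssetExportSession alternative: instead of spinning up a second AVAssetReader per trajectory, the export session's `timeRange` property trims the output for you. The function name, `outputURL`, and completion shape are illustrative assumptions, not part of the original code.

```swift
import AVFoundation

/// Sketch: export a single detected trajectory time range with AVAssetExportSession.
/// `asset`, `range`, and `outputURL` stand in for the poster's own values.
func exportTrajectorySegment(from asset: AVAsset,
                             timeRange range: CMTimeRange,
                             to outputURL: URL,
                             completion: @escaping (Bool) -> Void) {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mov
    // Unlike a second AVAssetReader, the export session trims for you:
    // only samples inside `range` end up in the output file.
    session.timeRange = range
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}
```

The trade-off is one output file per time range; stitching the detected segments into a single asset would still need AVMutableComposition or the reader/writer pipeline above.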
Post marked as solved
3 Replies
1.7k Views
Using a few webcams that worked previously on both M1 and Intel MacBooks, I'm testing on the new MacBook Pro (2021), and the UVC streams are not showing up across apps. Here's my setup: I tested an Anker, an OBSBot, and an Opal, and none of them show up as UVC streams. For example, the Anker shows up in System Information > USB, but in UVC apps such as Zoom it does not appear. Running system_profiler SPCameraDataType -json returns just:

```
{
  "SPCameraDataType" : [
    {
      "_name" : "FaceTime HD Camera",
      "spcamera_model-id" : "FaceTime HD Camera",
      "spcamera_unique-id" : "47B4B64B70674B9CAD2BAE273A71F4B5"
    }
  ]
}
```
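As a cross-check independent of system_profiler, a sketch like the following lists every video device AVFoundation can discover; most UVC apps (Zoom and the like) sit on AVFoundation, so a camera missing here will be missing there too. The device-type list is an assumption of what's relevant for external USB cameras.

```swift
import AVFoundation

// Sketch: list every video capture device AVFoundation can see on macOS.
// `.externalUnknown` covers external cameras such as UVC webcams.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .externalUnknown],
    mediaType: .video,
    position: .unspecified)

for device in discovery.devices {
    print("\(device.localizedName) — uniqueID: \(device.uniqueID)")
}
```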
Post not yet marked as solved
0 Replies
348 Views
I’m using AVFoundation to access the camera on iPad. But with AVFoundation, CoreMedia is also imported, which in turn imports CoreAudio and CoreVideo. Keeping privacy concerns in mind, is there any way I can ensure that the app is never able to access the microphone or record video?
Post not yet marked as solved
0 Replies
341 Views
I’m using AVFoundation for image capture with the camera on iPad, but I’m not using any video- or audio-related functionality. It looks like with AVFoundation, CoreMedia, CoreVideo, and CoreAudio are also imported into any project. Is there any way I can remove these libraries (CoreMedia, CoreVideo, and CoreAudio) from my app? I have used otool to list all the frameworks and libraries used by my framework.
Post not yet marked as solved
1 Reply
396 Views
I'm facing a strange issue on two devices, an iPhone 6s and a 12 mini: while playing media, no audio comes from the media when the ringer is off. Is this a device-specific issue, a settings issue, an OS issue, or ultimately an app issue? Playback works fine in YouTube and other media apps.
Post not yet marked as solved
0 Replies
290 Views
In my application I am capturing a window using CGWindowListCreateImage:

```swift
let windowID = 12345
let windowImage = CGWindowListCreateImage(.null,
                                          .optionIncludingWindow,
                                          CGWindowID(windowID),
                                          [.bestResolution, .boundsIgnoreFraming])
```

This is working nicely. How can I capture the window with the cursor using this approach?
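CGWindowListCreateImage does not draw the cursor, so one possible workaround is compositing a cursor image onto the capture yourself. The sketch below assumes the capture and the cursor share one screen coordinate space; converting between the window's origin, flipped coordinates, and the backing scale factor is deliberately left out, and note that `NSCursor.current` reflects your own app's cursor, not necessarily the one another app is showing.

```swift
import AppKit

// Sketch: draw the current cursor image on top of a captured CGImage.
// Coordinate conversion (window origin, flipping, Retina scale) is omitted.
func compositeCursor(onto capture: CGImage) -> NSImage {
    let result = NSImage(size: NSSize(width: capture.width, height: capture.height))
    result.lockFocus()
    NSImage(cgImage: capture, size: .zero)
        .draw(at: .zero, from: .zero, operation: .copy, fraction: 1.0)
    let cursor = NSCursor.current
    let location = NSEvent.mouseLocation   // global screen coordinates
    let drawPoint = NSPoint(x: location.x - cursor.hotSpot.x,
                            y: location.y - cursor.hotSpot.y)
    cursor.image.draw(at: drawPoint, from: .zero, operation: .sourceOver, fraction: 1.0)
    result.unlockFocus()
    return result
}
```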
Post not yet marked as solved
0 Replies
325 Views
The CoreMediaIO Device Abstraction Layer (DAL) is analogous to CoreAudio’s Hardware Abstraction Layer (HAL). Just as the HAL deals with audio streams from audio hardware, the DAL handles video (and muxed) streams from video devices. DAL plug-ins reside at /Library/CoreMediaIO/Plug-Ins/DAL/. What is the life cycle of these DAL plug-ins?

- When do they start running?
- When do they get stopped?
- When do they get paused?
- Where can I see their logs?
- What happens when they are not in use?
- How can I check their performance to see whether they are efficient?

One famous example of a CoreMediaIO DAL plug-in, for anyone who doesn't know it, is the OBS Virtual Camera. Note: this question should not be marked too broad. I am not asking multiple questions; it's a single question about the life cycle of a CoreMediaIO DAL plug-in.
Post not yet marked as solved
1 Reply
446 Views
I have a CoreMediaIO-based DAL plug-in written in Swift which currently polls a website to get that string, which is not a good approach. I want to send that string to the DAL plug-in via operating-system-supported IPC (inter-process communication). But there are many ways to do IPC on macOS:

- Apple Events
- Distributed Notifications in Cocoa
- BSD Notifications
- Transferring raw data with CFMessagePort
- Communicating with BSD sockets
- Communicating with BSD pipes

In my case I just want one-way communication from an application to the DAL plug-in. I am new to macOS development, so I'm not sure which approach will be efficient and best for this case of one-way communication from the application to the DAL plug-in.
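Of the options listed, distributed notifications are among the simplest for fire-and-forget, one-way delivery. A minimal sketch, with a made-up notification name, might look like this (note that sandboxed senders cannot attach a userInfo dictionary to distributed notifications, which may matter for a hardened-runtime app):

```swift
import Foundation

// Sketch: one-way IPC via DistributedNotificationCenter. The application
// posts; the DAL plug-in (loaded into other processes) observes.
// "com.example.dalplugin.stringUpdate" is an invented example name.
let name = Notification.Name("com.example.dalplugin.stringUpdate")

// In the sending application:
DistributedNotificationCenter.default().postNotificationName(
    name,
    object: nil,
    userInfo: ["value": "hello plug-in"],
    deliverImmediately: true)

// In the DAL plug-in:
DistributedNotificationCenter.default().addObserver(
    forName: name, object: nil, queue: .main) { note in
    if let value = note.userInfo?["value"] as? String {
        print("Received: \(value)")
    }
}
```

CFMessagePort would be the next step up if delivery guarantees or larger payloads are needed, at the cost of managing a port name and run-loop source in the plug-in.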
Post not yet marked as solved
1 Reply
749 Views
When using AVCaptureVideoDataOutput/AVCaptureAudioDataOutput and AVAssetWriter to record video with cinematic extended video stabilization, the audio lags the video by up to 1-1.5 seconds, and as a result the video playback is frozen for the last 1-1.5 seconds of the recording. This does not happen when using AVCaptureMovieFileOutput. I want to know whether this can be fixed, or whether there is a workaround to synchronize the audio/video frames. How does AVCaptureMovieFileOutput handle it?
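A first diagnostic step, sketched below, is to log the presentation timestamps arriving on each path: extended stabilization buffers frames, so video sample buffers reach the delegate with PTS values well behind the concurrently delivered audio, and the printed skew should match the observed 1-1.5 s gap. The class and its wiring are illustrative assumptions, not the poster's code.

```swift
import AVFoundation

// Sketch: log audio vs. video presentation timestamps to make the
// stabilization-induced skew visible. Attach one instance as the delegate
// of both AVCaptureVideoDataOutput and AVCaptureAudioDataOutput.
final class SyncLogger: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let kind = (output is AVCaptureVideoDataOutput) ? "video" : "audio"
        print("\(kind) pts: \(CMTimeGetSeconds(pts))")
    }
}
```

If the skew is the cause, appending both streams with their original timestamps and keeping the writer session open until the delayed video frames drain (rather than finishing as soon as the audio ends) is one direction to explore; AVCaptureMovieFileOutput presumably performs this kind of timestamp-based interleaving internally.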