Core Video


Use Core Video to process digital video through a pipeline-based API, with support for both Metal and OpenGL.

Core Video Documentation

Posts under Core Video tag

17 Posts
Post not yet marked as solved
0 Replies
157 Views
In the Swift language, CVMetalTextureCacheCreateTextureFromImage returns a CVMetalTexture, and CVMetalTexture is a Swift class, so CVBufferRelease doesn't need to be called manually. My question is: should I use a variable to keep a strong reference until the GPU has finished (i.e. until the addCompletedHandler callback)?
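For reference, the usual pattern is to let the command buffer's completion handler capture the CVMetalTexture, so it (and the CVPixelBuffer backing it) stays alive until the GPU has finished. A minimal sketch, assuming an existing texture cache, pixel buffer and command buffer (all names here are placeholders):

```swift
import CoreVideo
import Metal

func encode(pixelBuffer: CVPixelBuffer,
            textureCache: CVMetalTextureCache,
            commandBuffer: MTLCommandBuffer,
            width: Int, height: Int) {
    var cvTextureOut: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache, pixelBuffer, nil,
        .bgra8Unorm, width, height, 0, &cvTextureOut)
    guard status == kCVReturnSuccess,
          let cvTexture = cvTextureOut,
          let texture = CVMetalTextureGetTexture(cvTexture) else { return }

    // ... encode render/compute passes that read `texture` ...

    commandBuffer.addCompletedHandler { _ in
        // Capturing cvTexture keeps the CVMetalTexture (and its pixel buffer)
        // alive until the GPU has finished executing this command buffer.
        _ = cvTexture
    }
    commandBuffer.commit()
}
```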
Posted by ZoGo996. Last updated.
Post not yet marked as solved
0 Replies
253 Views
Hi guys, I'm implementing FairPlay support for a video streaming application. I've managed to get as far as generating the SPC and acquiring a license from the license server. However when it comes to parsing the license (CKC) returned from the server, the FPS module returns error code -42671. Has anyone else faced this before and / or knows what the fix is? I thought passing it the license should be enough unless additional data is required?
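For reference, the hand-off of the CKC when using AVContentKeySession is normally just the two calls sketched below (keyRequest and ckcData are placeholders); if -42671 appears at that point, it may be worth checking that ckcData really is the raw CKC bytes (already base64-decoded and stripped of any JSON envelope the server wraps around it) rather than a problem with the call itself:

```swift
import AVFoundation

// keyRequest: the AVContentKeyRequest delivered to the AVContentKeySessionDelegate
// ckcData:   the raw CKC returned by the license server
func deliver(ckcData: Data, to keyRequest: AVContentKeyRequest) {
    let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
    keyRequest.processContentKeyResponse(response)
}
```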
Posted by ThetaSeg. Last updated.
Post not yet marked as solved
2 Replies
282 Views
Hi everyone, I need to add a spatial video maker to my app, which is written in Objective-C. I found some reference code in Swift; can you help me convert it to Objective-C?

let left = CMTaggedBuffer(
    tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)],
    pixelBuffer: leftEyeBuffer)
let right = CMTaggedBuffer(
    tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
    pixelBuffer: rightEyeBuffer)
let result = adaptor.appendTaggedBuffers(
    [left, right], withPresentationTime: leftPresentationTs)
Posted by pinkywon. Last updated.
Post not yet marked as solved
0 Replies
332 Views
Our DJ application Mixxx renders scrolling waveforms at 60 Hz. This looks perfectly smooth on an older 2015 MacBook Pro. However, it looks jittery on a new M1 device with "ProMotion" enabled; selecting 60 Hz in the display settings fixes the issue. We are looking for a way to tell macOS that it can expect 60 Hz renderings from Mixxx and must not display them early (at 120 Hz) even if the pictures are ready. The alternative would be to read out the display settings and ask the user to select 60 Hz. Is there an API to hint the display driver that we render at 60 Hz, or to read out the refresh rate settings?
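Two APIs that may help, sketched below (not verified against Mixxx's rendering path, and the class and selector names are hypothetical): CGDisplayCopyDisplayMode can read the current refresh rate, and on macOS 14 and later a CADisplayLink obtained from the view accepts a preferred frame-rate range, which is the supported way to declare that you only produce 60 frames per second:

```swift
import AppKit
import QuartzCore

final class WaveformView: NSView {
    private var waveformLink: CADisplayLink?

    // Read the refresh rate currently configured for the main display (e.g. 60 or 120 Hz).
    static func mainDisplayRefreshRate() -> Double? {
        CGDisplayCopyDisplayMode(CGMainDisplayID())?.refreshRate
    }

    // macOS 14+: ask the system to drive rendering at (preferably) 60 Hz.
    func startDisplayLink() {
        guard #available(macOS 14.0, *) else { return }
        let link = displayLink(target: self, selector: #selector(render(_:)))
        link.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
        link.add(to: .main, forMode: .common)
        waveformLink = link
    }

    @objc private func render(_ link: CADisplayLink) {
        // Draw the next waveform frame here.
        needsDisplay = true
    }
}
```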
Posted by daschuer. Last updated.
Post not yet marked as solved
0 Replies
448 Views
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all WWDC videos on this topic but still have the doubts expressed below. Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what could be the max and min values of these samples in either case? I see that setting workingColorSpace to NSNull() in the context creation options will guarantee receiving the samples as-is, normalised to [0-1]. But then it is possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader. In short, how do you safely process HDR pixel buffers received from the camera (which are 10-bit 4:2:0 YCbCr, which I believe have gamma correction applied, so the Y in YCbCr is actually Y')? Can the AVFoundation team clarify this?
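Not an authoritative answer, but one way to remove the ambiguity in the kernel is to pin the working space explicitly when creating the CIContext, so that the samples a CIKernel sees are linear in a known space (possibly above 1.0 for HDR highlights). A sketch under that assumption:

```swift
import CoreImage
import CoreVideo

// A CIContext whose working space is extended linear BT.2020: Core Image converts
// the tagged input (e.g. HLG 10-bit YCbCr from the camera) into this space before
// any kernel runs, so kernel samples are linear RGB rather than Y'CbCr.
let workingSpace = CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
let hdrContext = CIContext(options: [
    .workingColorSpace: workingSpace,
    .workingFormat: CIFormat.RGBAh   // half float, preserves extended-range values
])

func image(from hdrPixelBuffer: CVPixelBuffer) -> CIImage {
    // CIImage(cvPixelBuffer:) picks up the buffer's color attachments
    // (primaries, transfer function, YCbCr matrix) automatically.
    CIImage(cvPixelBuffer: hdrPixelBuffer)
}
```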
Posted. Last updated.
Post not yet marked as solved
0 Replies
397 Views
I have set up AVCaptureVideoDataOutput with 10-bit 4:2:0 YCbCr sample buffers. I use Core Image to process these pixel buffers for simple scaling/translation.

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
/* srcImage is created from the sample buffer received from the video data output */
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

I then set the color attachments on this dstPixelBuffer according to the colorProfile chosen in the app settings (BT.709 or BT.2020).

switch colorProfile {
case .BT709:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
case .HLG2100:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}

These pixel buffers are then vended to AVAssetWriter, whose videoSettings is set to the recommended settings from the video data output. But the output looks completely washed out, especially for SDR (BT.709). What am I doing wrong?
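One thing that may be worth checking (a guess, not a confirmed fix): the colorSpace argument of CIContext.render describes the destination, and it should agree with the attachments being set on dstPixelBuffer; rendering with srcImage.colorSpace while tagging the buffer as BT.709 or HLG could account for a washed-out result. A sketch with hypothetical names:

```swift
import CoreImage
import CoreVideo

// Render into dstPixelBuffer using a destination color space that matches the
// CVBufferSetAttachment tags that will be applied to it.
func render(_ image: CIImage,
            to dstPixelBuffer: CVPixelBuffer,
            using context: CIContext,
            isHLG: Bool) {
    let dstColorSpace = isHLG
        ? CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
        : CGColorSpace(name: CGColorSpace.itur_709)!
    context.render(image,
                   to: dstPixelBuffer,
                   bounds: image.extent,
                   colorSpace: dstColorSpace)
}
```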
Posted. Last updated.
Post not yet marked as solved
0 Replies
383 Views
I have been allocating pixel buffers from CVPixelBufferPool and the code has been adapted from older various Apple sample codes such as RosyWriter. I see direct API such as CVPixelBufferCreate are highly performant and rarely cause frame drops as opposed to allocating from pixel buffer pool where I regularly get frame drops. Is this a known issue or a bad use of API? Here is the code for creating pixel buffer pool: private func createPixelBufferPool(_ width: Int32, _ height: Int32, _ pixelFormat: FourCharCode, _ maxBufferCount: Int32) -> CVPixelBufferPool? { var outputPool: CVPixelBufferPool? = nil let sourcePixelBufferOptions: NSDictionary = [kCVPixelBufferPixelFormatTypeKey: pixelFormat, kCVPixelBufferWidthKey: width, kCVPixelBufferHeightKey: height, kCVPixelFormatOpenGLESCompatibility: true, kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary] let pixelBufferPoolOptions: NSDictionary = [kCVPixelBufferPoolMinimumBufferCountKey: maxBufferCount] CVPixelBufferPoolCreate(kCFAllocatorDefault, pixelBufferPoolOptions, sourcePixelBufferOptions, &outputPool) return outputPool } private func createPixelBufferPoolAuxAttributes(_ maxBufferCount: size_t) -> NSDictionary { // CVPixelBufferPoolCreatePixelBufferWithAuxAttributes() will return kCVReturnWouldExceedAllocationThreshold if we have already vended the max number of buffers return [kCVPixelBufferPoolAllocationThresholdKey: maxBufferCount] } private func preallocatePixelBuffersInPool(_ pool: CVPixelBufferPool, _ auxAttributes: NSDictionary) { // Preallocate buffers in the pool, since this is for real-time display/capture var pixelBuffers: [CVPixelBuffer] = [] while true { var pixelBuffer: CVPixelBuffer? = nil let err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, pool, auxAttributes, &pixelBuffer) if err == kCVReturnWouldExceedAllocationThreshold { break } assert(err == noErr) pixelBuffers.append(pixelBuffer!) } pixelBuffers.removeAll() } And here is the usage: bufferPool = createPixelBufferPool(outputDimensions.width, outputDimensions.height, outputPixelFormat, Int32(maxRetainedBufferCount)) if bufferPool == nil { NSLog("Problem initializing a buffer pool.") success = false break bail } bufferPoolAuxAttributes = createPixelBufferPoolAuxAttributes(maxRetainedBufferCount) preallocatePixelBuffersInPool(bufferPool!, bufferPoolAuxAttributes!) And then creating pixel buffers from pool err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes( kCFAllocatorDefault, bufferPool!, bufferPoolAuxAttributes, &dstPixelBuffer ) if err == kCVReturnWouldExceedAllocationThreshold { // Flush the texture cache to potentially release the retained buffers and try again to create a pixel buffer err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes( kCFAllocatorDefault, bufferPool!, bufferPoolAuxAttributes, &dstPixelBuffer ) } if err != 0 { if err == kCVReturnWouldExceedAllocationThreshold { NSLog("Pool is out of buffers, dropping frame") } else { NSLog("Error at CVPixelBufferPoolCreatePixelBuffer %d", err) } break bail } When used with AVAssetWriter, I see lot of frame drops caused due to kCVReturnWouldExceedAllocationThreshold error. No frame drops are seen when I directly allocate the pixel buffer without using a pool: CVPixelBufferCreate(kCFAllocatorDefault, Int(dimensions.width), Int(dimensions.height), outputPixelFormat, sourcePixelBufferOptions, &dstPixelBuffer) What could be the cause?
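One detail in the retry path above: the comment talks about flushing the texture cache before retrying, but no flush call is actually made, so the retry will usually fail with the same kCVReturnWouldExceedAllocationThreshold. A sketch of what that could look like, assuming a CVMetalTextureCache (here textureCache) is what keeps the vended buffers alive:

```swift
import CoreVideo
import Metal

func dequeuePixelBuffer(from pool: CVPixelBufferPool,
                        textureCache: CVMetalTextureCache,
                        maxBufferCount: Int) -> CVPixelBuffer? {
    let aux = [kCVPixelBufferPoolAllocationThresholdKey: maxBufferCount] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    var err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(
        kCFAllocatorDefault, pool, aux, &pixelBuffer)
    if err == kCVReturnWouldExceedAllocationThreshold {
        // Actually flush the Metal texture cache so textures that still retain
        // pool buffers release them, then try once more.
        CVMetalTextureCacheFlush(textureCache, 0)
        err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(
            kCFAllocatorDefault, pool, aux, &pixelBuffer)
    }
    return err == kCVReturnSuccess ? pixelBuffer : nil
}
```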
Posted. Last updated.
Post not yet marked as solved
0 Replies
302 Views
On macOS 14, I send CMSampleBuffers whose alpha (transparency) is 0 through the send function of CMIOExtensionStream. On the receiving end, an AVCaptureSession adds an AVCaptureVideoDataOutput whose videoSettings is set to kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA). In the receiving delegate method - (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection, the alpha of the received sampleBuffer is actually 255. On macOS 13, the received sampleBuffer's alpha is correctly 0. How can I get the same behaviour as macOS 13? Is there an attribute that needs to be set?
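No explanation for the macOS 14 behaviour change, but a small diagnostic like the sketch below (a hypothetical helper, plain Core Video calls) can confirm what alpha value actually arrives in the received 32BGRA buffer on each OS version, which makes the difference easier to demonstrate in a bug report:

```swift
import CoreMedia
import CoreVideo

// Returns the alpha byte of the top-left pixel of a 32BGRA sample buffer.
func firstPixelAlpha(of sampleBuffer: CMSampleBuffer) -> UInt8? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    // BGRA layout: bytes are B, G, R, A, so index 3 is alpha.
    return base.assumingMemoryBound(to: UInt8.self)[3]
}
```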
Posted. Last updated.
Post not yet marked as solved
0 Replies
394 Views
I'm working on an MV-HEVC transcoder, based on the VTEncoderForTranscoding sample code. In Swift, the following code snippet generates a linker error on macOS 14.0 and 14.1.

let err = VTCompressionSessionEncodeMultiImageFrame(compressionSession,
                                                    taggedBuffers: taggedBuffers,
                                                    presentationTimeStamp: pts,
                                                    duration: .invalid,
                                                    frameProperties: nil,
                                                    infoFlagsOut: nil) { (status: OSStatus, infoFlags: VTEncodeInfoFlags, sbuf: CMSampleBuffer?) -> Void in
    outputHandler(status, infoFlags, sbuf, thisFrameNumber)
}

Error:

ld: Undefined symbols: VideoToolbox.VTCompressionSessionEncodeMultiImageFrame(_: __C.VTCompressionSessionRef, taggedBuffers: [CoreMedia.CMTaggedBuffer], presentationTimeStamp: __C.CMTime, duration: __C.CMTime, frameProperties: __C.CFDictionaryRef?, infoFlagsOut: Swift.UnsafeMutablePointer<__C.VTEncodeInfoFlags>?, outputHandler: (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?) -> ()) -> Swift.Int32, referenced from: (3) suspend resume partial function for VTEncoderForTranscoding_Swift.(compressFrames in _FE7277D5F28D8DABDFC10EA0164D825D)(from: VTEncoderForTranscoding_Swift.VideoSource, options: VTEncoderForTranscoding_Swift.Options, expectedFrameRate: Swift.Float, outputHandler: @Sendable (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?, Swift.Int) -> ()) async throws -> () in VTEncoderForTranscoding.o

Using VTCompressionSessionEncodeMultiImageFrameWithOutputHandler in Objective-C doesn't trigger a linker error. Does anybody know how to get it to work in Swift?
Posted by map. Last updated.
Post not yet marked as solved
1 Reply
570 Views
In case when I have locked white balance and custom exposure, on black background when I introduce new object in view, both objects become brighter. How to turn off this feature or compensate for that change in a performant way? This is how I configure the session, note that Im setting a video format which supports at least 180 fps which is required for my needs. private func configureSession() { self.sessionQueue.async { [self] in //MARK: Init session guard let session = try? validSession() else { fatalError("Session is unexpectedly nil") } session.beginConfiguration() guard let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for:AVMediaType.video, position: .back) else { fatalError("Video Device is unexpectedly nil") } guard let videoDeviceInput: AVCaptureDeviceInput = try? AVCaptureDeviceInput(device:device) else { fatalError("videoDeviceInput is unexpectedly nil") } guard session.canAddInput(videoDeviceInput) else { fatalError("videoDeviceInput could not be added") } session.addInput(videoDeviceInput) self.videoDeviceInput = videoDeviceInput self.videoDevice = device //MARK: Connect session IO let dataOutput = AVCaptureVideoDataOutput() dataOutput.setSampleBufferDelegate(self, queue: sampleBufferQueue) session.automaticallyConfiguresCaptureDeviceForWideColor = false guard session.canAddOutput(dataOutput) else { fatalError("Could not add video data output") } session.addOutput(dataOutput) dataOutput.alwaysDiscardsLateVideoFrames = true dataOutput.videoSettings = [ String(kCVPixelBufferPixelFormatTypeKey): pixelFormat.rawValue ] if let captureConnection = dataOutput.connection(with: .video) { captureConnection.preferredVideoStabilizationMode = .off captureConnection.isEnabled = true } else { fatalError("No Capture Connection for the session") } //MARK: Configure AVCaptureDevice do { try device.lockForConfiguration() } catch { fatalError(error.localizedDescription) } if let format = format(fps: fps, minWidth: minWidth, format: pixelFormat) { // 180FPS, YUV layout device.activeFormat = format device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps)) device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps)) } else { fatalError("Compatible format not found") } device.activeColorSpace = .sRGB device.isGlobalToneMappingEnabled = false device.automaticallyAdjustsVideoHDREnabled = false device.automaticallyAdjustsFaceDrivenAutoExposureEnabled = false device.isFaceDrivenAutoExposureEnabled = false device.setFocusModeLocked(lensPosition: 0.4) device.isSubjectAreaChangeMonitoringEnabled = false device.exposureMode = AVCaptureDevice.ExposureMode.custom let exp = CMTime(value: Int64(40), timescale: 100_000) let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO) device.setExposureModeCustom(duration: exp, iso: isoValue) { t in } device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) { (timestamp:CMTime) -> Void in } device.unlockForConfiguration() session.commitConfiguration() onAVSessionReady() } } This post (https://stackoverflow.com/questions/34511431/ios-avfoundation-different-photo-brightness-with-the-same-manual-exposure-set) suggests that the effect can be mitigated by settings camera exposure to .locked right after setting device.setExposureModeCustom(). This works properly only if used with async api and still does not influence the effect. 
Async approach: private func onAVSessionReady() { guard let device = device() else { fatalError("Device is unexpectedly nil") } guard let sesh = try? validSession() else { fatalError("Device is unexpectedly nil") } MCamSession.shared.activeFormat = device.activeFormat MCamSession.shared.currentDevice = device self.observer = SPSDeviceKVO(device: device, session: sesh) self.start() Task { await lockCamera(device) } } private func lockCamera(_ device: AVCaptureDevice) async { do { try device.lockForConfiguration() } catch { fatalError(error.localizedDescription) } _ = await device.setFocusModeLocked(lensPosition: 0.4) let exp = CMTime(value: Int64(40), timescale: 100_000) let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO) _ = await device.setExposureModeCustom(duration: exp, iso: isoValue) _ = await device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) device.exposureMode = AVCaptureDevice.ExposureMode.locked device.unlockForConfiguration() } private func configureSession() { // same session init as before ... onAVSessionReady() }
Posted by linkov. Last updated.
Post not yet marked as solved
0 Replies
481 Views
This is verified to be a framework bug (occurs on Mac Catalyst but not iOS or iPadOS), and it seems the culprit is AVVideoCompositionCoreAnimationTool? /// Exports a video with the target animating. func exportVideo() { let destinationURL = createExportFileURL(from: Date()) guard let videoURL = Bundle.main.url(forResource: "black_video", withExtension: "mp4") else { delegate?.exporterDidFailExporting(exporter: self) print("Can't find video") return } // Initialize the video asset let asset = AVURLAsset(url: videoURL, options: [AVURLAssetPreferPreciseDurationAndTimingKey: true]) guard let assetVideoTrack: AVAssetTrack = asset.tracks(withMediaType: AVMediaType.video).first, let assetAudioTrack: AVAssetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else { return } let composition = AVMutableComposition() guard let videoCompTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)), let audioCompTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return } videoCompTrack.preferredTransform = assetVideoTrack.preferredTransform // Get the duration let videoDuration = asset.duration.seconds // Get the video rect let videoSize = assetVideoTrack.naturalSize.applying(assetVideoTrack.preferredTransform) let videoRect = CGRect(origin: .zero, size: videoSize) // Initialize the target layers and animations animationLayers = TargetView.initTargetViewAndAnimations(atPoint: CGPoint(x: videoRect.midX, y: videoRect.midY), atSecondsIntoVideo: 2, videoRect: videoRect) // Set the playback speed let duration = CMTime(seconds: videoDuration, preferredTimescale: CMTimeScale(600)) let appliedRange = CMTimeRange(start: .zero, end: duration) videoCompTrack.scaleTimeRange(appliedRange, toDuration: duration) audioCompTrack.scaleTimeRange(appliedRange, toDuration: duration) // Create the video layer. let videolayer = CALayer() videolayer.frame = CGRect(origin: .zero, size: videoSize) // Create the parent layer. let parentlayer = CALayer() parentlayer.frame = CGRect(origin: .zero, size: videoSize) parentlayer.addSublayer(videolayer) let times = timesForEvent(startTime: 0.1, endTime: duration.seconds - 0.01) let timeRangeForCurrentSlice = times.timeRange // Insert the relevant video track segment do { try videoCompTrack.insertTimeRange(timeRangeForCurrentSlice, of: assetVideoTrack, at: .zero) try audioCompTrack.insertTimeRange(timeRangeForCurrentSlice, of: assetAudioTrack, at: .zero) } catch let compError { print("TrimVideo: error during composition: \(compError)") delegate?.exporterDidFailExporting(exporter: self) return } // Add all the non-nil animation layers to be exported. for layer in animationLayers.compactMap({ $0 }) { parentlayer.addSublayer(layer) } // Configure the layer composition. let layerComposition = AVMutableVideoComposition() layerComposition.frameDuration = CMTimeMake(value: 1, timescale: 30) layerComposition.renderSize = videoSize layerComposition.animationTool = AVVideoCompositionCoreAnimationTool( postProcessingAsVideoLayer: videolayer, in: parentlayer) let instructions = initVideoCompositionInstructions( videoCompositionTrack: videoCompTrack, assetVideoTrack: assetVideoTrack) layerComposition.instructions = instructions // Creates the export session and exports the video asynchronously. 
guard let exportSession = initExportSession( composition: composition, destinationURL: destinationURL, layerComposition: layerComposition) else { delegate?.exporterDidFailExporting(exporter: self) return } // Execute the exporting exportSession.exportAsynchronously(completionHandler: { if let error = exportSession.error { print("Export error: \(error), \(error.localizedDescription)") } self.delegate?.exporterDidFinishExporting(exporter: self, with: destinationURL) }) } Not sure how to implement a custom compositor that performs the same animations as this reproducible case: class AnimationCreator: NSObject { // MARK: - Target Animations /// Creates the target animations. static func addAnimationsToTargetView(_ targetView: TargetView, startTime: Double) { // Add the appearance animation AnimationCreator.addAppearanceAnimation(on: targetView, defaultBeginTime: AVCoreAnimationBeginTimeAtZero, startTime: startTime) // Add the pulse animation. AnimationCreator.addTargetPulseAnimation(on: targetView, defaultBeginTime: AVCoreAnimationBeginTimeAtZero, startTime: startTime) } /// Adds the appearance animation to the target private static func addAppearanceAnimation(on targetView: TargetView, defaultBeginTime: Double = 0, startTime: Double = 0) { // Starts the target transparent and then turns it opaque at the specified time targetView.targetImageView.layer.opacity = 0 let appear = CABasicAnimation(keyPath: "opacity") appear.duration = .greatestFiniteMagnitude // stay on screen forever appear.fromValue = 1.0 // Opaque appear.toValue = 1.0 // Opaque appear.beginTime = defaultBeginTime + startTime targetView.targetImageView.layer.add(appear, forKey: "appear") } /// Adds a pulsing animation to the target. private static func addTargetPulseAnimation(on targetView: TargetView, defaultBeginTime: Double = 0, startTime: Double = 0) { let targetPulse = CABasicAnimation(keyPath: "transform.scale") targetPulse.fromValue = 1 // Regular size targetPulse.toValue = 1.1 // Slightly larger size targetPulse.duration = 0.4 targetPulse.beginTime = defaultBeginTime + startTime targetPulse.autoreverses = true targetPulse.repeatCount = .greatestFiniteMagnitude targetView.targetImageView.layer.add(targetPulse, forKey: "pulse_animation") } }
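For the custom-compositor route, the starting point is an AVVideoCompositing class assigned to the video composition's customVideoCompositorClass. The sketch below is only a pass-through skeleton: it hands each source frame straight back, and the animated overlay would still have to be rendered into a buffer from renderContext.newPixelBuffer() yourself (for example with Core Image), which is the part not shown here:

```swift
import AVFoundation
import CoreVideo

final class PassThroughCompositor: NSObject, AVVideoCompositing {
    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let frame = request.sourceFrame(byTrackID: trackID) else {
            request.finish(with: NSError(domain: "PassThroughCompositor", code: -1))
            return
        }
        // A real compositor would draw the overlay/animation into
        // request.renderContext.newPixelBuffer() here; this skeleton just
        // returns the source frame untouched.
        request.finish(withComposedVideoFrame: frame)
    }
}

// Usage, replacing the AVVideoCompositionCoreAnimationTool line:
// layerComposition.customVideoCompositorClass = PassThroughCompositor.self
```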
Posted. Last updated.
Post not yet marked as solved
2 Replies
468 Views
It seems AVAssetWriter is rejecting CVPixelBuffers with error -12743 when appending NSData for kCVImageBufferAmbientViewingEnvironmentKey for HDR videos. Here is my code: var ambientViewingEnvironment:CMFormatDescription.Extensions.Value? var ambientViewingEnvironmentData:NSData? ambientViewingEnvironment = sampleBuffer.formatDescription?.extensions[.ambientViewingEnvironment] let plist = ambientViewingEnvironment?.propertyListRepresentation ambientViewingEnvironmentData = plist as? NSData And then attaching this data, CVBufferSetAttachment(dstPixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, ambientViewingEnvironmentData! as CFData, .shouldPropagate) No matter what I do, including copying the attachment from sourcePixelBuffer to destinationPixelBuffer as it is, the error remains! var attachmentMode:CVAttachmentMode = .shouldPropagate let attachment = CVBufferCopyAttachment(sourcePixelBuffer!, kCVImageBufferAmbientViewingEnvironmentKey, &attachmentMode) NSLog("Attachment \(attachment!), mode \(attachmentMode)") CVBufferSetAttachment(dstPixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, attachment!, attachmentMode) I need to know if there is anything wrong in the way metadata is copied.
Posted. Last updated.
Post not yet marked as solved
1 Reply
758 Views
Hello 👋 I try to implement picture in picture on iOS with webRTC but I have some issue. I started by following this Apple article : https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls At least when my app is in background, the picture in picture view appear, but nothing is display within it : So by searching on internet I found this post in Stackoverflow (https://stackoverflow.com/questions/71419635/how-to-add-picture-in-picture-pip-for-webrtc-video-calls-in-ios-swift), who says : It's interesting but unfortunately, I don't know what I have to do... Here is my PictureInPictureManager : final class VideoBufferView: UIView { override class var layerClass: AnyClass { AVSampleBufferDisplayLayer.self } var sampleBufferDisplayLayer: AVSampleBufferDisplayLayer { layer as! AVSampleBufferDisplayLayer } } final class PictureInPictureManager: NSObject { static let shared: PictureInPictureManager = .init() private override init() { } private var pipController: AVPictureInPictureController? private var bufferView: VideoBufferView = .init() func configure(for videoView: UIView) { if AVPictureInPictureController.isPictureInPictureSupported() { let bufferView: VideoBufferView = .init() let pipVideoCallViewController: AVPictureInPictureVideoCallViewController = .init() pipVideoCallViewController.preferredContentSize = CGSize(width: 108, height: 192) pipVideoCallViewController.view.addSubview(bufferView) let pipContentSource: AVPictureInPictureController.ContentSource = .init( activeVideoCallSourceView: videoView, contentViewController: pipVideoCallViewController ) pipController = .init(contentSource: pipContentSource) pipController?.canStartPictureInPictureAutomaticallyFromInline = true pipController?.delegate = self } else { print("❌ PIP not supported...") } } } With this code, the picture in picture view appear empty. I read multiple article who talk about using the buffer but I'm not sure how to do it with webRTC... I tried by adding this function to my PictureInPictureManager : func updateBuffer(with pixelBuffer: CVPixelBuffer) { if let sampleBuffer = createSampleBufferFrom(pixelBuffer: pixelBuffer) { bufferView.sampleBufferDisplayLayer.enqueue(sampleBuffer) } else { print("❌ Sample buffer error...") } } private func createSampleBufferFrom(pixelBuffer: CVPixelBuffer) -> CMSampleBuffer? { var presentationTime = CMSampleTimingInfo() // Create a format description for the pixel buffer var formatDescription: CMVideoFormatDescription? let formatDescriptionError = CMVideoFormatDescriptionCreateForImageBuffer( allocator: kCFAllocatorDefault, imageBuffer: pixelBuffer, formatDescriptionOut: &formatDescription ) guard formatDescriptionError == noErr else { print("❌ Error creating format description: \(formatDescriptionError)") return nil } // Create a sample buffer var sampleBuffer: CMSampleBuffer? let sampleBufferError = CMSampleBufferCreateReadyWithImageBuffer( allocator: kCFAllocatorDefault, imageBuffer: pixelBuffer, formatDescription: formatDescription!, sampleTiming: &presentationTime, sampleBufferOut: &sampleBuffer ) guard sampleBufferError == noErr else { print("❌ Error creating sample buffer: \(sampleBufferError)") return nil } return sampleBuffer } but by doing that, I get this error message : Any help is welcome ! 🙏 Thanks, Alexandre
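Two things that might be worth a look (neither confirmed): configure(for:) creates a local VideoBufferView and adds that to the AVPictureInPictureVideoCallViewController, while updateBuffer(with:) enqueues onto the stored bufferView, so the layer receiving frames may never be the one on screen; and the CMSampleTimingInfo is left at its defaults, so the display layer has no timestamp. A sketch of stamping the buffer with the host time and marking it for immediate display (standard Core Media calls, nothing WebRTC-specific):

```swift
import CoreMedia
import CoreVideo

func makeDisplayableSampleBuffer(from pixelBuffer: CVPixelBuffer) -> CMSampleBuffer? {
    var formatDescription: CMVideoFormatDescription?
    guard CMVideoFormatDescriptionCreateForImageBuffer(
            allocator: kCFAllocatorDefault,
            imageBuffer: pixelBuffer,
            formatDescriptionOut: &formatDescription) == noErr,
          let format = formatDescription else { return nil }

    // Stamp the frame with "now" so AVSampleBufferDisplayLayer has a timeline.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: CMClockGetTime(CMClockGetHostTimeClock()),
                                    decodeTimeStamp: .invalid)

    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreateReadyWithImageBuffer(
            allocator: kCFAllocatorDefault,
            imageBuffer: pixelBuffer,
            formatDescription: format,
            sampleTiming: &timing,
            sampleBufferOut: &sampleBuffer) == noErr,
          let buffer = sampleBuffer else { return nil }

    // Mark the sample so the layer displays it as soon as it is enqueued.
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(buffer, createIfNecessary: true),
       CFArrayGetCount(attachments) > 0 {
        let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
        CFDictionarySetValue(dict,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }
    return buffer
}
```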
Posted. Last updated.
Post not yet marked as solved
0 Replies
475 Views
Hi, our app build suddenly got rejected: ITMS-90338: Non-public API usage - The app references non-public symbols in Frameworks/BitmovinPlayer.framework/BitmovinPlayer: _CMTimebaseCreateWithMasterClock. If method names in your source code match the private Apple APIs listed above, altering your method names will help prevent this app from being flagged in future submissions. In addition, note that one or more of the above APIs may be located in a static library that was included with your app. If so, they must be removed. The developers of BitmovinPlayer confirmed that they don't use _CMTimebaseCreateWithMasterClock directly; they use the public function CMTimebaseCreateWithMasterClock. What new requirement appeared on Apple's side yesterday? Our application was never rejected for this reason before.
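For what it's worth, CMTimebaseCreateWithMasterClock is a public but deprecated Core Media function (the leading underscore in the flagged symbol is just its C linkage name), and its documented replacement is CMTimebaseCreateWithSourceClock. Whether migrating avoids the automated flag is not certain, but the change is mechanical; a sketch:

```swift
import CoreMedia

// CMTimebaseCreateWithSourceClock replaces the deprecated
// CMTimebaseCreateWithMasterClock (same behaviour, renamed parameter).
func makeTimebase() -> CMTimebase? {
    var timebase: CMTimebase?
    let status = CMTimebaseCreateWithSourceClock(allocator: kCFAllocatorDefault,
                                                 sourceClock: CMClockGetHostTimeClock(),
                                                 timebaseOut: &timebase)
    return status == noErr ? timebase : nil
}
```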
Posted. Last updated.
Post marked as solved
2 Replies
729 Views
I'm trying the sample app from here: https://developer.apple.com/documentation/vision/detecting_moving_objects_in_a_video I made a tweak to read the video from library instead of document picker: var recordedVideoURL: AVAsset? @IBAction func uploadVideoForAnalysis(_ sender: Any) { var configuration = PHPickerConfiguration() configuration.filter = .videos configuration.selectionLimit = 1 let picker = PHPickerViewController(configuration: configuration) picker.delegate = self present(picker, animated: true, completion: nil) } The delegation method: extension HomeViewController: PHPickerViewControllerDelegate { func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) { guard let selectedResult = results.first else { print("assetIdentifier: null") dismiss(animated: true, completion: nil) return } selectedResult.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { [weak self] url, error in guard error == nil, let url = url else { print(error?.localizedDescription ?? "Failed to load image") return } let asset = AVAsset(url: url) self?.recordedVideoURL = asset DispatchQueue.main.async { [weak self] in self?.dismiss(animated: true) { // dismiss the picker self?.performSegue(withIdentifier: ContentAnalysisViewController.segueDestinationId, sender: self!) self?.recordedVideoURL = nil } } } } } everything else is pretty much the same. Then, in the camera controller, it raised an error: "The requested URL was not found on this server." I put a debug break point and it show the error was from the line init AssetReader: let reader = AVAssetReader(asset: asset) func startReadingAsset(_ asset: AVAsset, reader: AVAssetReader? = nil) { videoRenderView = VideoRenderView(frame: view.bounds) setupVideoOutputView(videoRenderView) videoFileReadingQueue.async { [weak self] in do { guard let track = asset.tracks(withMediaType: .video).first else { throw AppError.videoReadingError(reason: "No video tracks found in the asset.") } let reader = AVAssetReader(asset: asset) let settings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings) if reader.canAdd(output) { reader.add(output) } else { throw AppError.videoReadingError(reason: "Couldn't add a track to the asset reader.") } if !reader.startReading() { throw AppError.videoReadingError(reason: "Couldn't start the asset reader.") } ... I tried to create a reader directly in the asset creation block, it worked: selectedResult.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { [weak self] url, error in guard error == nil, let url = url else { print(error?.localizedDescription ?? "Failed to load image") return } let asset = AVAsset(url: url) do { let reader = try AVAssetReader(asset: asset) self?.assetReader = reader print("reader: \(reader)") } catch let e { print("No reader: \(e)") } ... but if I just move it a little bit to the Dispatch.main.async block, it printed out "No reader: The requested URL was not found on this server. Therefore, I have to keep an instance of the reader and pass it to the cameraVC. Can someone please explain why is this happening? What's the logic behind this?
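A likely explanation (worth verifying): the URL passed to loadFileRepresentation(forTypeIdentifier:) points at a temporary copy that the item provider deletes once the completion handler returns, so an AVAssetReader created inside the handler still finds the file, while one created later (after the async hop to the main queue) does not. Copying the file to a location you own before leaving the handler avoids having to keep the reader around; a sketch:

```swift
import AVFoundation
import PhotosUI
import UniformTypeIdentifiers

/// Copies the picked movie out of the item provider's temporary location and
/// returns an AVAsset that stays valid after the completion handler returns.
func loadMovieAsset(from result: PHPickerResult,
                    completion: @escaping (AVAsset?) -> Void) {
    result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
        guard let url = url, error == nil else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // The provider removes `url` when this handler returns, so copy it first.
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(url.pathExtension)
        do {
            try FileManager.default.copyItem(at: url, to: localURL)
            DispatchQueue.main.async { completion(AVAsset(url: localURL)) }
        } catch {
            DispatchQueue.main.async { completion(nil) }
        }
    }
}
```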
Posted. Last updated.
Post marked as solved
2 Replies
978 Views
Hello, I am attempting to simultaneously stream video to a remote client and run inference with a neural network (on the same video frame) locally. I have done this on other platforms, using GStreamer on Linux and libstreaming on Android for compression and packetization. I've attempted this now on iPhone, using ffmpeg to stream and a capture session to feed the neural network, but I run into the problem of multiple camera access. Most of the posts I see are concerned with receiving RTP streams in iOS, but I need to do the opposite. As I am new to iOS and Swift, I was hoping someone could suggest a method for RTP packetization. Any library recommendations or example code for something similar? Best,
Posted by km2023. Last updated.