Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Post

Replies

Boosts

Views

Activity

Video Quality selection in HLS streams
Hello there, our team has been asked to add the ability to manually select the video quality. I know that HLS is an adaptive stream and that, depending on network conditions, it chooses the best quality for the current situation. I tried some settings with preferredMaximumResolution and preferredPeakBitRate, but neither took effect once the user was already watching the stream. I also tried replacing the current AVPlayerItem with one using the new configuration, but that only allowed me to downgrade the quality of the video. When I wanted to force it to 4K, for example, the player did not switch to that track even when I set very high values for both parameters mentioned above. My question is whether there is any method that would allow me to force a specific quality from the manifest file. I already have an extraction step that parses the manifest and provides all the available information, but I still couldn't figure out how to make the player play a specific stream with my desired quality from the available playlists.
4
0
6.5k
Mar ’21
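There is no public AVFoundation API that forces a specific HLS variant; a common workaround for the question above is to point the player directly at the media playlist URL you already extract from the master manifest, accepting that you lose ABR for that item. A minimal sketch, assuming variantURL comes from your own manifest parser:

    import AVFoundation

    // Hypothetical: variantURL is the URL of the 4K media playlist parsed out of the master manifest.
    func switchToVariant(_ variantURL: URL, on player: AVPlayer) {
        let keepTime = player.currentTime()              // remember the playback position
        let item = AVPlayerItem(url: variantURL)         // play the single variant directly
        player.replaceCurrentItem(with: item)
        item.seek(to: keepTime, completionHandler: nil)  // resume roughly where we were
    }

    // Alternatively, to bias (not force) ABR while staying on the master playlist:
    func capVariant(of item: AVPlayerItem, bitsPerSecond: Double) {
        item.preferredPeakBitRate = bitsPerSecond        // upper bound only; it cannot force an upgrade
    }

Playing a single media playlist can drop alternate audio and subtitle renditions that are only referenced from the master playlist, so preferredPeakBitRate remains the safer option when you only need to cap, rather than raise, the quality.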
Grabbing arbitrary images in FxPlug 4 API
Hi, we're updating our plugins to offer native arm64 support in Final Cut Pro X and are therefore porting our existing code over to the FxPlug 4 API. Here's where we're running into issues: our plugin features a custom parameter (essentially a push button) that opens a configuration window on the UI thread. This window is supposed to display the frame at the current timeline cursor position. Since the FxTemporalImageAPI, which we had been using for that purpose, has been deprecated, how could something like that be implemented? We tried adding a hidden int slider parameter that gets incremented once our selector is hit, in order to force a re-render and then grab the current frame in renderDestinationImage:... . However, the render pipeline seems to be stalled until our selector has exited, so our forced-render mechanism appears to be ineffective. So how do we reliably filter out and grab the source image at the current time position in FxPlug 4? Thanks! Best, Ray
1
0
1.1k
Jun ’22
FxPlug, Titles in FCPX where to start?
Hi, I'm trying to wrap my head around FxPlug in Xcode. I already sell Final Cut Pro titles for a company. These titles were built in Motion. However, they want me to move them into an app, and I'm looking for any help on how to accomplish this. What the app should do is: allow users with an active subscription to our website to access the titles within FCPX, and deny access if they are not an active subscriber.
1
0
1.6k
Oct ’22
VideoToolbox AV1 decoding on Apple platforms
As some have clocked, kCMVideoCodecType_AV1 was added in a recent SDK release. Does anyone know if and when AV1 decode support, even if software-only, is going to be available on Apple platforms? At the moment one must decode using dav1d (which is pretty performant, to be fair), but is at least software AV1 support on existing hardware expected any time soon?
4
0
3.1k
Jan ’23
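While waiting for a system decoder, one pragmatic pattern for the question above is to probe at runtime and fall back to a bundled software decoder. A minimal sketch, assuming VTIsHardwareDecodeSupported is available on the deployment target and that decodeWithDav1d is your own dav1d wrapper:

    import VideoToolbox

    func decodeAV1Frame(/* your compressed AV1 sample */) {
        // kCMVideoCodecType_AV1 being declared in the SDK does not guarantee
        // that a decoder is actually present on the current device and OS.
        if VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1) {
            // Build a VTDecompressionSession as usual and let VideoToolbox decode.
        } else {
            // Fall back to a software path, e.g. a bundled dav1d build (hypothetical wrapper).
            // decodeWithDav1d(...)
        }
    }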
AVPlayer, AVAssetReader, AVAssetWriter - Using MXF files
Hi, I would like to read in .mxf files using AVPlayer and also AVAssetReader, and write out .mxf files using AVAssetWriter. Should this be possible? Are there any examples of how to do this? I found the VTRegisterProfessionalVideoWorkflowVideoDecoders() call, but this did not seem to help. I would be grateful for any suggestions. Regards, Tom
1
0
1.6k
Jan ’23
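MXF support in AVFoundation is limited, and on macOS the professional video workflow decoders must be registered before some pro formats will open at all, so a reasonable first step is to register them and probe the asset. A minimal sketch, assuming macOS and the async AVAsset loading API:

    import AVFoundation
    import VideoToolbox

    func probeMXF(at url: URL) async throws {
        // Register Apple's pro-video decoders (macOS only); harmless to call once early in app launch.
        VTRegisterProfessionalVideoWorkflowVideoDecoders()

        let asset = AVURLAsset(url: url)
        let (isPlayable, isReadable) = try await asset.load(.isPlayable, .isReadable)
        let videoTracks = try await asset.loadTracks(withMediaType: .video)
        print("playable: \(isPlayable), readable: \(isReadable), video tracks: \(videoTracks.count)")
        // If no tracks come back, AVAssetReader/AVAssetWriter will not handle this MXF flavour either,
        // and a third-party demuxer/muxer is likely required.
    }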
Does iOS support H.265 GDR (gradual decoding refresh)?
Hello, I'm trying to play an H.265 video stream that contains no I-frames but uses GDR. No video is shown when using VTDecompressionSessionDecodeFrame; other H.265 videos work fine. In the device log I can see the following errors: AppleAVD: VADecodeFrame(): isRandomAccessSkipPicture fail, followed by An unknown error occurred (-17694). Is it possible to play such a video stream? Do I need to configure the CMVideoFormatDescription in some way to enable it? Thanks for your help.
2
0
907
May ’23
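For reference, the standard way to configure the format description for an H.265 elementary stream is to build it from the VPS/SPS/PPS parameter sets; there is no documented extension specifically for GDR, so if this setup still fails the limitation is likely in the platform decoder rather than in your configuration. A minimal sketch, assuming the three parameter sets are already available as Data:

    import CoreMedia
    import VideoToolbox

    func makeHEVCFormatDescription(vps: Data, sps: Data, pps: Data) -> CMVideoFormatDescription? {
        var formatDesc: CMVideoFormatDescription?
        var status: OSStatus = noErr
        let vpsBytes = [UInt8](vps), spsBytes = [UInt8](sps), ppsBytes = [UInt8](pps)
        vpsBytes.withUnsafeBufferPointer { vpsPtr in
            spsBytes.withUnsafeBufferPointer { spsPtr in
                ppsBytes.withUnsafeBufferPointer { ppsPtr in
                    let pointers: [UnsafePointer<UInt8>] = [vpsPtr.baseAddress!, spsPtr.baseAddress!, ppsPtr.baseAddress!]
                    let sizes = [vps.count, sps.count, pps.count]
                    status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(
                        allocator: kCFAllocatorDefault,
                        parameterSetCount: 3,
                        parameterSetPointers: pointers,
                        parameterSetSizes: sizes,
                        nalUnitHeaderLength: 4,      // 4-byte length prefix (HVCC-style samples)
                        extensions: nil,
                        formatDescriptionOut: &formatDesc)
                }
            }
        }
        return status == noErr ? formatDesc : nil
    }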
AVPlayer queries redirecting URL twice
Hello, consider this very simple example:

guard let url = URL(string: "some_url") else { return }
let player = AVPlayer(url: url)
let controller = AVPlayerViewController()
controller.player = player
present(controller, animated: true) { player.play() }

When the video URL uses redirection and returns 302 when queried, AVPlayer's internal implementation queries it twice, which we verified through a proxy. I'm not sure whether I can provide the actual links, hence the blurred screenshot. It can be seen, though, that the redirecting URL, which receives a 302 response, is queried twice, and only after the second attempt does the actual redirection take place. This behavior is problematic for our backend services and we need to remediate it somehow. Do you have any idea how to address this problem?
1
5
642
Jun ’23
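If the duplicate request to the 302 endpoint is what hurts the backend, one hedged workaround is to resolve the redirect once yourself and hand AVPlayer the final URL, so the player never queries the redirecting endpoint. A minimal sketch using URLSession, which follows redirects by default:

    import AVFoundation

    func playResolvingRedirect(_ url: URL, with player: AVPlayer) async throws {
        // A lightweight request whose sole purpose is to discover the final URL after redirection.
        var request = URLRequest(url: url)
        request.httpMethod = "HEAD"
        let (_, response) = try await URLSession.shared.data(for: request)
        let finalURL = response.url ?? url        // response.url is the post-redirect URL

        player.replaceCurrentItem(with: AVPlayerItem(url: finalURL))
        player.play()
    }

This adds one HEAD request of your own and assumes the resolved URL stays valid for the whole playback session; short-lived signed URLs may not tolerate that.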
App rejected because "Non-public API usage _CMTimebaseCreateWithMasterClock"
Hi, our app build was suddenly rejected: TMS-90338: Non-public API usage - The app references non-public symbols in Frameworks/BitmovinPlayer.framework/BitmovinPlayer: _CMTimebaseCreateWithMasterClock. If method names in your source code match the private Apple APIs listed above, altering your method names will help prevent this app from being flagged in future submissions. In addition, note that one or more of the above APIs may be located in a static library that was included with your app. If so, they must be removed. The developers of BitmovinPlayer confirmed that they don't use _CMTimebaseCreateWithMasterClock directly; they use the public function CMTimebaseCreateWithMasterClock. What new requirement on Apple's side appeared yesterday? Our application has never been rejected for this reason before.
0
0
573
Jul ’23
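For context, CMTimebaseCreateWithMasterClock is a deprecated public function whose replacement is CMTimebaseCreateWithSourceClock; code you control can migrate to the new name, while the symbol inside BitmovinPlayer can only be addressed by a rebuilt framework from the vendor. A minimal sketch of the replacement call:

    import CoreMedia

    func makeTimebase() -> CMTimebase? {
        var timebase: CMTimebase?
        // Replacement for the deprecated CMTimebaseCreateWithMasterClock.
        let status = CMTimebaseCreateWithSourceClock(allocator: kCFAllocatorDefault,
                                                     sourceClock: CMClockGetHostTimeClock(),
                                                     timebaseOut: &timebase)
        return status == noErr ? timebase : nil
    }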
Video's sound become so low then to normal when recording using AVCaptureSession
Hi everyone, in my app I use AVCaptureSession to record video. I add a videoDeviceInput and an audioDeviceInput, and as output I use AVCaptureMovieFileOutput. On some iPhones, specifically models after the iPhone X (iPhone 11, 12, 13, 14), the result has poor audio quality: the sound is very low and then, after a few seconds (around 7), becomes normal. I have tried setting the audio settings on the movie file output, but it still happens. Does anyone know how to solve this issue?
0
0
355
Jul ’23
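One thing worth ruling out is the audio session configuration: if the session ends up in a voice-processing mode, input gain can be ducked and then ramp up, which matches the quiet-then-normal symptom. A minimal sketch, assuming you also set automaticallyConfiguresApplicationAudioSession to false on the capture session so this configuration is not overridden:

    import AVFoundation

    func configureAudioSessionForRecording(captureSession: AVCaptureSession) throws {
        // Keep AVCaptureSession from swapping in its own audio-session configuration.
        captureSession.automaticallyConfiguresApplicationAudioSession = false

        let audioSession = AVAudioSession.sharedInstance()
        // .videoRecording avoids the voice-processing paths used by chat-style modes.
        try audioSession.setCategory(.playAndRecord, mode: .videoRecording, options: [.allowBluetooth])
        try audioSession.setActive(true)
    }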
Seeking to specific frame index with AVAssetReader
We're using code based on AVAssetReader to get decoded video frames through AVFoundation. The decoding part per se works great but the seeking just doesn't work reliably. For a given H.264 file (in the MOV container) the decoded frames have presentation time stamps that sometimes don't correspond to the actual decoded frames. So for example: the decoded frame's PTS is 2002/24000 but the frame's actual PTS is 6006/24000. The frames have burnt-in timecode so we can clearly tell. Here is our code: - (BOOL) setupAssetReaderForFrameIndex:(int32_t) frameIndex { NSError* theError = nil; NSDictionary* assetOptions = @{ AVURLAssetPreferPreciseDurationAndTimingKey: @YES }; self.movieAsset = [[AVURLAsset alloc] initWithURL:self.filePat options:assetOptions]; if (self.assetReader) [self.assetReader cancelReading]; self.assetReader = [AVAssetReader assetReaderWithAsset:self.movieAsset error:&theError]; NSArray<AVAssetTrack*>* videoTracks = [self.movieAsset tracksWithMediaType:AVMediaTypeVideo]; if ([videoTracks count] == 0) return NO; self.videoTrack = [videoTracks objectAtIndex:0]; [self retrieveMetadata]; NSDictionary* outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: @(self.cvPixelFormat) }; self.videoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:self.videoTrack outputSettings:outputSettings]; self.videoTrackOutput.alwaysCopiesSampleData = NO; [self.assetReader addOutput:self.videoTrackOutput]; CMTimeScale timeScale = self.videoTrack.naturalTimeScale; CMTimeValue frameDuration = (CMTimeValue)round((float)timeScale/self.videoTrack.nominalFrameRate); CMTimeValue startTimeValue = (CMTimeValue)frameIndex * frameDuration; CMTimeRange timeRange = CMTimeRangeMake(CMTimeMake(startTimeValue, timeScale), kCMTimePositiveInfinity); self.assetReader.timeRange = timeRange; [self.assetReader startReading]; return YES; } This is then followed by this code to actually decode the frame: CMSampleBufferRef sampleBuffer = [self.videoTrackOutput copyNextSampleBuffer]; CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); if (!imageBuffer) { CMSampleBufferInvalidate(sampleBuffer); AVAssetReaderStatus theStatus = self.assetReader.status; NSError* theError = self.assetReader.error; NSLog(@"[AVAssetVideoTrackOutput copyNextSampleBuffer] didn't deliver a frame - %@", theError); return false; } Is this method by itself the correct way of seeking and if not: what is the correct way? Thanks!
0
0
542
Jul ’23
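One approach that tends to make this deterministic is to treat the reader's timeRange as a coarse seek and then discard decoded samples until the presentation timestamp reaches the frame you actually want, since the reader can only start delivering from a point the decoder can enter. A minimal Swift sketch of that idea, assuming a constant frame rate:

    import AVFoundation
    import CoreMedia

    // Returns the first sample whose PTS is at or after the requested frame index.
    func copyFrame(at frameIndex: Int, from output: AVAssetReaderTrackOutput,
                   frameDuration: CMTime) -> CMSampleBuffer? {
        let target = CMTimeMultiply(frameDuration, multiplier: Int32(frameIndex))
        while let sample = output.copyNextSampleBuffer() {
            let pts = CMSampleBufferGetPresentationTimeStamp(sample)
            if CMTimeCompare(pts, target) >= 0 {
                return sample      // this is the requested frame (or the first one after it)
            }
            // Earlier frame decoded only so the codec could reach the target; drop it.
        }
        return nil
    }

The reader's timeRange can then start slightly before the target (for example one GOP earlier) so the decoder has a sync frame to work from.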
iOS17 beta5 Video Encode with VideoToolBox, hang or stuck after VTCompressionSessionInvalidate appears kVTInvalidSessionErr
Using VTCompressionSessionEncodeFrame for video encoding on the iOS 17 beta 5: when the app switches back to the foreground from the background, kVTInvalidSessionErr can appear, and resetting the session by calling VTCompressionSessionInvalidate in response to that error causes the encoder to hang or get stuck. Has anyone else encountered this situation, and how should it be solved?
0
0
564
Aug ’23
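kVTInvalidSessionErr generally means the session died while the app was in the background and must be discarded and rebuilt rather than reused; invalidating and recreating it from the same queue that submits frames, and not encoding while backgrounded, is the usual recovery. A minimal sketch, where makeCompressionSession() is a hypothetical factory holding your usual creation parameters:

    import CoreMedia
    import VideoToolbox

    func encode(_ pixelBuffer: CVPixelBuffer, pts: CMTime, session: inout VTCompressionSession?) {
        guard let current = session else { return }
        let status = VTCompressionSessionEncodeFrame(current,
                                                     imageBuffer: pixelBuffer,
                                                     presentationTimeStamp: pts,
                                                     duration: .invalid,
                                                     frameProperties: nil,
                                                     sourceFrameRefcon: nil,
                                                     infoFlagsOut: nil)
        if status == kVTInvalidSessionErr {
            // The old session is unusable: invalidate it, drop the reference, and build a fresh one.
            VTCompressionSessionInvalidate(current)
            session = nil
            session = makeCompressionSession()   // hypothetical factory that recreates the encoder
            // Re-submit this frame (or wait for the next one) on the new session.
        }
    }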
Black frames in resulting AVComposition.
I have multiple AVAssets with that I am trying to merge together into a single video track using AVComposition. What I'm doing is iterating over my avassets and inserting them in to a single AVCompositionTrack like so: - (AVAsset *)combineAssets { // Create a mutable composition AVMutableComposition *composition = [AVMutableComposition composition]; AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid]; AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid]; // Keep track of time offset CMTime currentOffset = kCMTimeZero; for (AVAsset *audioAsset in _audioSegments) { AVAssetTrack *audioTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject]; // Add the audio track to the composition audio track NSError *audioError; [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration) ofTrack:audioTrack atTime:currentOffset error:&audioError]; if (audioError) { NSLog(@"Error combining audio track: %@", audioError.localizedDescription); return nil; } currentOffset = CMTimeAdd(currentOffset, audioAsset.duration); } // Reset offset to do the same with videos. currentOffset = kCMTimeZero; for (AVAsset *videoAsset in _videoSegments) { { // Get the video track AVAssetTrack *videoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject]; // Add the video track to the composition video track NSError *videoError; [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration) ofTrack:videoTrack atTime:currentOffset error:&videoError]; if (videoError) { NSLog(@"Error combining audio track: %@", videoError.localizedDescription); return nil; } // Increment current offset by asset duration currentOffset = CMTimeAdd(currentOffset, videoTrack.timeRange.duration); } } return composition; } The issue is that when I export the composition using an AVExportSession I notice that there's a black frame between the merged segments in the track. In other words, if there were two 30second AVAssets merged into the composition track to create a 60 second video. You would see a black frame for a split second at the 30 second mark where the two assets combine. I don't really want to re encode the assets I just want to stitch them together. How can I fix the black frame issue?.
1
0
436
Aug ’23
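A frequent cause of the one-frame gap described above is inserting segments using asset.duration while the video track's own timeRange is slightly shorter, which leaves a sliver of empty composition (rendered black) between segments; accumulating strictly from each inserted track's timeRange avoids it. A minimal Swift sketch of that stitching approach, with no re-encoding involved:

    import AVFoundation

    func stitch(videoAssets: [AVAsset]) async throws -> AVMutableComposition {
        let composition = AVMutableComposition()
        guard let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                           preferredTrackID: kCMPersistentTrackID_Invalid) else {
            throw NSError(domain: "stitch", code: -1)
        }
        var cursor = CMTime.zero
        for asset in videoAssets {
            guard let sourceTrack = try await asset.loadTracks(withMediaType: .video).first else { continue }
            let range = try await sourceTrack.load(.timeRange)   // use the track's own range, not asset.duration
            try videoTrack.insertTimeRange(range, of: sourceTrack, at: cursor)
            cursor = CMTimeAdd(cursor, range.duration)           // next segment starts exactly where this one ends
        }
        return composition
    }

Audio can be stitched the same way on its own track; keeping a separate cursor for it prevents small audio/video duration differences from pushing the video segments apart.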
How to prevent camera from adjusting brightness in manual mode?
In case when I have locked white balance and custom exposure, on black background when I introduce new object in view, both objects become brighter. How to turn off this feature or compensate for that change in a performant way? This is how I configure the session, note that Im setting a video format which supports at least 180 fps which is required for my needs. private func configureSession() { self.sessionQueue.async { [self] in //MARK: Init session guard let session = try? validSession() else { fatalError("Session is unexpectedly nil") } session.beginConfiguration() guard let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for:AVMediaType.video, position: .back) else { fatalError("Video Device is unexpectedly nil") } guard let videoDeviceInput: AVCaptureDeviceInput = try? AVCaptureDeviceInput(device:device) else { fatalError("videoDeviceInput is unexpectedly nil") } guard session.canAddInput(videoDeviceInput) else { fatalError("videoDeviceInput could not be added") } session.addInput(videoDeviceInput) self.videoDeviceInput = videoDeviceInput self.videoDevice = device //MARK: Connect session IO let dataOutput = AVCaptureVideoDataOutput() dataOutput.setSampleBufferDelegate(self, queue: sampleBufferQueue) session.automaticallyConfiguresCaptureDeviceForWideColor = false guard session.canAddOutput(dataOutput) else { fatalError("Could not add video data output") } session.addOutput(dataOutput) dataOutput.alwaysDiscardsLateVideoFrames = true dataOutput.videoSettings = [ String(kCVPixelBufferPixelFormatTypeKey): pixelFormat.rawValue ] if let captureConnection = dataOutput.connection(with: .video) { captureConnection.preferredVideoStabilizationMode = .off captureConnection.isEnabled = true } else { fatalError("No Capture Connection for the session") } //MARK: Configure AVCaptureDevice do { try device.lockForConfiguration() } catch { fatalError(error.localizedDescription) } if let format = format(fps: fps, minWidth: minWidth, format: pixelFormat) { // 180FPS, YUV layout device.activeFormat = format device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps)) device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps)) } else { fatalError("Compatible format not found") } device.activeColorSpace = .sRGB device.isGlobalToneMappingEnabled = false device.automaticallyAdjustsVideoHDREnabled = false device.automaticallyAdjustsFaceDrivenAutoExposureEnabled = false device.isFaceDrivenAutoExposureEnabled = false device.setFocusModeLocked(lensPosition: 0.4) device.isSubjectAreaChangeMonitoringEnabled = false device.exposureMode = AVCaptureDevice.ExposureMode.custom let exp = CMTime(value: Int64(40), timescale: 100_000) let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO) device.setExposureModeCustom(duration: exp, iso: isoValue) { t in } device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) { (timestamp:CMTime) -> Void in } device.unlockForConfiguration() session.commitConfiguration() onAVSessionReady() } } This post (https://stackoverflow.com/questions/34511431/ios-avfoundation-different-photo-brightness-with-the-same-manual-exposure-set) suggests that the effect can be mitigated by settings camera exposure to .locked right after setting device.setExposureModeCustom(). This works properly only if used with async api and still does not influence the effect. 
Async approach: private func onAVSessionReady() { guard let device = device() else { fatalError("Device is unexpectedly nil") } guard let sesh = try? validSession() else { fatalError("Device is unexpectedly nil") } MCamSession.shared.activeFormat = device.activeFormat MCamSession.shared.currentDevice = device self.observer = SPSDeviceKVO(device: device, session: sesh) self.start() Task { await lockCamera(device) } } private func lockCamera(_ device: AVCaptureDevice) async { do { try device.lockForConfiguration() } catch { fatalError(error.localizedDescription) } _ = await device.setFocusModeLocked(lensPosition: 0.4) let exp = CMTime(value: Int64(40), timescale: 100_000) let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO) _ = await device.setExposureModeCustom(duration: exp, iso: isoValue) _ = await device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) device.exposureMode = AVCaptureDevice.ExposureMode.locked device.unlockForConfiguration() } private func configureSession() { // same session init as before ... onAVSessionReady() }
1
0
713
Sep ’23
Timestamps in AVPlayer
I want to show the user the actual start and end dates of the video played on the AVPlayer time slider, instead of the video duration. I would like to show something like 09:00:00 ... 12:00:00 (indicating that the video started at 09:00:00 CET and ended at 12:00:00 CET) instead of 00:00:00 ... 02:59:59. I would appreciate any pointers in this direction.
1
1
475
Sep ’23
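If the stream carries EXT-X-PROGRAM-DATE-TIME tags, AVPlayerItem can map playback time to wall-clock time directly via currentDate(); otherwise the known start date can simply be offset by currentTime(). A small sketch of both, assuming knownStart is obtained out of band:

    import AVFoundation

    func wallClockLabel(for player: AVPlayer, knownStart: Date?) -> String? {
        let formatter = DateFormatter()
        formatter.dateFormat = "HH:mm:ss"
        formatter.timeZone = TimeZone(identifier: "Europe/Paris")   // CET, as in the example

        // Preferred: the stream itself declares program dates (EXT-X-PROGRAM-DATE-TIME).
        if let date = player.currentItem?.currentDate() {
            return formatter.string(from: date)
        }
        // Fallback: offset a known start date by the current playback position.
        if let start = knownStart {
            return formatter.string(from: start.addingTimeInterval(player.currentTime().seconds))
        }
        return nil
    }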
iOS17 VideoToolBox video encode h264 file set average bitrate not work?
I use VTCompressionSession and set the average bit rate with kVTCompressionPropertyKey_AverageBitRate and kVTCompressionPropertyKey_DataRateLimits. The code looks like this:

VTCompressionSessionRef vtSession = session;
if (vtSession == NULL) {
    vtSession = _encoderSession;
}
if (vtSession == nil) {
    return;
}
int tmp = bitrate;
int bytesTmp = tmp * 0.15;
int durationTmp = 1;
CFNumberRef bitrateRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &tmp);
CFNumberRef bytes = CFNumberCreate(NULL, kCFNumberSInt32Type, &bytesTmp);
CFNumberRef duration = CFNumberCreate(NULL, kCFNumberSInt32Type, &durationTmp);
if ([self isSupportPropertyWithSession:vtSession key:kVTCompressionPropertyKey_AverageBitRate]) {
    [self setSessionPropertyWithSession:vtSession key:kVTCompressionPropertyKey_AverageBitRate value:bitrateRef];
} else {
    NSLog(@"Video Encoder: set average bitRate error");
}
NSLog(@"Video Encoder: set bitrate bytes = %d, _bitrate = %d", bytesTmp, bitrate);
CFMutableArrayRef limit = CFArrayCreateMutable(NULL, 2, &kCFTypeArrayCallBacks);
CFArrayAppendValue(limit, bytes);
CFArrayAppendValue(limit, duration);
if ([self isSupportPropertyWithSession:vtSession key:kVTCompressionPropertyKey_DataRateLimits]) {
    OSStatus ret = VTSessionSetProperty(vtSession, kVTCompressionPropertyKey_DataRateLimits, limit);
    if (ret != noErr) {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:ret userInfo:nil];
        NSLog(@"Video Encoder: set DataRateLimits failed with %s", error.description.UTF8String);
    }
} else {
    NSLog(@"Video Encoder: set data rate limits error");
}
CFRelease(bitrateRef);
CFRelease(limit);
CFRelease(bytes);
CFRelease(duration);

This works fine on iOS 16 and below, but on iOS 17 the bitrate of the generated video file is much lower than the value I set. For example, I set a bitrate of 600k, but on iOS 17 the encoded video bitrate is lower than 150k. What went wrong?
3
2
1k
Sep ’23
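One way to narrow this down is to read the properties back after setting them, which tells you whether the values were rejected at set time or accepted but not honored by the iOS 17 encoder. A small diagnostic sketch in Swift, using the same keys as the Objective-C code above; VTSessionCopyProperty is assumed to behave as on earlier releases:

    import VideoToolbox

    func logBitrateProperties(of session: VTCompressionSession) {
        var average: CFTypeRef?
        var limits: CFTypeRef?
        VTSessionCopyProperty(session, key: kVTCompressionPropertyKey_AverageBitRate,
                              allocator: kCFAllocatorDefault, valueOut: &average)
        VTSessionCopyProperty(session, key: kVTCompressionPropertyKey_DataRateLimits,
                              allocator: kCFAllocatorDefault, valueOut: &limits)
        // If these print the values you set, the request was accepted and the
        // regression is in how strictly the iOS 17 encoder honors it.
        print("AverageBitRate:", average ?? "nil", "DataRateLimits:", limits ?? "nil")
    }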
iOS 17 & iOS 16.7 videotoolbox crash
0 VideoToolbox ___vtDecompressionSessionRemote_DecodeFrameCommon_block_invoke_2()
1 libdispatch.dylib __dispatch_client_callout()
2 libdispatch.dylib __dispatch_client_callout()
3 libdispatch.dylib __dispatch_lane_barrier_sync_invoke_and_complete()
4 VideoToolbox _vtDecompressionSessionRemote_DecodeFrameCommon()

After the release of iOS 17 and iOS 16.7, thousands of new VideoToolbox crashes like this have appeared each day. Has anyone encountered similar problems?
6
2
1.3k
Sep ’23