I am working on a macOS app that uses AVFoundation to record the screen. During a recording, if I make a window full screen, AVFoundation stops capturing screen frames (or captures them at a very slow rate). In my logs I get the following error:
Error Domain=AVFoundationErrorDomain Code=-11844
Note that there have been instances where I could not reproduce the error, but they were rare.
The screen recording sometimes resumes normally if I switch desktops or minimize the full screen window.
Has anyone run across a similar issue, or does anyone know how to fix it?
Hello,
I used the following technical note to develop an app that records a .mov file with SMPTE timecode.
https://developer.apple.com/library/archive/technotes/tn2310/_index.html
As a result, a timecode track is present within the .mov file (the other tracks are audio and video).
Unfortunately, QuickTime Player doesn't display the timecode information.
Analysis tools like mediainfo, or online services such as https://media-analyzer.pro/app, show that the timecode track has a null duration (and so no "Time code of last frame").
Example 1 of the TC track (mediainfo output):
Other
ID : 3
Type : Time code
Format : QuickTime TC
Frame rate : 60.000 FPS
Time code of first frame : 17:39:59:00
Time code, stripped : Yes
Title : Core Media Time Code
Encoded date : 2024-09-10 15:39:46 UTC
Tagged date : 2024-09-10 15:39:59 UTC
Example 2 of the timecode track (atom dump):
0000569562  Quicktime Timecode #0
00007f6b8a  'trak' Track atom #1
00007f6b92  'tkhd' Track header atom #2
size 92 (0x5C)
type 'tkhd' (hex 74 6B 68 64)
version 0
flags 15 (0xF)
creation_time 0xE30618C2, '2024-09-10 15:39:46'
modification_time 0xE30618CF, '2024-09-10 15:39:59'
track_ID 3
reserved 0
duration 0
reserved [0, 0]
In each case, the duration is reported as null even though the recording's duration is more than 20 s.
STEPS TO REPRODUCE
Use AVAssetWriter for video and audio.
Create an AVAssetWriterInput for the timecode and associate it with the video track.
Just before stopping the recording, a sample buffer containing the SMPTE timecode is generated and appended.
All tracks are marked as finished before stopping the recording with finishWritingWithCompletionHandler.
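For reference, here is a hedged Swift sketch of the step that most commonly produces a zero-duration timecode track: the single timecode sample must itself carry a duration that spans the recording, otherwise the 'tmcd' track duration stays at 0. The names timecodeInput, sessionStartTime and recordingDuration are assumptions standing in for the app's own objects; the timecode input is assumed to have been created with AVAssetWriterInput(mediaType: .timecode, outputSettings: nil, sourceFormatHint: formatDescription) and associated with the video input via addTrackAssociation(withTrackOf:type:), as in TN2310.
import AVFoundation
import CoreMedia

func appendTimecodeSample(to timecodeInput: AVAssetWriterInput,
                          sessionStartTime: CMTime,
                          recordingDuration: CMTime) {
    // 60 fps, non-drop-frame 'tmcd' format description.
    var formatDescription: CMTimeCodeFormatDescription?
    CMTimeCodeFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                      timeCodeFormatType: kCMTimeCodeFormatType_TimeCode32,
                                      frameDuration: CMTime(value: 1, timescale: 60),
                                      frameQuanta: 60,
                                      flags: 0,
                                      extensions: nil,
                                      formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return }

    // The 'tc32' sample data is a single big-endian frame number
    // (here 17:39:59:00 at 60 fps, matching the mediainfo output above).
    var frameNumber = Int32((17 * 3600 + 39 * 60 + 59) * 60).bigEndian
    var blockBuffer: CMBlockBuffer?
    CMBlockBufferCreateWithMemoryBlock(allocator: kCFAllocatorDefault,
                                       memoryBlock: nil,
                                       blockLength: MemoryLayout<Int32>.size,
                                       blockAllocator: kCFAllocatorDefault,
                                       customBlockSource: nil,
                                       offsetToData: 0,
                                       dataLength: MemoryLayout<Int32>.size,
                                       flags: 0,
                                       blockBufferOut: &blockBuffer)
    guard let blockBuffer else { return }
    CMBlockBufferReplaceDataBytes(with: &frameNumber,
                                  blockBuffer: blockBuffer,
                                  offsetIntoDestination: 0,
                                  dataLength: MemoryLayout<Int32>.size)

    // The crucial part: the sample's duration must span the whole recording.
    // Appending it with a zero or invalid duration yields a timecode track
    // whose duration is 0, which is the symptom described above.
    var timing = CMSampleTimingInfo(duration: recordingDuration,
                                    presentationTimeStamp: sessionStartTime,
                                    decodeTimeStamp: .invalid)
    var sampleSize = MemoryLayout<Int32>.size
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                         dataBuffer: blockBuffer,
                         dataReady: true,
                         makeDataReadyCallback: nil,
                         refcon: nil,
                         formatDescription: formatDescription,
                         sampleCount: 1,
                         sampleTimingEntryCount: 1,
                         sampleTimingArray: &timing,
                         sampleSizeEntryCount: 1,
                         sampleSizeArray: &sampleSize,
                         sampleBufferOut: &sampleBuffer)
    if let sampleBuffer, timecodeInput.isReadyForMoreMediaData {
        timecodeInput.append(sampleBuffer)
    }
}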
Hello all,
This is my first post on the developer forums.
I am developing an app that records the screen of my app, using AVAssetWriter and RPScreenRecorder startCapture.
Everything is working as it should in most cases. But at some seemingly random times, the generated file is only a few KB and is corrupted. There seems to be no pattern regarding the device or iOS version; it can happen on various phones and iOS versions.
The steps I have followed in order to create the file are:
Configuring the AVAssetWriter
videoAssetWriter = try? AVAssetWriter(outputURL: url!, fileType: AVFileType.mp4)
let size = UIScreen.main.bounds.size
let width = (Int(size.width / 4)) * 4
let height = (Int(size.height / 4)) * 4
let videoOutputSettings: Dictionary<String, Any> = [
AVVideoCodecKey : AVVideoCodecType.h264,
AVVideoWidthKey : width,
AVVideoHeightKey : height
]
videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
videoInput?.expectsMediaDataInRealTime = true
guard let videoInput = videoInput else { return }
if videoAssetWriter?.canAdd(videoInput) ?? false {
videoAssetWriter?.add(videoInput)
}
let audioInputsettings = [
AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
AVSampleRateKey: 12000,
AVNumberOfChannelsKey: 1,
AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputsettings)
audioInput?.expectsMediaDataInRealTime = true
guard let audioInput = audioInput else { return }
if videoAssetWriter?.canAdd(audioInput) ?? false {
videoAssetWriter?.add(audioInput)
}
The urlForVideo function returns a URL in the documents directory, after appending and creating the needed folders. This part seems to be working as it should, since the directories are created and the video file exists in them.
Start the recording
if RPScreenRecorder.shared().isRecording { return }
RPScreenRecorder.shared().startCapture(handler: { [weak self] sample, bufferType, error in
if let error = error {
onError?(error.localizedDescription)
} else {
if (!RPScreenRecorder.shared().isMicrophoneEnabled) {
RPScreenRecorder.shared().stopCapture { error in
if let error = error { return }
}
onError?("Microphone was not enabled")
}
else {
succesCompletion?()
succesCompletion = nil
self?.processSampleBuffer(sample, with: bufferType)
}
}
}) { error in
if let error = error {
onError?(error.localizedDescription)
}
}
Process the sampleBuffers
guard CMSampleBufferDataIsReady(sampleBuffer) else { return }
DispatchQueue.main.async { [weak self] in
switch sampleBufferType {
case .video:
self?.handleVideoBaffer(sampleBuffer)
case .audioMic:
self?.add(sample: sampleBuffer, to: self?.audioInput)
default:
break
}
}
// The add function from above
fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
if writerInput?.isReadyForMoreMediaData ?? false {
writerInput?.append(sample)
}
}
// The handleVideoBaffer function from above
fileprivate func handleVideoBaffer(_ sampleBuffer: CMSampleBuffer) {
if self.videoAssetWriter?.status == AVAssetWriter.Status.unknown {
self.videoAssetWriter?.startWriting()
self.videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
} else {
if (self.videoInput?.isReadyForMoreMediaData) ?? false {
if self.videoAssetWriter?.status == AVAssetWriter.Status.writing {
self.videoInput?.append(sampleBuffer)
}
}
}
}
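As an aside, the handleVideoBaffer step above never checks whether startWriting() succeeded, and the first sample buffer that triggers startSession(atSourceTime:) is not appended. A hedged diagnostic variant (startWritingIfNeeded is a hypothetical helper using the same property names as above) that logs the writer's error as soon as writing starts may help narrow down when the corrupted files are produced:
fileprivate func startWritingIfNeeded(with sampleBuffer: CMSampleBuffer) {
    guard videoAssetWriter?.status == .unknown else { return }
    // startWriting() returns false when the writer cannot start; ignoring that
    // silently produces a tiny, unplayable file.
    if videoAssetWriter?.startWriting() == true {
        videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    } else {
        print("startWriting failed: \(String(describing: videoAssetWriter?.error))")
    }
}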
Finally, stopping the recording
func stopRecording(completion: @escaping (URL?, URL?, Error?) -> Void) {
RPScreenRecorder.shared().stopCapture { error in
if let error = error {
completion(nil, nil, error)
return
}
self.finish { videoURL, _ in
completion(videoURL, nil, nil)
}
}
}
// The finish function mentioned above
fileprivate func finish(completion: @escaping (URL?, URL?) -> Void) {
let dispatchGroup = DispatchGroup()
dispatchGroup.enter()
finishRecordVideo {
dispatchGroup.leave()
}
dispatchGroup.notify(queue: .main) {
print("Finish with url:\(String(describing: self.urlForVideo()))")
completion(self.urlForVideo(), nil)
}
}
// The finishRecordVideo mentioned above
fileprivate func finishRecordVideo(completion: @escaping ()-> Void) {
videoInput?.markAsFinished()
audioInput?.markAsFinished()
videoAssetWriter?.finishWriting {
if let writer = self.videoAssetWriter {
if writer.status == .completed {
completion()
}
else if writer.status == .failed {
// Print the error to find out what went wrong
if let error = writer.error {
print("Video asset writing failed with error: \(error.localizedDescription). Url: \(writer.outputURL.path)")
} else {
print("Video asset writing failed, but no error description available.")
}
completion()
} else {
completion()
}
}
}
}
What could be the reason for the corrupted files being generated? This issue has never happened on my own devices, so there is no way to debug it using Xcode. Also, there are no errors showing up in the logs.
Can you spot anything in the code that could cause this kind of issue? Do you have any suggestions on the problem at hand?
Thanks
Hi there,
I have some code that's been working fine for the last few versions of iOS and macOS and all the others, and now causes a runtime crash in iOS 18/macOS 15 etc.
I have an actor called Player, which is basically a big wrapper around an AVPlayer. It all gets compiled down to a framework, and my clients use it by dropping it into their video player app code. It handles everything needed for them to be able to talk to our media infrastructure, and it handles telemetry.
It has its own property called avplayer, which is an AVPlayer that gets created in init().
It has a function called load(_ avPlayerItem: AVPlayerItem), which the clients use to load a new video into the player.
The offending code (which used to work!) looks like this:
Task { @MainActor in
avplayer.replaceCurrentItem(with: avPlayerItem)
}
No warnings in Xcode. When you run it, it crashes on iOS 18 and macOS 15 with this error in the debugger:
Incorrect actor executor assumption
I thought, "Okay, well, maybe replaceCurrentItem has changed and doesn't need to be on the main actor anymore." But even if you call this outside of a Main Actor-scoped task:
avplayer.replaceCurrentItem(with: avPlayerItem)
...it still crashes the exact same way.
Does anyone have any ideas? I'm under some heavy pressure here to get this working and I don't even know where to start with this.
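One experiment, sketched below under the assumption that the assertion comes from the @MainActor closure reading the actor-isolated avplayer property directly: copy the player reference into a local while still on the actor, so the main-actor task only captures locals. The names are the ones from the post, and whether this actually avoids the iOS 18 assertion would need to be verified.
func load(_ avPlayerItem: AVPlayerItem) {
    let player = avplayer   // read the actor-isolated property while on the actor
    Task { @MainActor in
        player.replaceCurrentItem(with: avPlayerItem)
    }
}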
Big thanks in advance.
Hello,
I recently started integrating HLS downloads into my application by using AVAssetDownloadTask and AVAssetDownloadConfiguration. I took an example from the documentation as a basis, with only one small difference: the minimum target for my application is iOS 16, so I replaced urlSession(_:assetDownloadTask:willDownloadTo:) with urlSession(_:assetDownloadTask:didFinishDownloadingTo:).
I encountered the following issue: after pausing a download and resuming it later, progress reporting no longer works as expected.
Could you, please, help me with this? What are the right approaches to implementing pause and progress tracking?
Some details:
I used devices with iOS 16.0.2 and 17.6.1 for testing.
There was no code in the example that pauses the download and resumes it, so I used the suspend and resume methods to do this.
Also, I have tried to track downloading progress using two different approaches:
Using task.progress.observe(\.fractionCompleted) { ... }, which was presented in the example. In this scenario, after a pause, an observation callback will only be called once, when the download has completed, despite the fact that data is being successfully downloaded over the network.
Using urlSession(_:assetDownloadTask:didLoad:totalTimeRangesLoaded:timeRangeExpectedToLoad:) and calculating progress as totalTimeRangesLoaded.reduce(0.0) { $0 + CMTimeGetSeconds($1.timeRangeValue.duration) / CMTimeGetSeconds(timeRangeExpectedToLoad.duration) }. In this scenario, I have noticed that the result of the calculation does not always increase; sometimes there are outliers. Example of logs: 68%, 69%, 70%, 72%, 63%, 65%, 66%, 69%, 70%, 71%, 72%. Such fluctuations are most easily reproduced when I try to resume the download after a pause, but sometimes they occur spontaneously. It's important to mention that this method is marked as deprecated, perhaps for this reason.
In both cases download is successful, the problem is with progress reporting only.
The full version of the code can be found here.
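For reference, a minimal sketch of the pause/resume and KVO progress wiring being described, assuming the iOS 15+ configuration API from the Apple sample (the class and identifier names here are illustrative, not from the project):
import AVFoundation

final class Downloader: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!
    private var task: AVAssetDownloadTask?
    private var progressObservation: NSKeyValueObservation?

    func start(asset: AVURLAsset) {
        let config = URLSessionConfiguration.background(withIdentifier: "hls-download")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
        let downloadConfig = AVAssetDownloadConfiguration(asset: asset, title: "Example")
        task = session.makeAssetDownloadTask(downloadConfiguration: downloadConfig)
        observeProgress()
        task?.resume()
    }

    func pause() { task?.suspend() }

    func resume() {
        task?.resume()
        observeProgress()   // re-attach in case the old observation went quiet after the pause
    }

    private func observeProgress() {
        guard let task else { return }
        progressObservation = task.progress.observe(\.fractionCompleted) { progress, _ in
            print("Progress: \(Int(progress.fractionCompleted * 100))%")
        }
    }

    // Called when the whole asset has been downloaded (iOS 16-compatible callback).
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        print("Finished at \(location)")
    }
}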
I have a FairPlay-encrypted HLS stream and play the video in an AVPlayer. I want to generate scrubbing thumbnails using AVAssetImageGenerator.
Also, I am able to generate thumbnails for clear streams but get errors for protected content.
How can I generate thumbnails for protected content?
func getImageThumbnail(forTime: CMTime) {
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
generator.cancelAllCGImageGeneration()
generator.generateCGImagesAsynchronously(forTimes: [NSValue(time: forTime)]) { [weak self] requestedTime, image, actualTime, result, error in
if let error = error {
print("Error generate: \(error.localizedDescription)")
return
}
if let image = image {
DispatchQueue.main.async {
if let data = UIImage(cgImage: image).jpegData(compressionQuality: 1.0) {
self?.playerImg.image = UIImage(data: data)
}
}
}
}
}
- (AVPlayerViewController *)avPlayerVC {
if(!_avPlayerVC){
_avPlayerVC =[[AVPlayerViewController alloc] init];
_avPlayerVC.videoGravity = AVLayerVideoGravityResizeAspectFill;
_avPlayerVC.showsPlaybackControls = NO;
[self addSubview:_avPlayerVC.view];
[_avPlayerVC.view mas_makeConstraints:^(MASConstraintMaker *make) {
make.edges.mas_equalTo(0);
}];
[self sendSubviewToBack:_avPlayerVC.view];
}
return _avPlayerVC;
}
I add this in a cell, and the UI completely freezes. This only happens on iOS 18.
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
My project uses AVPlayer (AVPlayerViewController) to play video. The warning logs above are printed continuously during playback, and when the player is deallocated it prints the information below.
<<<< PlayerRemoteXPC >>>> remoteXPCItem_handleSetProperty signalled err=-12860 (kFigPlayerError_ParamErr) (propertyValue should be MTAudioProcessingTap) at FigPlayer_RemoteXPC.m:2760
This only happens on iOS 18, and I have no idea what it means. There is no information available about FigPlayerInterstitial or the other symbols.
After updating to iOS 18, the display automatically dims while the iPhone is on a video call with LINE, and it stays dimmed.
Basic
iPhone 11
iOS 17.5.1
Main Thread
libsystem_kernel.dylib___ulock_wait (in libsystem_kernel.dylib) +8
libdispatch.dylib__dlock_wait (in libdispatch.dylib) +52
libdispatch.dylib__dispatch_thread_event_wait_slow (in libdispatch.dylib) +52
libdispatch.dylib___DISPATCH_WAIT_FOR_QUEUE__ (in libdispatch.dylib) +364
libdispatch.dylib__dispatch_sync_f_slow (in libdispatch.dylib) +144
MediaToolbox_fpic_CopyCurrentEvent (in MediaToolbox) +132
AVFCore___104-[AVPlayer _setRate:withVolumeRampDuration:playImmediately:rateChangeReason:affectsCoordinatedPlayback:]_block_invoke_2 (in AVFCore) +244
AVFCore-[AVPlayer _setRate:withVolumeRampDuration:playImmediately:rateChangeReason:affectsCoordinatedPlayback:] (in AVFCore) +276
AVFCore-[AVPlayer setRate:] (in AVFCore) +56
This is where the app calls AVPlayer's pause and the main thread blocks.
Thread 81 name: fpic-sync
libsystem_kernel.dylib___ulock_wait (in libsystem_kernel.dylib) +8
libdispatch.dylib__dlock_wait (in libdispatch.dylib) +52
libdispatch.dylib__dispatch_thread_event_wait_slow (in libdispatch.dylib) +52
libdispatch.dylib___DISPATCH_WAIT_FOR_QUEUE__ (in libdispatch.dylib) +364
libdispatch.dylib__dispatch_sync_f_slow (in libdispatch.dylib) +144
MediaToolbox_itemasync_CopyProperty (in MediaToolbox) +588
MediaToolbox_fpic_CurrentItemMoment (in MediaToolbox) +184
MediaToolbox___fpic_EstablishCurrentEventForCurrentItem_block_invoke (in MediaToolbox) +136
libdispatch.dylib__dispatch_client_callout (in libdispatch.dylib) +16
libdispatch.dylib__dispatch_lane_barrier_sync_invoke_and_complete (in libdispatch.dylib) +52
MediaToolbox_fpic_ServiceCurrentEvent (in MediaToolbox) +600
MediaToolbox___fpic_NotifyServiceCurrentEvent_block_invoke (in MediaToolbox) +912
libdispatch.dylib__dispatch_call_block_and_release (in libdispatch.dylib) +28
libdispatch.dylib__dispatch_client_callout (in libdispatch.dylib) +16
libdispatch.dylib__dispatch_lane_serial_drain (in libdispatch.dylib) +744
libdispatch.dylib__dispatch_lane_invoke (in libdispatch.dylib) +428
libdispatch.dylib__dispatch_root_queue_drain (in libdispatch.dylib) +388
libdispatch.dylib__dispatch_worker_thread (in libdispatch.dylib) +256
libsystem_pthread.dylib__pthread_start (in libsystem_pthread.dylib) +132
libsystem_pthread.dylib_thread_start (in libsystem_pthread.dylib) +4
Thread 93 name: com.apple.coremedia.player.async.0x303c60240.P/GR
libsystem_kernel.dylib_mach_msg2_trap (in libsystem_kernel.dylib) +8
libsystem_kernel.dylib_mach_msg2_internal (in libsystem_kernel.dylib) +76
libsystem_kernel.dylib_mach_msg_overwrite (in libsystem_kernel.dylib) +432
libsystem_kernel.dylib_mach_msg (in libsystem_kernel.dylib) +20
libdispatch.dylib__dispatch_mach_send_and_wait_for_reply (in libdispatch.dylib) +540
libdispatch.dylib_dispatch_mach_send_with_result_and_wait_for_reply (in libdispatch.dylib) +56
libxpc.dylib_xpc_connection_send_message_with_reply_sync (in libxpc.dylib) +260
CoreMedia_FigXPCConnectionSendSyncMessageCreatingReply (in CoreMedia) +288
CoreMedia_FigXPCRemoteClientSendSyncMessageCreatingReply (in CoreMedia) +44
MediaToolbox_remoteXPCPlayer_SetRateWithOptions (in MediaToolbox) +148
MediaToolbox_playerasync_runOneCommand (in MediaToolbox) +768
MediaToolbox_playerasync_runAsynchronousCommandOnQueue (in MediaToolbox) +180
libdispatch.dylib__dispatch_client_callout (in libdispatch.dylib) +16
libdispatch.dylib__dispatch_lane_serial_drain (in libdispatch.dylib) +744
libdispatch.dylib__dispatch_lane_invoke (in libdispatch.dylib) +428
libdispatch.dylib__dispatch_root_queue_drain (in libdispatch.dylib) +388
libdispatch.dylib__dispatch_worker_thread (in libdispatch.dylib) +256
libsystem_pthread.dylib__pthread_start (in libsystem_pthread.dylib) +132
libsystem_pthread.dylib_thread_start (in libsystem_pthread.dylib) +4
I'm trying to create code to generate an fcpxml file so I can automate Final Cut Pro timeline (project) creation. Here's an XML element that FCP successfully imports (and from which it successfully creates a project/timeline).
<project name="2013-08-09 19_23_07 (id).mov">
<sequence format="r1">
<spine>
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="146173027/60000s" duration="871871/60000s" tcFormat="DF" audioRole="dialogue"></asset-clip>
</spine>
</sequence>
</project>
The XML element above was generated by exporting a simple timeline with a single clip. The problem I'm having is that the media asset has timecode, which gives a start time relative to that timecode. When I try to remove the timecode attributes and change the start time to "0s":
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="0s" duration="871871/60000s" audioRole="dialogue"></asset-clip>
FCP complains with the import error:
2013-08-09 19_23_07 (id).fcpxml Invalid edit with no respective media. (/fcpxml[1]/project[1]/sequence[1]/spine[1]/asset-clip[1])
I guess the question is: does AVAsset provide a way to get the timecode information and the timecode-based start offset, or is there a way to tell FCP to use a default start time independent of timecode?
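On the AVAsset side, the start timecode can be recovered by reading the first sample of the asset's timecode track; the 'tmcd' sample stores a big-endian frame number, which can then be converted to HH:MM:SS:FF (or to the rational seconds used by the start attribute) via the track's format description. A hedged sketch with minimal error handling, using an assumed function name:
import AVFoundation
import CoreMedia

func loadStartTimecodeFrameNumber(from asset: AVAsset) async throws -> Int32? {
    guard let track = try await asset.loadTracks(withMediaType: .timecode).first else {
        return nil   // no timecode track present
    }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    guard reader.canAdd(output) else { return nil }
    reader.add(output)
    guard reader.startReading(),
          let sampleBuffer = output.copyNextSampleBuffer(),
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
        return nil
    }
    // The tc32 sample payload is a 32-bit big-endian frame number.
    var rawFrameNumber: Int32 = 0
    CMBlockBufferCopyDataBytes(blockBuffer,
                               atOffset: 0,
                               dataLength: MemoryLayout<Int32>.size,
                               destination: &rawFrameNumber)
    return Int32(bigEndian: rawFrameNumber)   // frames since 00:00:00:00
}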
AVKit provides the SwiftUI view VideoPlayer, and allows you to add an interactive overlay. But that overlay is normally placed behind the system-provided playback controls.
Is there any way to suppress those controls, without resorting to wrapping AVPlayerView?
I updated macOS to 15.0 yesterday, and I found that support for some floating-point values in CMFormatDescription extensions and CVPixelBuffer attachments seems to be broken.
When I call CMSampleBufferCreateReadyWithImageBuffer() with a CVPixelBuffer, macOS 15.0 always fails when floating-point values are involved.
a. kCMFormatDescriptionExtension_GammaLevel
Previous macOS 14.x versions work with a double value like:
NSString* keyGamma = (__bridge NSString*)kCMFormatDescriptionExtension_GammaLevel;
extensions[keyGamma] = @(2.2);
b. kCMFormatDescriptionExtension_CleanAperture
I am not sure yet, but the same non-integer value issue also seems to apply to the clean aperture keys:
kCMFormatDescriptionKey_CleanApertureWidth
kCMFormatDescriptionKey_CleanApertureHeight
kCMFormatDescriptionKey_CleanApertureHorizontalOffset
kCMFormatDescriptionKey_CleanApertureVerticalOffset
Also, when I add rational values to the extensions, they cannot pass CMVideoFormatDescriptionMatchesImageBuffer() with:
kCMFormatDescriptionKey_CleanApertureWidthRational
kCMFormatDescriptionKey_CleanApertureHeightRational
kCMFormatDescriptionKey_CleanApertureHorizontalOffsetRational
kCMFormatDescriptionKey_CleanApertureVerticalOffsetRational
Is there any known workaround?
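For what it's worth, here is a hedged Swift sketch of one way to exercise the same path (a variation, not the exact Objective-C code above): attach a floating-point gamma level to the pixel buffer, derive the format description from it, and then run the match and sample-buffer creation that reportedly fail on 15.0.
import CoreMedia
import CoreVideo

func makeSampleBuffer(from pixelBuffer: CVPixelBuffer, pts: CMTime) -> CMSampleBuffer? {
    // Attach a floating-point gamma level to the pixel buffer...
    CVBufferSetAttachment(pixelBuffer,
                          kCVImageBufferGammaLevelKey,
                          NSNumber(value: 2.2),
                          .shouldPropagate)

    // ...and create a format description from the buffer, which carries the
    // attachments over into the description's extensions.
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return nil }

    // This is the check that reportedly starts failing on 15.0 when
    // floating-point extensions are involved.
    guard CMVideoFormatDescriptionMatchesImageBuffer(formatDescription, imageBuffer: pixelBuffer) else {
        print("format description no longer matches the image buffer")
        return nil
    }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: pts,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}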
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
await MainActor.run {
let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
var nowPlayingInfo = [String: Any]()
let image = NSImage(named: "image")!
// warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
// Not on main thread here!
return image
})
nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
}
}
I'm wondering if there is an alternative method to set the now playing artwork?
I work on a video editing app that composes multiple small video clips, sometimes hundreds or thousands. For one user in particular, attempting to export causes a failure 100% of the time. The failure occurs in the initialization of AVAssetReader and is in the AVFoundationErrorDomain with code -11819 (AVErrorMediaServicesWereReset). We've done everything we can think of, including quitting other running apps, enabling airplane mode, and even performing the flow on an identical device using the customer's data, and have had no luck pinning down the cause of the error. Does anyone have any suggestions for how we might go about debugging this? Getting ready to file a TSI, but thought I should ask here first.
Hi, when recording videos with AVAssetWriter, the capture (camera output) fps is fine, but the resulting video's fps is lower. The reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false.
Yes, I have read the documentation many times; it says to set expectsMediaDataInRealTime to true and so on...
I have really been tortured by this problem for a long time. How can I debug it? Any advice?
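As a starting point, here is a minimal diagnostic sketch (the appendVideoFrame helper and the videoInput name are assumptions, not code from the post) that counts how many captured frames arrive while the input is not ready, which is exactly what lowers the final file's frame rate:
var droppedFrameCount = 0

func appendVideoFrame(_ sampleBuffer: CMSampleBuffer, to videoInput: AVAssetWriterInput) {
    if videoInput.isReadyForMoreMediaData {
        videoInput.append(sampleBuffer)
    } else {
        // Each increment here is a captured frame that never reaches the file.
        droppedFrameCount += 1
        print("Writer input not ready; dropped \(droppedFrameCount) frames so far")
    }
}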
Hi,
I'm trying to use this example (https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video)
to encode stereoscopic (left eye, right eye) video frames using MV-HEVC. The sample project creates tagged buffers for the left and right eyes and uses a writer to write the MV-HEVC encoded video buffers. But after I get the right and left tagged buffers, I want to use VideoMaterial and its AVSampleBufferVideoRenderer to enqueue these video frames. If I render the MV-HEVC encoded left-eye sample buffer and the right-eye sample buffer sequentially, will AVSampleBufferVideoRenderer render them as a stereoscopic view? How does this work with VideoMaterial and AVSampleBufferVideoRenderer? Thanks!
When displaying and playing multiple HLS videos (4 or 6 screens) side by side using AVPlayer on iPad devices running iOS 17 or later, some frames appear to be skipped even though the videos are set to play at normal speed, causing the videos to play faster than intended. This issue occasionally occurs when repeatedly playing and pausing the videos, and the more screens there are, the more frequently it happens. However, the occurrence rate is not very high (about 1 in 50 times).
This phenomenon has been reproduced on iPad devices running iOS 17 or later and does not occur on devices running iOS 16 or earlier.
Devices where the issue has been confirmed:
iPad 6th generation / iOS ver 17.6.1
iPad 9th generation / iOS ver 17.6.1
iPad Pro 11-inch 1st generation / iOS ver 17.4.1
I have tried implementing countermeasures based on information from similar issues, such as those mentioned on the following website, but the problem remains unresolved:
https://stackoverflow.com/questions/77224167/avplayer-unexpected-behaviour-after-ios-and-tvos-update-to-17-0
From the console logs, I observed that on devices running iOS 17 or later, the following log was output:
AppleD5500: Bad NAL type 10
I suspect that some kind of decoding failure may be occurring, leading to the issue described above. If you have any information or can provide support on this matter, I would greatly appreciate it.
I was trying to migrate Core Image-based code that rotates an image in a CVPixelBuffer to the newer VTPixelRotationSession from Video Toolbox, hoping to increase performance.
The original code does:
let rotatedImage = CIImage(cvPixelBuffer: origPixelBuffer).oriented(.left)
context.render(rotatedImage, to: newPixelBuffer)
The new code uses a session:
_ = VTPixelRotationSessionRotateImage(rotationSession, origPixelBuffer, newPixelBuffer)
However, I immediately ran into memory limitations, since my code has to be able to run in an iOS extension. It seems VTPixelRotationSessionRotateImage easily lets memory usage spike over the 50 MB of allowed memory, while the CIImage-based implementation has no such high memory usage at all.
Is this expected? Does the VTPixelRotationSession implementation gain more performance by sacrificing memory? Or is there something I'm overlooking?
I was expecting the VTPixelRotationSession at worst to be on par in terms of memory usage and processing speed compared to CIImage. At this moment it seems VTPixelRotationSession is unusable in extensions.
See also Feedback: FB14977240
I'm using the "Converting side-by-side 3D video to multiview HEVC and spatial video" sample code on iOS. It takes about 8 seconds to convert a 6-second video. At this rate, a 1-hour video would take 1.3 hours to convert.
How can I speed up the conversion?
BTW, are there solutions to convert side-by-side 3D video to spatial video for Windows?