AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

Post · Replies · Boosts · Views · Activity

FairPlay HLS Downloaded Asset Fails to Play on First Attempt When Offline, Works on Retry
Hello, we're seeing an intermittent issue when playing back FairPlay-protected HLS downloads while the device is offline. Assets are downloaded using AVAggregateAssetDownloadTask with FairPlay protection. After download, asset.assetCache.isPlayableOffline == true. On the first playback attempt (offline), ~8% of downloads fail; retrying playback always works. We recreate the asset and player on each attempt.

During playback setup, we try to load variants via:

    try await asset.load(.variants)

This call sometimes fails with:

    Error Domain=NSURLErrorDomain Code=-1009 "The Internet connection appears to be offline."
    UserInfo={NSUnderlyingError=0x105654a00 {Error Domain=NSURLErrorDomain Code=-1009 "The Internet connection appears to be offline." UserInfo={NSDescription=The Internet connection appears to be offline.}},
    NSErrorFailingURLStringKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/,
    NSErrorFailingURLKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/,
    NSURL=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/,
    AVErrorFailedDependenciesKey=("assetProperty_HLSAlternates"),
    NSLocalizedDescription=The Internet connection appears to be offline.}

This variant load is used to determine available audio tracks, check for Dolby support, and apply user language preferences. After this step, the AVPlayerItem also fails via Combine's publisher for .status. However, retrying the entire process immediately afterwards (same offline conditions, same asset path, new AVURLAsset) results in successful playback.

Assets are represented using the following class:

    public class DownloadedAsset: AVURLAsset {
        public let id: String
        public let localFileUrl: URL
        public let fairplayLicenseUrlString: String?
        public let drmToken: String?

        var isProtected: Bool {
            return fairplayLicenseUrlString != nil
        }

        public init(id: String, localFileUrl: URL, fairplayLicenseUrlString: String?, drmToken: String?) {
            self.id = id
            self.localFileUrl = localFileUrl
            self.fairplayLicenseUrlString = fairplayLicenseUrlString
            self.drmToken = drmToken
            super.init(url: localFileUrl, options: nil)
        }
    }

We use user-selected quality levels to control bitrate and multichannel (e.g. Dolby 5.1) downloads:

    let downloadQuality = UserDefaults.standard.downloadVideoQuality
    let bitrate: Int
    let shouldDownloadMultichannelTracks: Bool

    switch downloadQuality {
    case .dataSaver:
        shouldDownloadMultichannelTracks = false
        bitrate = 596564
    case .standard:
        shouldDownloadMultichannelTracks = false
        bitrate = 1503844
    case .best:
        shouldDownloadMultichannelTracks = true
        bitrate = 7038970
    }

    var selections = multichannelIdentifiedMediaSelections
    if !shouldDownloadMultichannelTracks {
        selections = selections.filter { !$0.isMultichannel }
    }

    let task = session.aggregateAssetDownloadTask(
        with: asset,
        mediaSelections: selections.map { $0.mediaSelection },
        assetTitle: title,
        assetArtworkData: nil,
        options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: bitrate]
    )

Seen on devices running iOS 16, iOS 17, and iOS 18. What could cause the initial failure of an otherwise valid, offline-ready FairPlay HLS asset? Could .load(.variants) internally trigger a failed network resolution, even when offline? Is there an internal caching or initialization behavior in AVFoundation that might explain why the second attempt works? Any guidance would be appreciated.
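One mitigation worth sketching (not a confirmed fix): since the download is already known to be playable offline, the variants load could be treated as optional metadata rather than as a gate for playback. A minimal sketch, assuming track/language selection can fall back to the item's defaults when variants are unavailable; preparePlayerItem is a hypothetical helper, not code from the post:

    import AVFoundation

    // Hypothetical helper: loads variants if possible, but never blocks offline playback on it.
    func preparePlayerItem(for asset: AVURLAsset) async -> AVPlayerItem {
        // For a completed download, variants are optional metadata.
        let playableOffline = asset.assetCache?.isPlayableOffline ?? false

        do {
            let variants = try await asset.load(.variants)
            // Use variants to pick audio tracks / Dolby support here.
            print("Loaded \(variants.count) variants")
        } catch where playableOffline {
            // Offline-ready asset: log and continue with the default media selections.
            print("Ignoring variants load failure while offline: \(error)")
        } catch {
            // Genuine failure for a streaming asset; surface it (error handling elided in this sketch).
        }
        return AVPlayerItem(asset: asset)
    }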
1
0
233
Jun ’25
storing AVAsset in SwiftData
Hi, I am creating an app that can include videos or images in its data. While @Attribute(.externalStorage) helps with images, for AVAssets I would actually like access to the URL behind that data (it would be wasteful to load and then save the data again just to get a URL). One key requirement is to keep all of this clean enough that I can use (private) CloudKit syncing with the resulting model. All the best, Christoph
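One common pattern, sketched here under the assumption that you copy the video files into a directory you manage yourself, is to persist only a relative file name in the SwiftData model and resolve the URL at runtime; note that with private CloudKit syncing this syncs the metadata, not the media file itself. ClipRecord and the "Videos" subdirectory are hypothetical names:

    import Foundation
    import SwiftData

    @Model
    final class ClipRecord {
        // Hypothetical model: only the file name is persisted and synced.
        var title: String
        var videoFileName: String

        init(title: String, videoFileName: String) {
            self.title = title
            self.videoFileName = videoFileName
        }

        // Resolved lazily against a local directory that is not part of the model.
        var videoURL: URL {
            URL.applicationSupportDirectory
                .appending(path: "Videos", directoryHint: .isDirectory)
                .appending(path: videoFileName)
        }
    }

At playback time you would then build the asset from the resolved URL, e.g. AVURLAsset(url: record.videoURL), without ever storing the bytes in the model.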
1
0
611
Jun ’25
AirPlay v1 is broken in iOS 18.4?
After upgrading to iOS 18.4, I'm no longer able to establish an AirPlay v1 connection to an audio system. The symptom is that the AirPlay route picker just spins when trying to connect to an audio system, and it eventually gives up. I tested this on an iPhone 14, connecting to a HomePod, an AirPort Express, an Apple TV, and a WiiM Pro. If I try connecting with AirPlay v2, e.g. using Apple Music, the connection succeeds and audio can be played. I'm the developer of an app that plays audio over AirPlay while also recording. My app has to use AirPlay v1 because AVAudioSession doesn't allow the .longFormAudio policy when the category is .playAndRecord. This issue is a real pain, as it means my app is suddenly broken for many thousands of users. Is anyone else seeing this issue? Any suggestions for a workaround?
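For context, a minimal sketch of the session-configuration constraint described above; the exact error thrown may differ, and the fallback shown is only illustrative of why such apps end up on the default (AirPlay v1-style) routing policy:

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        // .longFormAudio (AirPlay 2 long-form routing) is rejected for .playAndRecord,
        // so a play-and-record app is left with the default route-sharing policy.
        try session.setCategory(.playAndRecord,
                                mode: .default,
                                policy: .longFormAudio,
                                options: [])
    } catch {
        print("longFormAudio not allowed with playAndRecord: \(error)")
        // Fall back to the default policy; this is the AirPlay v1 path the post relies on.
        try? session.setCategory(.playAndRecord, mode: .default, options: [.allowAirPlay])
    }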
2
3
454
Jun ’25
Alert structure and playing sound
Hello, I'm working on a SwiftUI iOS app that shows a list of timers. When a timer is up, I pop up an Alert struct, and the user hits "OK" to dismiss it. I am trying to include an alarm sound using AVFoundation. I can get the sound to play if I change the code to play on a button click, so I believe I have the URL path correct, but I really want it to play when the alert pops up. I have not been able to find examples where this is done using an alert, so I suspect I need a custom view, but I thought I'd try the alert route first. Has anyone tried this before?

    @State var audioPlayer: AVAudioPlayer?

    .alert(isPresented: $showAlarmAlert) {
        playSound() // Calls AVFoundation
        return Alert(title: Text("Time's Up!"))
    }

    func playSound() {
        let alertSoundPath = Bundle.main.url(forResource: "classicAlarm", withExtension: "mp3")!
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: alertSoundPath)
            audioPlayer?.play()
        } catch {
            appData.logger.debug("Error playing sound: \(alertSoundPath)")
        }
    }
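One approach that tends to work, sketched under the assumption that showAlarmAlert and playSound() are as above, is to trigger the sound when the flag flips rather than inside the alert's content builder, since view-builder closures should not have side effects (the onChange signature shown is the iOS 17 two-parameter form):

    // Somewhere in the view body:
    .alert("Time's Up!", isPresented: $showAlarmAlert) {
        Button("OK") { audioPlayer?.stop() }
    }
    .onChange(of: showAlarmAlert) { _, isShowing in
        if isShowing { playSound() }   // the side effect lives outside the view builder
    }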
4
0
134
Jun ’25
Is Phase Detection Autofocus degrading video stabilization, and can I disable it?
I'm developing a video capture app using AVFoundation, designed specifically for use on a boat pylon to record slalom water skiing. This setup involves considerable vibration. As you may know, the OIS that Apple began adding to lenses since the iPhone 7 is actually very problematic in high vibration circumstances, ironically creating very shaky video, whereas lenses without OIS produce perfectly stable video. Because of this, up until iPhone 14, the solution for my app was simply to use the Selfie lens, which did not have OIS. Starting with iPhone 14 through iPhone 16 (non-Pro models), technical specs suggest the selfie lens still does not include OIS. However, I'm still seeing the same kind of shaky video behavior I see on OIS-equipped lenses. The one hardware change I see in this camera module is the addition of PDAF (Phase Detection Autofocus), so that is my best guess as to what is causing the unstable video.

1- Does that make any sense - that in high vibration settings, PDAF could create unstable video in the same way that OIS does? Or could it be something else that was changed between the iPhone 13 and 14 Selfie lens?

Thinking that the issue was PDAF, I figured that if I enabled my app to set a Manual Focus level, that ought to circumvent PDAF (expecting that if a lens is manually focusing, it can't also be autofocusing via PDAF). However, even with manual focus locked via AVCaptureDevice in my app, on the Selfie lens of an iPhone 16, the video still comes out very shaky, basically unusable. I also tested with the built-in Apple Camera app (using press-and-hold to lock focus and exposure) and another 3rd-party camera app to lock focus, all with the same results, so it's not that my app just isn't correctly doing manual focus. So I'm stuck with these questions:

2- Does the selfie camera on iPhones 14-16 use PDAF even when focus is set to locked/manual mode?
3- Is there any way in AVFoundation to disable or suppress PDAF during video recording (e.g., a flag, device format setting, or private API)?
4- Is PDAF behavior or suppression documented or controllable via AVCaptureDevice or any related class?
5- If no control of PDAF is available, are there any best practices for stabilizing or smoothing this effect programmatically?

Note that I have also set my app to use the most aggressive form of stabilization available, so it defaults to .cinematicExtendedEnhanced; if that's not available, then .cinematicExtended, etc. On the 16 Selfie lens, it is using .cinematicExtended. As an additional question:

6- Would those be the most appropriate stabilization settings for a high vibration environment, and if not, what would be best?
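As far as I know there is no public control over PDAF; for reference, here is a sketch of the two public knobs mentioned above (locking the lens position and requesting the strongest stabilization mode), assuming an active AVCaptureDevice and AVCaptureConnection:

    import AVFoundation

    func configure(device: AVCaptureDevice, connection: AVCaptureConnection) throws {
        // Lock focus at a fixed lens position (this does not expose any PDAF control).
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.locked) {
            device.setFocusModeLocked(lensPosition: 1.0, completionHandler: nil)
        }
        device.unlockForConfiguration()

        // Prefer the strongest stabilization the current format supports.
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .cinematicExtendedEnhanced
        }
    }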
0
0
222
May ’25
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment:
・Device: iPad (10th generation)
・OS: iOS 18.3.2

We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently, roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

    var audioPlayer: AVAudioPlayer?

    func soundPlay(resource: String, type: String) {
        guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
            return
        }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
            audioPlayer!.delegate = self
            try audioSession.setCategory(.playback)
        } catch {
            return
        }
        self.audioPlayer!.play()
    }

The issue is that under high-frequency tapping (especially around 0.1-0.15 s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly, sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely; delays shorter than 0.2 seconds (e.g. 0.15 s, 0.18 s) still result in occasional crashes. My questions are: Is this expected behavior from AVAudioPlayer or AVAudioSession? Could this be a known issue or a limitation in AVFoundation? Is there any documentation or guidance on handling frequent sound playback safely? Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
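One thing worth trying, sketched here as an assumption rather than a confirmed fix: create and prepare the player (and configure the session) once, and only rewind and play on each tap, instead of allocating a new AVAudioPlayer every 0.1-0.2 s. TapSoundPlayer is a hypothetical wrapper:

    import AVFoundation

    final class TapSoundPlayer {
        private var player: AVAudioPlayer?

        // Load and prepare once, e.g. at view setup, instead of on every tap.
        func load(resource: String, type: String) {
            guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
            player = try? AVAudioPlayer(contentsOf: url)
            player?.prepareToPlay()
            try? AVAudioSession.sharedInstance().setCategory(.playback)
            try? AVAudioSession.sharedInstance().setActive(true)
        }

        // On each tap: rewind and replay the already-prepared player.
        func play() {
            player?.currentTime = 0
            player?.play()
        }
    }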
0
0
204
May ’25
Format of 14-bit RAW bayer data from lower bit camera sensor?
I'm working on an application that uses the iPhone camera for scientific purposes and, as a result, would like to receive the sensor data in as unprocessed a format as possible. I'm using AVCapturePhotoOutput to take Bayer RAW stills and receiving data in the kCVPixelFormatType_14Bayer_RGGB format. However, I'm puzzled as to the content of the bits. I simply demosaic the image by taking each 2x2 square (RG / GB) and using R, (G+G)/2, B to get 16-bit RGB values, and this indeed works. However, I am puzzled by the values we are getting, as they seem to be approximately in the range 2048-16383. The top value is understandable: it's the maximum you can fit in 14 bits (as implied by the pixel format type). However, we don't seem to be able to get lower than ~2048 no matter how black/dark we make the sensor. I'm aware that the sensor is probably not 14-bit (we're using the iPhone 16e camera) and that maybe this has to do with the way the sensor data is packaged. The Advances in iOS Photography video (https://developer.apple.com/videos/play/wwdc2016/501/) describes it as "10-bit sensor RAW packaged in 14 bits per pixel instead of eight." Is there any documentation describing what is going on here? It's vital for our use that we get as close to the raw camera sensor light readings as possible, so any pointers as to the mapping (e.g. decompanding?) being used would be extremely useful. Many thanks in advance for your help.
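For reference, a sketch of the 2x2 demosaic described above, with the ~2048 floor treated as an assumed black-level offset; whether that offset is really a black level or an artifact of how the 10-bit data is packed into 14 bits is exactly the open question here:

    import Foundation

    // Demosaic one RGGB quad from 14-bit-packed sensor values.
    // assumedBlackLevel is a guess based on the observed ~2048 floor, not a documented constant.
    func demosaicQuad(r: UInt16, g0: UInt16, g1: UInt16, b: UInt16,
                      assumedBlackLevel: Int = 2048) -> (r: Int, g: Int, b: Int) {
        func level(_ v: UInt16) -> Int { max(Int(v) - assumedBlackLevel, 0) }
        return (r: level(r), g: (level(g0) + level(g1)) / 2, b: level(b))
    }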
3
0
187
May ’25
How to reduce CMSampleBuffer volume
Hello, basically I am reading and writing an asset. To simplify, I am just reading the asset and rewriting it into an output video without any modifications. However, I want to add a fade-out effect to the last three seconds of the output video, and I don't know how to do this. So far, before appending the CMSampleBuffer to the output video, I tried reducing its volume using an extension on CMSampleBuffer. In the extension, I passed 0.4 for testing, aiming to reduce the audio's overall volume by 60%. My question is: how can I directly adjust the volume of a CMSampleBuffer? Here is the extension:

    extension CMSampleBuffer {
        func adjustVolume(by factor: Float) -> CMSampleBuffer? {
            guard let blockBuffer = CMSampleBufferGetDataBuffer(self) else { return nil }

            var length = 0
            var dataPointer: UnsafeMutablePointer<Int8>?

            guard CMBlockBufferGetDataPointer(blockBuffer,
                                              atOffset: 0,
                                              lengthAtOffsetOut: nil,
                                              totalLengthOut: &length,
                                              dataPointerOut: &dataPointer) == kCMBlockBufferNoErr else { return nil }
            guard let dataPointer = dataPointer else { return nil }

            let sampleCount = length / MemoryLayout<Int16>.size
            dataPointer.withMemoryRebound(to: Int16.self, capacity: sampleCount) { pointer in
                for i in 0..<sampleCount {
                    let sample = Float(pointer[i])
                    pointer[i] = Int16(sample * factor)
                }
            }
            return self
        }
    }
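Rather than rewriting sample data by hand, a fade-out is usually expressed as a volume ramp in an audio mix. A sketch, assuming the asset has a single audio track and that your pipeline can read audio through an AVAssetReaderAudioMixOutput; the usage lines at the end are illustrative, not taken from the post:

    import AVFoundation

    func makeFadeOutMix(for asset: AVAsset) async throws -> AVAudioMix? {
        guard let audioTrack = try await asset.loadTracks(withMediaType: .audio).first else { return nil }
        let duration = try await asset.load(.duration)

        let fade = CMTime(seconds: 3, preferredTimescale: 600)
        let fadeStart = CMTimeSubtract(duration, fade)

        // Ramp the last three seconds from full volume down to silence.
        let params = AVMutableAudioMixInputParameters(track: audioTrack)
        params.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 0.0,
                             timeRange: CMTimeRange(start: fadeStart, duration: fade))

        let mix = AVMutableAudioMix()
        mix.inputParameters = [params]
        return mix
    }

    // Usage with an AVAssetReaderAudioMixOutput (hypothetical reader setup):
    // let output = AVAssetReaderAudioMixOutput(audioTracks: [audioTrack], audioSettings: nil)
    // output.audioMix = try await makeFadeOutMix(for: asset)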
4
0
440
May ’25
MPNowPlayingInfoCenter nowPlayingInfo throttled
Hello, I have been running into issues with setting nowPlayingInfo information, specifically updating information for CarPlay and the CPNowPlayingTemplate. When I start playback for an item, I see the lock screen information update as expected, along with the CarPlay now-playing information. However, the playing items are books with collections of tracks. When I select a new track (chapter) within the book, I set MPMediaItemPropertyTitle to the new chapter name. This change is reflected correctly on the lock screen, but almost never appears correctly on the CarPlay CPNowPlayingTemplate; the previous chapter title remains set and never updates. I see "Application exceeded audio metadata throttle limit." in the debug console fairly frequently. From that I figured that I need to minimize updates to the nowPlayingInfo dictionary.

What I did: I store the metadata in a local dictionary and only set values in the main nowPlayingInfo dictionary when they differ from the current value. I kick off the nowPlayingInfo update via a task that initially sleeps for around 2 seconds (not a final value, just for my current testing); if a previous task is active, it gets cancelled, so that only one update can happen within that time window. Neither of these things has been sufficient. I can switch between different titles entirely and the information updates (including cover art), but when I switch chapters within a title, the MPMediaItemPropertyTitle continues to get dropped. I know the value is getting set, because it updates on the lock screen correctly. In total, I have 12 keys I update, though with the above changes, usually only 2-4 of them actually get updated with high frequency. I am running out of ideas for satisfying the throttling thresholds while still displaying metadata accurately. I could use some advice. Thanks.
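A sketch of one way to coalesce changes into a single write, assuming the ~2-second window and change detection described above; the actual throttle threshold is not documented, so the window length is a guess and NowPlayingUpdater is a hypothetical helper:

    import MediaPlayer

    final class NowPlayingUpdater {
        private var pending: [String: Any] = [:]
        private var updateTask: Task<Void, Never>?

        // Queue a single key change; the flush below batches whatever has accumulated.
        func set(_ key: String, to value: Any) {
            pending[key] = value
            scheduleFlush()
        }

        private func scheduleFlush() {
            updateTask?.cancel()
            updateTask = Task { @MainActor in
                try? await Task.sleep(for: .seconds(2))   // guessed coalescing window
                guard !Task.isCancelled else { return }
                var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
                // Only write keys whose values actually changed.
                for (key, value) in pending where !isEqual(info[key], value) {
                    info[key] = value
                }
                MPNowPlayingInfoCenter.default().nowPlayingInfo = info
                pending.removeAll()
            }
        }

        private func isEqual(_ a: Any?, _ b: Any) -> Bool {
            guard let a = a as? NSObject, let b = b as? NSObject else { return false }
            return a.isEqual(b)
        }
    }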
4
1
197
May ’25
videoCaptureQueue crashes the app when using iOS 18.4.1
Hi all, I have a problem when using iOS 18.4.1. I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1. I tried to follow the sample code below; however, when I run the app for around 30 seconds to 1 minute, the application crashes. When I use another iPad running iOS 17, it does not have the same problem. https://developer.apple.com/documentation/createml/creating-an-action-classifier-model https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview
6
0
201
May ’25
AVPlayer freezes after ReplaceCurrentItemWithPlayerItem + immediate seek on iOS 18.4 (streaming only)
Starting in iOS 18.4 (and still in the iOS 18.5 beta), AVPlayer seems to freeze when we replace the current AVPlayerItem (replaceCurrentItem(with:)) and then call seek very shortly afterwards (seekToTime:toleranceBefore:toleranceAfter: / seek(to:)). Subsequent calls to play then have no effect. However, scrubbing to seek afterwards seems to work, and changing the playback rate (i.e. fast forward) also tends to clear the frozen state. Our primary workflow involves video playback, replacing video to show new clips, and in some cases seeking to specific frames. This appears to occur only while streaming video; reports all say locally downloaded video playback remains fine. This same code path has worked without issue on 17.x and 18.3.2, and for years before that. What is particularly strange is that time observers log that video is still playing or feeding frames; the reported status is .readyToPlay, isPlaybackLikelyToKeepUp is true, and there are no indications of stalling or buffering.

A similar issue affects our web application in Safari. On Sonoma with Safari 17.x there is no issue, but after updating to macOS Sequoia 15.4.1 and Safari 18.4 we observe similar freezing. The same does not occur in Chrome or other tested browsers. The release notes for Safari 18.4 contain some interesting "fix" notes that seem related to what we are now experiencing: https://developer.apple.com/documentation/safari-release-notes/safari-18_4-release-notes

"Fixed an issue where playback doesn't always resume after a seek. (140097993)"
"Fixed playing video generating non-monotonic 'timeupdate' events. (142275184) (FB16222910)"
"Fixed websites calling play() during a seek() is allowed by the specification so that the play event is fired even if the seek hasn't completed. (142517488)"
"Fixed seek not completing for WebM under some circumstances. (143372794)"
"Fixed MediaRecorderPrivateEncoder writing frames out of order. (143956063)"
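A workaround sketch, not a confirmed fix: defer the seek until the replacement item reports .readyToPlay instead of seeking immediately after the replace. The calls used are public AVFoundation/Combine APIs; the surrounding function is hypothetical:

    import AVFoundation
    import Combine

    func replaceAndSeek(player: AVPlayer, item: AVPlayerItem, to time: CMTime,
                        cancellables: inout Set<AnyCancellable>) {
        player.replaceCurrentItem(with: item)
        // Wait for the new item to become ready before issuing the seek.
        item.publisher(for: \.status)
            .filter { $0 == .readyToPlay }
            .first()
            .sink { _ in
                player.seek(to: time, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
                    player.play()
                }
            }
            .store(in: &cancellables)
    }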
2
0
186
May ’25
CoreMediaErrorDomain -12888: Bandwidth down-stepping when using 2sec segment duration
The problem: when using an HLS live stream with AVPlayer on iOS/tvOS, the player first chooses the highest bandwidth, then slowly steps down to the lowest (within 1-3 min), eventually steps up again, and then repeats the step-down. The AVPlayer error log sends events:

    errorStatusCode: -12888, errorDomain: Optional("CoreMediaErrorDomain"), errorComment: Optional("The operation couldn't be completed. (CoreMediaErrorDomain error -12888 - Playlist File unchanged for longer than 1.5 * target duration)")

We use standard segments in CMAF format with a 2-second duration:

    #EXTM3U
    #EXT-X-VERSION:6
    #EXT-X-TARGETDURATION:2
    #EXT-X-MEDIA-SEQUENCE:147065903
    #EXT-X-MAP:URI="video_1_4660000_init.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2"
    #EXT-X-PROGRAM-DATE-TIME:2025-04-30T12:51:07
    #EXTINF:2.000,
    video_1_4660000_t17460174670001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
    #EXTINF:2.000,
    video_1_4660000_t17460174690001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
    #EXTINF:2.000,
    video_1_4660000_t17460174710001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2

When using 6-second segments the player stays stable at the highest bandwidth. The error message suggests the live playlist is not being refreshed with new segments within 1.5 × the target duration, which with 2-second segments leaves only about 3 seconds of slack. Is there a way to avoid this error, either in AVPlayer or in the HLS configuration?
2
0
354
May ’25
How to request permission for System Audio Recording Only?
Hi community, I'm wondering how I can request the permission for "System Audio Recording Only" (under Privacy & Security -> Screen & System Audio Recording) via Swift. I did a bunch of searching but didn't find good documentation on it. I tried another approach, https://github.com/insidegui/AudioCap/blob/main/AudioCap/ProcessTap/AudioRecordingPermission.swift, which doesn't work very reliably.
2
0
762
May ’25
AVPlayerViewController crashes
I have a crash related to playing video in AVPlayerViewController and AVQueuePlayer. I download the video locally from the network and then initialize it using AVAsset and AVPlayerItem. I can't reproduce it locally, but crashes are reported through Firebase Crashlytics, only for users starting with iOS 18.4.0, with this trace:

    Crashed: com.apple.avkit.playerControllerBackgroundQueue
    0  libobjc.A.dylib          0x1458   objc_retain + 16
    1  libobjc.A.dylib          0x1458   objc_retain_x0 + 16
    2  AVKit                    0x12afdc __77-[AVPlayerController currentEnabledAssetTrackForMediaType:completionHandler:]_block_invoke + 108
    3  libdispatch.dylib        0x1aac   _dispatch_call_block_and_release + 32
    4  libdispatch.dylib        0x1b584  _dispatch_client_callout + 16
    5  libdispatch.dylib        0x6560   _dispatch_continuation_pop + 596
    6  libdispatch.dylib        0x5bd4   _dispatch_async_redirect_invoke + 580
    7  libdispatch.dylib        0x13db0  _dispatch_root_queue_drain + 364
    8  libdispatch.dylib        0x1454c  _dispatch_worker_thread2 + 156
    9  libsystem_pthread.dylib  0x4624   _pthread_wqthread + 232
    10 libsystem_pthread.dylib  0x19f8   start_wqthread + 8
2
3
232
May ’25
issue in recording using AVAudio
Hi, in my project I am using AVFoundation to record audio. We use the AVAudioMixerNode method below to record audio packets:

    func installTap(
        onBus bus: AVAudioNodeBus,
        bufferSize: AVAudioFrameCount,
        format: AVAudioFormat?,
        block tapBlock: @escaping AVAudioNodeTapBlock
    )

It works perfectly fine, but in production a small percentage of users hit an issue where, after recording a few packets, recording stops automatically without the audio engine being stopped. Can anyone help explain why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when this issue happens I don't see that log. Also, is there any callback for when the engine stops?
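There is no dedicated "engine stopped" callback that I'm aware of, but a sketch of the usual detection points follows, assuming an AVAudioEngine named engine with the tap already installed; the notification names are real, while the restart logic is only illustrative:

    import AVFoundation

    func observeEngineInterruptions(engine: AVAudioEngine) -> [NSObjectProtocol] {
        let center = NotificationCenter.default
        var tokens: [NSObjectProtocol] = []

        // Fires when the engine's configuration changes (route change, media services reset, etc.).
        tokens.append(center.addObserver(forName: .AVAudioEngineConfigurationChange,
                                         object: engine, queue: .main) { _ in
            if !engine.isRunning {
                try? engine.start()   // illustrative restart; real code should rewire the graph as needed
            }
        })

        // Audio session interruptions (calls, Siri, other apps) also stop recording silently.
        tokens.append(center.addObserver(forName: AVAudioSession.interruptionNotification,
                                         object: AVAudioSession.sharedInstance(), queue: .main) { note in
            guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw),
                  type == .ended, !engine.isRunning else { return }
            try? engine.start()
        })

        return tokens   // keep these alive for the lifetime of the recording
    }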
0
0
123
Apr ’25
AVAudioMixerNode outputVolume range?
According to the header file, the outputVolume property's supported range is 0.0-1.0:

    /*!
        @property   outputVolume
        @abstract   The mixer's output volume.
        @discussion This accesses the mixer's output volume (0.0-1.0, inclusive).
    */
    @property (nonatomic) float outputVolume;

However, when setting the volume to 2.0 the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume? Thanks
0
0
134
Apr ’25
Save MPEG-TS (h264 or HEVC) video stream using AVAssetWriter.
I'm capturing a video stream from a GoPro camera (I demux the UDP MPEG-TS packets) and create CMSampleBuffers from them; this works fine when I display them using AVSampleBufferDisplayLayer. However, when I dump them to disk using AVAssetWriter and then play the file back with AVPlayer, AVPlayer has problems with scrubbing: it cannot render previous frames and needs to go back to key frames. Also, thumbnails generated with AVAssetImageGenerator are mostly distorted and green, even though I set requestedTimeToleranceAfter to be longer than the key-frame interval. When I re-encode the saved video with AVAssetExportSession and play it back, I can scrub the video just fine. Is it because re-transcoding adds additional metadata that enables generating frames when rewinding and scrubbing the video? If so, is there a way to achieve this with AVAssetWriter without much of a time penalty? I need the dump/save operation to be very fast. I also considered the following: instead of demuxing the video and creating CMSampleBuffers, maybe I could directly dump the stream to disk and somehow add moov atoms with timing information. Would this approach work? If so, where can I find information on how to do it? Thank you!
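One thing worth ruling out (a guess, since the demuxing code isn't shown): whether the CMSampleBuffers handed to AVAssetWriter carry correct sync-sample attachments. If every sample looks like a keyframe, or none does, scrubbing and AVAssetImageGenerator misbehave in much the way described. A sketch of marking a non-keyframe, assuming isKeyFrame comes from your own MPEG-TS/NAL-unit parsing:

    import CoreMedia

    // Mark a freshly created CMSampleBuffer as a non-sync (dependent) sample.
    // `isKeyFrame` is assumed to come from your own MPEG-TS / NAL-unit parsing.
    func applySyncInfo(to sampleBuffer: CMSampleBuffer, isKeyFrame: Bool) {
        guard !isKeyFrame,
              let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer,
                                                                             createIfNecessary: true),
              CFArrayGetCount(attachmentsArray) > 0 else { return }

        let attachments = unsafeBitCast(CFArrayGetValueAtIndex(attachmentsArray, 0),
                                        to: CFMutableDictionary.self)
        CFDictionarySetValue(attachments,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_NotSync).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
        CFDictionarySetValue(attachments,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DependsOnOthers).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }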
3
0
171
Apr ’25
AVAssetWriterInputTaggedPixelBufferGroupAdaptor Hanging With Tagged Buffers
We've successfully implemented an AVAssetWriter to produce HLS streams (all code is Objective-C++ for interop with an existing codebase) but are struggling to extend the operation to use tagged buffers. We're starting to wonder if the tagged buffers required for an MV-HEVC signal are fully supported when producing HLS segments in a live-stream setting. We generate a live stream of data using something like:

    UTType *t = [UTType typeWithIdentifier:AVFileTypeMPEG4];
    m_writter = [[AVAssetWriter alloc] initWithContentType:t];

    // - videoHint describes HEVC and width/height
    // - m_videoConfig includes compression settings and, when using MV-HEVC,
    //   the correct keys are added (i.e. kVTCompressionPropertyKey_MVHEVCVideoLayerIDs).
    //   The app was throwing an exception without these, which was
    //   useful to know when we got the configuration right.
    m_video = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                             outputSettings:m_videoConfig
                                           sourceFormatHint:videoHint];

For either path we're producing CVPixelBufferRefs that contain the raw pixel information (i.e. 32BGRA), so we use an adaptor to make that as simple as possible. If we use a single view and an AVAssetWriterInputPixelBufferAdaptor, things work out very well: we produce segments and the delegate is called. However, if we use the AVAssetWriterInputTaggedPixelBufferGroupAdaptor as exampled in the SideBySideToMVHEVC demo project, things go poorly. We create the tagged buffers with something like:

    CMTagCollectionRef collections[2];

    CMTag leftTags[] = {
        CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)0),
        CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_LeftEye)
    };
    CMTagCollectionCreate(kCFAllocatorDefault, leftTags, 2, &(collections[0]));

    CMTag rightTags[] = {
        CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)1),
        CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_RightEye)
    };
    CMTagCollectionCreate(kCFAllocatorDefault, rightTags, 2, &(collections[1]));

    CFArrayRef tagCollections = CFArrayCreate(kCFAllocatorDefault, (const void **)collections, 2, &kCFTypeArrayCallBacks);

    CVPixelBufferRef buffers[] = {*b, *alt};
    CFArrayRef b = CFArrayCreate(kCFAllocatorDefault, (const void **)buffers, 2, &kCFTypeArrayCallBacks);

    CMTaggedBufferGroupRef bufferGroup;
    OSStatus res = CMTaggedBufferGroupCreate(kCFAllocatorDefault, tagCollections, b, &bufferGroup);

Perhaps there's something about this Objective-C code that I've buggered up? Hopefully! Anyway, when I submit this tagged buffer group to the adaptor:

    if (![mvVideoAdapter appendTaggedPixelBufferGroup:bufferGroup withPresentationTime:pts]) {
        // report error...
    }

appending does not raise any errors - eventually it just hangs and we never return from it.

The real issue: so either the delegate assigned to the AVAssetWriter doesn't fire its assetWriter callback, which should produce the segments, or the adaptor hangs on appendTaggedPixelBufferGroup before a segment is ready to be completed (but succeeds for a number of buffer groups before this happens). This is the same delegate class that's assigned to the non-multiview code path when MV-HEVC is turned off, which works perfectly.
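One thing worth ruling out before suspecting the MV-HEVC path itself: whether appends are only issued while the input reports readiness, since the tagged-group adaptor is subject to the same back-pressure rules as the plain pixel-buffer adaptor. A sketch in the same Objective-C++ style as the post; nextTaggedGroup(), nextPTS(), and m_writerQueue are hypothetical placeholders:

    // Only append while the input reports readiness, so a blocked append is easier
    // to distinguish from ordinary back-pressure from the writer.
    [m_video requestMediaDataWhenReadyOnQueue:m_writerQueue usingBlock:^{
        while (m_video.readyForMoreMediaData) {
            CMTaggedBufferGroupRef group = nextTaggedGroup(); // hypothetical producer
            if (group == NULL) {
                [m_video markAsFinished];
                return;
            }
            if (![mvVideoAdapter appendTaggedPixelBufferGroup:group withPresentationTime:nextPTS()]) {
                // report error and bail rather than spinning
                CFRelease(group);
                return;
            }
            CFRelease(group);
        }
    }];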
1
0
114
Apr ’25