Streaming


Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.


Posts under Streaming subtopic

Post · Replies · Boosts · Views · Activity

Processing AVCaptureVideoDataOutput video stream with appleLog and HLG_BT2020 AVCaptureColorSpace input
I’m building a professional camera app where users can customize the video recording format and color grading. In the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method, I handle video frames and use Metal for real-time color grading. This works well when device.activeColorSpace is sRGB or P3, and the results are great. However, when the color space is HLG_BT2020 or appleLog, the MTKTextureLoader.newTexture(cgImage: cgImage, options: options) method throws an error. After researching, I found that in these color spaces the video frame, once converted to a CGImage, has a bit depth greater than 8 bits per channel, which causes the texture creation to fail. I tried converting the CGImage to a lower bit depth so the texture could be created, but the final output image is garbled and not as expected. Is there a solution to this issue?
Replies: 1 · Boosts: 0 · Views: 87 · Apr ’25
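A possible direction for the post above, not an authoritative fix: skip the CGImage round trip entirely and wrap the capture output's CVPixelBuffer planes in Metal textures via CVMetalTextureCache. For appleLog/HLG capture the buffer is typically 10-bit biplanar YCbCr, so each plane gets its own texture and the YCbCr-to-RGB (plus log/HLG handling) happens in the shader. The pixel formats and the biplanar-buffer assumption are mine, not from the original post.

```swift
import AVFoundation
import CoreVideo
import Metal

// Sketch: create per-plane Metal textures straight from the camera's CVPixelBuffer.
// Assumes a biplanar 10-bit YCbCr buffer (e.g. 'x420'); adjust formats for other layouts.
final class CameraTextureProvider {
    private var textureCache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
            return nil
        }
    }

    func textures(from pixelBuffer: CVPixelBuffer) -> (luma: MTLTexture, chroma: MTLTexture)? {
        guard let cache = textureCache else { return nil }

        func planeTexture(_ plane: Int, _ format: MTLPixelFormat) -> MTLTexture? {
            var cvTexture: CVMetalTexture?
            let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, plane)
            let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane)
            let status = CVMetalTextureCacheCreateTextureFromImage(
                kCFAllocatorDefault, cache, pixelBuffer, nil,
                format, width, height, plane, &cvTexture)
            guard status == kCVReturnSuccess, let cvTexture else { return nil }
            return CVMetalTextureGetTexture(cvTexture)
        }

        // 10-bit samples sit in 16-bit containers, so r16/rg16 preserve full precision.
        guard let luma = planeTexture(0, .r16Unorm),
              let chroma = planeTexture(1, .rg16Unorm) else { return nil }
        return (luma, chroma)
    }
}
```

The existing Metal grading pipeline could then sample both textures and do the color conversion in the fragment shader, which avoids the 8-bit CGImage intermediate that appears to be corrupting the output.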
Low-Level Networking on watchOS for Duplex Audio Streaming
I did watch WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low-level networking on watchOS. I’m configuring my audio session and engine as follows:

```swift
private func configureAudioSession(completion: @escaping (Bool) -> Void) {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        // Retrieve sample rate and configure the audio format.
        let sampleRate = audioSession.sampleRate
        print("Active hardware sample rate: \(sampleRate)")
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)

        // Configure the audio engine.
        audioInputNode = audioEngine.inputNode
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat)
        try audioEngine.start()
        completion(true)
    } catch {
        print("Error configuring audio session: \(error.localizedDescription)")
        completion(false)
    }
}

private func setupUDPConnection() {
    let parameters = NWParameters.udp
    parameters.includePeerToPeer = true
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupTCPConnection() {
    let parameters = NWParameters.tcp
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupWebSocketConnection() {
    guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else {
        print("Invalid WebSocket URL")
        return
    }
    let session = URLSession(configuration: .default)
    webSocketTask = session.webSocketTask(with: url)
    webSocketTask?.resume()
    print("WebSocket connection initiated")
    sendAudioToServer()
    receiveDataFromServer()
    sendWebSocketPing(after: 0.6)
}

private func setupNWConnectionHandlers() {
    connection?.stateUpdateHandler = { [weak self] state in
        DispatchQueue.main.async {
            switch state {
            case .ready:
                print("Connected (NWConnection)")
                self?.isConnected = true
                self?.failToConnect = false
                self?.receiveDataFromServer()
                self?.sendAudioToServer()
            case .waiting(let error), .failed(let error):
                print("Connection error: \(error.localizedDescription)")
                DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
                    self?.setupNetwork()
                }
            case .cancelled:
                print("NWConnection cancelled")
                self?.isConnected = false
            default:
                break
            }
        }
    }
    connection?.start(queue: .main)
}
```

Duplex in this context refers to two-way audio transmission: simultaneously recording and sending audio while also receiving and playing back incoming audio, similar to a VoIP/SIP call. The setup works fine on the simulator, which suggests that the core logic is correct. However, since the simulator doesn’t fully replicate watchOS hardware behavior, especially for audio sessions and networking, issues might arise when running on a real device. The problem likely lies in the Watch’s actual hardware limitations, permission constraints, or specific audio session configurations. I am reaching out to seek further assistance with the challenges I’ve been experiencing in establishing a UDP, TCP, and WebSocket connection on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I am still encountering difficulties. From what I can see, your implementation is focused on streaming audio playback with the server.
In my case, I’m looking for a slightly different approach: I want to capture audio and send buffers of a specific size to the server while playing audio simultaneously, essentially achieving full-duplex streaming similar to a VoIP call. Additionally, I’d like to ensure that if no external audio route is connected, the Apple Watch speaker is used by default. Any thoughts or insights on adapting this setup for those requirements would be very welcome.
Replies: 1 · Boosts: 0 · Views: 82 · Apr ’25
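For the full-duplex part of the question above, a rough sketch of the capture/playback loop (not tested on watchOS hardware): a tap on the engine's input node sends raw samples over the existing NWConnection, while a receive loop wraps incoming datagrams in AVAudioPCMBuffers and schedules them on the player node. Mono Float32 framing and matching sample rates on both ends are assumptions on my part.

```swift
import AVFoundation
import Network

// Sketch of the duplex loop. Assumes mono Float32 PCM at the same sample rate
// on both ends, and that `connection` is the already-established NWConnection.
func startDuplexStreaming(engine: AVAudioEngine,
                          playerNode: AVAudioPlayerNode,
                          connection: NWConnection) {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Capture side: tap the microphone and ship each buffer's raw samples upstream.
    input.installTap(onBus: 0, bufferSize: 2048, format: format) { buffer, _ in
        guard let channel = buffer.floatChannelData?[0] else { return }
        let data = Data(bytes: channel,
                        count: Int(buffer.frameLength) * MemoryLayout<Float>.size)
        connection.send(content: data, completion: .contentProcessed({ _ in }))
    }

    // Playback side: wrap each incoming datagram in a PCM buffer and schedule it.
    func receiveNext() {
        connection.receiveMessage { data, _, _, error in
            if let data = data, !data.isEmpty,
               let buffer = AVAudioPCMBuffer(
                   pcmFormat: format,
                   frameCapacity: AVAudioFrameCount(data.count / MemoryLayout<Float>.size)) {
                buffer.frameLength = buffer.frameCapacity
                data.withUnsafeBytes { raw in
                    if let src = raw.bindMemory(to: Float.self).baseAddress {
                        buffer.floatChannelData?[0].update(from: src, count: Int(buffer.frameLength))
                    }
                }
                playerNode.scheduleBuffer(buffer, completionHandler: nil)
            }
            if error == nil { receiveNext() }
        }
    }
    receiveNext()
    playerNode.play()
}
```

For the speaker-by-default requirement, AVAudioSession's overrideOutputAudioPort(.speaker) after activation is the usual lever on iOS; whether watchOS honors it the same way is something to verify on hardware.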
Issues with "AVMetricEventStreamPublisher Discover Media Performance Metrics in AVFoundation" Example Code
Hi everyone! I’ve been working with AVFoundation and trying to use the AVMetricEventStreamPublisher to discover media performance metrics, as described in the Apple documentation: https://developer.apple.com/cn/videos/play/wwdc2024/10113/?time=508 However, when following the example code, I’m not getting the expected results. The performance metrics for both audio and video don’t seem to be captured properly. Has anyone successfully used this example code? If so, could you share your experience or any solutions you’ve found? Any tips or insights would be greatly appreciated. Thanks in advance!

P.S. The example code:

```objc
AVPlayerItem *item = ...
AVMetricEventStream *eventStream = [AVMetricEventStream eventStream];
id subscriber = [[MyMetricSubscriber alloc] init];
[eventStream setSubscriber:subscriber queue:mySerialQueue];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemLikelyToKeepUpEvent class]];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemPlaybackSummaryEvent class]];
[eventStream addPublisher:item];
```
Replies: 1 · Boosts: 0 · Views: 63 · Apr ’25
AVAssetWriterInputTaggedPixelBufferGroupAdaptor Hanging With Tagged Buffers
We've successfully implemented an AVAssetWriter to produce HLS streams (all code is Objective-C++ for interop with an existing codebase), but we're struggling to extend the operation to use tagged buffers. We're starting to wonder if the tagged buffers required for an MV-HEVC signal are fully supported when producing HLS segments in a live-stream setting. We generate a live stream of data using something like:

```objc
UTType *t = [UTType typeWithIdentifier:AVFileTypeMPEG4];
m_writter = [[AVAssetWriter alloc] initWithContentType:t];

// - videoHint describes HEVC and width/height
// - m_videoConfig includes compression settings and, when using MV-HEVC,
//   the correct keys are added (i.e. kVTCompressionPropertyKey_MVHEVCVideoLayerIDs)
//   The app was throwing an exception without these, which was
//   useful to know when we got the configuration right.
m_video = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                         outputSettings:m_videoConfig
                                       sourceFormatHint:videoHint];
```

For either path we're producing CVPixelBufferRefs that contain the raw pixel information (i.e. 32BGRA), so we use an adaptor to make that as simple as possible. If we use a single view and an AVAssetWriterInputPixelBufferAdaptor, things work out very well: we produce segments and the delegate is called. However, if we use the AVAssetWriterInputTaggedPixelBufferGroupAdaptor as shown in the SideBySideToMVHEVC demo project, things go poorly. We create the tagged buffers with something like:

```objc
CMTagCollectionRef collections[2];

CMTag leftTags[] = {
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)0),
    CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_LeftEye)
};
CMTagCollectionCreate(kCFAllocatorDefault, leftTags, 2, &(collections[0]));

CMTag rightTags[] = {
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, (int64_t)1),
    CMTagMakeWithSInt64Value(kCMTagCategory_StereoView, kCMStereoView_RightEye)
};
CMTagCollectionCreate(kCFAllocatorDefault, rightTags, 2, &(collections[1]));

CFArrayRef tagCollections = CFArrayCreate(kCFAllocatorDefault,
                                          (const void **)collections, 2,
                                          &kCFTypeArrayCallBacks);

CVPixelBufferRef buffers[] = {*b, *alt};
CFArrayRef bufferArray = CFArrayCreate(kCFAllocatorDefault,
                                       (const void **)buffers, 2,
                                       &kCFTypeArrayCallBacks);

CMTaggedBufferGroupRef bufferGroup;
OSStatus res = CMTaggedBufferGroupCreate(kCFAllocatorDefault,
                                         tagCollections, bufferArray, &bufferGroup);
```

Perhaps there's something about this Objective-C code that I've buggered up? Hopefully! Anyway, when I submit this tagged buffer group to the adaptor:

```objc
if (![mvVideoAdapter appendTaggedPixelBufferGroup:bufferGroup withPresentationTime:pts]) {
    // report error...
}
```

Appending does not raise any errors; eventually it just hangs and we never return from the call.

Real issue: so either the delegate assigned to the AVAssetWriter doesn't fire its assetWriter callback, which should produce the segments, or the adaptor hangs on appendTaggedPixelBufferGroup before a segment is ready to be completed (but it succeeds for a number of buffer groups before this happens). This is the same delegate class that's assigned to the non-multi-view code path when MV-HEVC is turned off, which works perfectly.
Replies: 1 · Boosts: 0 · Views: 62 · Apr ’25
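One thing worth ruling out for the hang described above: appending without honoring the writer input's backpressure. A minimal Swift sketch of gating appends on isReadyForMoreMediaData via requestMediaDataWhenReady; the nextGroup closure is hypothetical and would call the tagged-buffer-group adaptor's append, returning false when the source is drained. Whether this is actually what causes the hang is an assumption.

```swift
import AVFoundation

// Sketch: only append while the input reports it can accept more data.
// `nextGroup` is a hypothetical closure that builds and appends one tagged
// buffer group via the adaptor, returning false when there is nothing left.
func drive(input: AVAssetWriterInput,
           queue: DispatchQueue,
           nextGroup: @escaping () -> Bool) {
    input.requestMediaDataWhenReady(on: queue) {
        while input.isReadyForMoreMediaData {
            if !nextGroup() {
                input.markAsFinished()
                break
            }
        }
    }
}
```

Without this gating, an append can block once the writer's internal buffers fill while a segment is waiting to be emitted, which would look exactly like a hang that only appears after a number of successful appends.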
FairPlay HLS Downloaded Asset Fails to Play on First Attempt When Offline, Works on Retry
Hello, we're seeing an intermittent issue when playing back FairPlay-protected HLS downloads while the device is offline.

Assets are downloaded using AVAggregateAssetDownloadTask with FairPlay protection. After download, asset.assetCache.isPlayableOffline == true. On first playback attempt (offline), ~8% of downloads fail. Retrying playback always works. We recreate the asset and player on each attempt.

During the playback setup, we try to load variants via:

```swift
try await asset.load(.variants)
```

This call sometimes fails with:

```
Error Domain=NSURLErrorDomain Code=-1009 “The Internet connection appears to be offline.” UserInfo={NSUnderlyingError=0x105654a00 {Error Domain=NSURLErrorDomain Code=-1009 “The Internet connection appears to be offline.” UserInfo={NSDescription=The Internet connection appears to be offline.}}, NSErrorFailingURLStringKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, NSErrorFailingURLKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, NSURL=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, AVErrorFailedDependenciesKey=( “assetProperty_HLSAlternates” ), NSLocalizedDescription=The Internet connection appears to be offline.}
```

This variant load is used to determine available audio tracks, check for Dolby support, and apply user language preferences. After this step, the AVPlayerItem also fails via Combine’s publisher for .status. However, retrying the entire process immediately after (same offline conditions, same asset path, new AVURLAsset) results in successful playback.

Assets are represented using the following class:

```swift
public class DownloadedAsset: AVURLAsset {
    public let id: String
    public let localFileUrl: URL
    public let fairplayLicenseUrlString: String?
    public let drmToken: String?

    var isProtected: Bool {
        return fairplayLicenseUrlString != nil
    }

    public init(id: String,
                localFileUrl: URL,
                fairplayLicenseUrlString: String?,
                drmToken: String?) {
        self.id = id
        self.localFileUrl = localFileUrl
        self.fairplayLicenseUrlString = fairplayLicenseUrlString
        self.drmToken = drmToken
        super.init(url: localFileUrl, options: nil)
    }
}
```

We use user-selected quality levels to control bitrate and multichannel (e.g. Dolby 5.1) downloads:

```swift
let downloadQuality = UserDefaults.standard.downloadVideoQuality
let bitrate: Int
let shouldDownloadMultichannelTracks: Bool

switch downloadQuality {
case .dataSaver:
    shouldDownloadMultichannelTracks = false
    bitrate = 596564
case .standard:
    shouldDownloadMultichannelTracks = false
    bitrate = 1503844
case .best:
    shouldDownloadMultichannelTracks = true
    bitrate = 7038970
}

var selections = multichannelIdentifiedMediaSelections
if !shouldDownloadMultichannelTracks {
    selections = selections.filter { !$0.isMultichannel }
}

let task = session.aggregateAssetDownloadTask(
    with: asset,
    mediaSelections: selections.map { $0.mediaSelection },
    assetTitle: title,
    assetArtworkData: nil,
    options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: bitrate]
)
```

Seen on devices running iOS 16, iOS 17, and iOS 18.

What could cause the initial failure of an otherwise valid, offline-ready FairPlay HLS asset? Could .load(.variants) internally trigger a failed network resolution, even when offline? Is there an internal caching or initialization behavior in AVFoundation that might explain why the second attempt works? Any guidance would be appreciated.
Replies: 1 · Boosts: 0 · Views: 136 · Jun ’25
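Not an explanation of the root cause above, but a mitigation sketch that mirrors what the poster observed (a second attempt with a fresh AVURLAsset succeeds under the same offline conditions): retry the variants load against a newly created asset before surfacing the error. The makeAsset closure and the 500 ms delay are assumptions of this sketch.

```swift
import AVFoundation

// Sketch: retry `load(.variants)` with a freshly created asset, since the
// second attempt reportedly succeeds even while offline.
func loadVariantsWithRetry(makeAsset: () -> AVURLAsset,
                           attempts: Int = 2) async throws -> (AVURLAsset, [AVAssetVariant]) {
    var lastError: Error?
    for _ in 0..<attempts {
        let asset = makeAsset()
        do {
            let variants = try await asset.load(.variants)
            return (asset, variants)
        } catch {
            lastError = error
            // Brief pause before retrying; the interval is arbitrary.
            try? await Task.sleep(nanoseconds: 500_000_000)
        }
    }
    throw lastError ?? URLError(.unknown)
}
```

The returned asset would then be used to build the AVPlayerItem, so that playback and the variant inspection are done on the instance that actually loaded successfully.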
FPS Certificate Re-issuance and Validity of Existing Certificate
Keywords: FairPlay, FPS Certificate, DRM, FairPlay Streaming, license server

Hi all, we are currently using FairPlay Streaming in production and already have an FPS certificate in place. However, the passphrase for the existing FPS certificate has unfortunately been lost. We are now considering reissuing a new FPS certificate, and I would like to confirm a few points before proceeding:

1️⃣ If we reissue a new FPS certificate, will the existing certificate be automatically revoked? Or will it remain valid until its original expiration date?
2️⃣ Is it possible to have both the newly issued and the existing certificates valid at the same time? In other words, can we serve DRM licenses using either certificate depending on the packaging or client?
3️⃣ Are there any caveats or best practices we should be aware of when reissuing an FPS certificate? For example, would existing packaged content become unplayable, or would CDN/packaging server configurations need to be updated carefully?

Since this affects our production environment, we would like to minimize any service disruption or compatibility issues. Unfortunately, when we contacted Apple support directly, we were advised to post this question here in the Forums for additional guidance. Any advice or experiences would be greatly appreciated! Thank you in advance.
Replies: 1 · Boosts: 0 · Views: 119 · Jun ’25
Can I Fade Out Track Volume Before End Using ApplicationMusicPlayer?
I’m building a music app using Apple Music streaming via ApplicationMusicPlayer. My goal is to decrease the volume of the current song during the last 10 seconds, and when the next track begins, restore the volume to its normal level. I know that ApplicationMusicPlayer doesn’t expose a volume API, and I want to avoid triggering the system volume HUD.

✅ Using Apple Music streaming (not local files)
❓ Is it possible to implement per-track fade-out/fade-in logic with ApplicationMusicPlayer?

Appreciate any clarification or official guidance!
Replies: 1 · Boosts: 0 · Views: 70 · Jun ’25
Received the "Profile for Dolby Vision MUST be Profile 5 or 10" error for the channel that has only dolby vision profile 5
While validating a Dolby Vision Profile 5 playlist in CMAF format (with segments in MP4), the Media Stream Validator reported the error quoted in the title in the Must-Fix Issues list. However, the playlist correctly specifies Dolby Vision Profile 5 in both the EXT-X-STREAM-INF and EXT-X-I-FRAME-STREAM-INF tags.

Playlist:

```
#EXTM3U
#EXT-X-VERSION:8
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-ec3",LANGUAGE="und",NAME="Undetermined",AUTOSELECT=YES,CHANNELS="6",URI="var14711339/aud1257/playlist.m3u8?device_profile=hls&seg_size=6&cmaf=1"
#EXT-X-STREAM-INF:BANDWIDTH=14680000,AVERAGE-BANDWIDTH=14676380,VIDEO-RANGE=PQ,CODECS="ec-3,dvh1.05.06",RESOLUTION=3840x2160,AUDIO="audio-ec3"
var14711339/vid/playlist.m3u8?device_profile=hls&seg_size=6&cmaf=1
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=1419881,URI="trk14711339/playlist.m3u8?device_profile=hls&cmaf=1",VIDEO-RANGE=PQ,CODECS="dvh1.05.06",RESOLUTION=3840x2160
```

Could you please review this and clarify:

Why is the Media Stream Validator reporting this error, even though the playlist correctly includes Dolby Vision Profile 5 parameters in CMAF format?
Why is this error not reported when using a playlist with TS segments instead of CMAF (MP4)?
Replies: 1 · Boosts: 0 · Views: 74 · Jul ’25
Will the new Automix feature in iOS 26 be available for third-party apps using MusicKit?
Hi everyone, we’re currently developing a music-based app using MusicKit, and we recently noticed that the iOS 26 beta introduces a new “Automix” feature in the Apple Music app. This enables seamless DJ-style transitions between songs, beyond the standard crossfade functionality. We’re trying to understand:

Will this Automix feature be accessible to third-party apps that use MusicKit?
If not available in the initial iOS 26 release, is there a plan to expose it through public APIs in a future update?
Is there any technical documentation, WWDC session, or roadmap info regarding Automix support via MusicKit?

This functionality would be a significant enhancement for our app, especially for intelligent audio transitions and curated playlists. Thanks.
Replies: 1 · Boosts: 3 · Views: 463 · Jul ’25
Issue with Low-Latency HLS Playback Using AVAssetResourceLoaderDelegate
Hi, I am writing to seek any help or workaround regarding an issue I have encountered while implementing Low-Latency HLS playback using AVAssetResourceLoaderDelegate.

I have been successfully loading playlists during HLS live playback using AVAssetResourceLoaderDelegate. However, after introducing Low-Latency HLS, I have run into a problem. When AVPlayer loads low-latency content natively, everything works fine. But when I use the delegate for loading, I encounter the following error from AVPlayer’s status observer:

CoreMediaErrorDomain -15410 Low Latency: Server must support http2 ECN and SACK

Playback does not stop, so at first it seems there is no problem, but a very critical part is missing: playback does not achieve the expected low latency and behaves like standard HLS. Additionally, this behavior only occurs on iOS 16 devices and simulators. On iOS 17 simulators and devices, the error message does not appear, and the latency remains low as expected. Therefore, I suspect that there might be some misjudgment in the verification process within the internal implementation of AVPlayer.

Since our app needs to support iOS 16, I would appreciate any solutions, methods to try, or workarounds that you could share regarding this issue. Thank you.
Replies: 2 · Boosts: 0 · Views: 808 · Oct ’24
macOS Sequoia 'Cannot Decode' HLS Video
I use AVPlayer to play HLS video successfully on macOS Sonoma, but I encountered this error on macOS Sequoia. Please help me:

Error Domain=AVFoundationErrorDomain Code=-11833 'Cannot Decode' UserInfo={NSUnderlyingError=0x600001e57330 {Error Domain=CoreMediaErrorDomain Code=-12906 '(null)'}, NSLocalizedFailureReason=The decoder required for this media cannot be found., AVErrorMediaTypeKey=vide, NSLocalizedDescription=Cannot Decode}

Thanks!
Replies: 2 · Boosts: 1 · Views: 602 · Mar ’25
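Not a fix for the post above, but a quick diagnostic sketch: if the stream's video is HEVC (or Dolby Vision carried in HEVC), checking whether the machine reports hardware decode support can help narrow down the -11833/-12906 "decoder not found" combination. The assumption that the content is HEVC is mine, not stated in the post.

```swift
import CoreMedia
import VideoToolbox

// Sketch: log whether hardware HEVC decode is reported as available on this Mac.
func logHEVCDecodeSupport() {
    let supported = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
    print("Hardware HEVC decode supported: \(supported)")
}
```

If this reports true on Sonoma and false on the Sequoia machine, the problem is more likely environmental (codec/driver availability) than an AVPlayer regression.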
Get the channel count for an audio stream in AVPlayer
In an m3u8 manifest, audio EXT-X-MEDIA tags usually carry a CHANNELS attribute with the audio channel count, like so:

#EXT-X-MEDIA:TYPE=AUDIO,URI="audio_clear_eng_stereo.m3u8",GROUP-ID="default-audio-group",LANGUAGE="en",NAME="stream_5",AUTOSELECT=YES,CHANNELS="2"

Is it possible to get this info from AVPlayer, AVMediaSelectionOption, or some related API?
Replies: 2 · Boosts: 0 · Views: 576 · Dec ’24
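For the question above, one hedged approach: read the AudioStreamBasicDescription from the current item's audio track format descriptions. This reflects the rendition AVPlayer has actually loaded rather than the manifest's CHANNELS attribute, and whether the track information is exposed for every HLS stream is not guaranteed.

```swift
import AVFoundation
import CoreMedia

// Sketch: derive the channel count from the currently loaded audio track.
func currentAudioChannelCount(for item: AVPlayerItem) async -> Int? {
    for track in item.tracks {
        guard let assetTrack = track.assetTrack, assetTrack.mediaType == .audio else { continue }
        guard let descriptions = try? await assetTrack.load(.formatDescriptions) else { continue }
        for description in descriptions {
            if let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description) {
                return Int(asbd.pointee.mChannelsPerFrame)
            }
        }
    }
    return nil
}
```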
ScreenCaptureKit crash
We are having issues with ScreenCaptureKit. Our use case is the following: we have multiple applications that each start a stream capture using ScreenCaptureKit. It works fine when just one application is streaming, but when starting multiple streams continuously, all streams stop or crash without ScreenCaptureKit reporting an error back. Restarting replayd for the user will allow us to start streaming again, if the streaming applications are restarted too.

We have built a small test program that we have tested on different Macs, running different versions of macOS, with identical results. The test program just calls the getShareableContentExcludingDesktopWindows function multiple times, since this was the simplest way to show the problem.

Our test setup is the following:
Mac Mini M2, macOS 13.6
MacBook Pro M3, macOS 14.4
MacBook Pro M3, macOS 15.1

Code (main.m):

```objc
#import <Foundation/Foundation.h>
#import <ScreenCaptureKit/ScreenCaptureKit.h>

@interface Runner : NSObject
@property(atomic, assign) BOOL keepRunning;
@property(atomic, retain) SCShareableContent* availableWindows;
-(void) updateAvailableWindows;
-(void) notif:(NSNotification*) aNotif;
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSRunLoop* loo = [NSRunLoop mainRunLoop];
        Runner* r = [[Runner alloc] init];
        [r updateAvailableWindows];
        while (r.keepRunning)
            [loo runUntilDate:[NSDate distantFuture]];
        NSLog(@"Program exit");
    }
    return 0;
}

@implementation Runner

-(instancetype) init {
    self = [super init];
    if (self) {
        _keepRunning = YES;
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasNotFound" object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasFound" object:nil];
    }
    return self;
}

-(void) updateAvailableWindows {
    @autoreleasepool {
        NSLog(@"begin");
        [SCShareableContent getShareableContentExcludingDesktopWindows:NO
                                                   onScreenWindowsOnly:YES
                                                     completionHandler:^(SCShareableContent* content, NSError* error) {
            NSLog(@"running");
            if (!error) {
                self.availableWindows = content;
                for (__unused SCWindow* aWin in self.availableWindows.windows) {
                    [NSThread sleepForTimeInterval:0.01];
                }
                [[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasFound" object:self.availableWindows];
            }
            else {
                [[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasNotFound" object:nil];
            }
        }];
    }
}

-(void) notif:(NSNotification*) aNotif {
    if (aNotif.object)
        [self performSelectorOnMainThread:@selector(updateAvailableWindows) withObject:nil waitUntilDone:NO];
    else {
        NSLog(@"Not Found");
        self.keepRunning = NO;
    }
}

@end
```

How to replicate: compile and run the program from multiple terminal windows (the terminal must be granted screen recording permission) and notice that the output stops when replayd stops responding (our assumption). Restarting the application does nothing until replayd is also restarted. Running the application in Xcode gives the following error:

[ERROR] -[RPDaemonProxy fetchShareableContentWithOption:windowID:currentProcess:withCompletionHandler:]_block_invoke:902 error: 4097

This error is not something we have been able to detect in our applications, and since the only workaround is restarting replayd and the applications, catching the error would help us.
Replies: 2 · Boosts: 0 · Views: 598 · Jan ’25
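Not a fix for the replayd issue above, but a Swift sketch of a mitigation worth trying while waiting for a proper answer: serialize the shareable-content queries and back off when the completion handler returns an error (such as the 4097 XPC error) instead of immediately issuing the next request. The delay values are arbitrary choices for the sketch.

```swift
import Dispatch
import ScreenCaptureKit

// Sketch: throttle and serialize SCShareableContent queries so a failing
// replayd is not hammered with back-to-back requests.
final class ShareableContentFetcher {
    private let queue = DispatchQueue(label: "shareable-content.fetch")

    func fetch(retryDelay: TimeInterval = 2.0,
               completion: @escaping (SCShareableContent?) -> Void) {
        queue.async {
            SCShareableContent.getExcludingDesktopWindows(false, onScreenWindowsOnly: true) { content, error in
                if let error = error {
                    // An XPC-level failure suggests replayd is unavailable;
                    // back off with an increasing delay instead of retrying in a tight loop.
                    print("ScreenCaptureKit error: \(error)")
                    self.queue.asyncAfter(deadline: .now() + retryDelay) {
                        self.fetch(retryDelay: min(retryDelay * 2, 30), completion: completion)
                    }
                } else {
                    completion(content)
                }
            }
        }
    }
}
```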
Problem with UVC Device Access on visionOS
No external cameras show up in the app on visionOS. We use this sample code as a basis for our tests: https://developer.apple.com/documentation/visionos/displaying-video-from-connected-devices

We also received the needed entitlement from Apple, but every camera we tried so far does not show up on visionOS. We tried the following devices and hubs:
Insta360 X4
Somikon Endoscope Camera: USB HD Endoscope Camera
EMEET Full HD Webcam - C960
BENFEI Video/Audio Capture Card, 4K HDMI to USB C/A
Logitech C920 HD PRO Webcam
Anker PowerConf C200
Insta360 GO 3S
Anker 341 USB-C Hub
UGREEN Revodok Pro 10Gbps USB-C Hub

All Vision Pro devices we tried run visionOS 2.3. When trying the same code on iPad, we can actually use external cameras.

Steps to reproduce: start the app on a Vision Pro device and connect an external camera. The connected camera does not show up in the dropdown.

Development environment: Xcode 16.2, macOS 15.3
Run-time configuration: iOS 18.3, visionOS 2.3
Replies: 2 · Boosts: 0 · Views: 578 · Feb ’25
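As a sanity check for the post above (assuming the external-camera entitlement received from Apple is actually present in the built app's profile), a small sketch that lists whatever external cameras AVFoundation discovery can see. If this array is empty on the Vision Pro, the problem sits below the sample code's UI layer rather than in the dropdown.

```swift
import AVFoundation

// Sketch: print whatever external cameras AVCaptureDevice discovery reports.
func logExternalCameras() {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    if discovery.devices.isEmpty {
        print("No external cameras discovered")
    }
    for device in discovery.devices {
        print("External camera: \(device.localizedName) (\(device.uniqueID))")
    }
}
```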