Hi,
After updating to iOS 26, our app is experiencing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier.
Error:
Domain [CoreMediaErrorDomain]
Code [-15628]
Description [The operation couldn’t be completed.]
Underlying Error Domain [(null)]
Code [0]
Description [(null)]
Environment:
iOS version: iOS 26
Stream type: HLS (m3u8) with segment (.ts) files
Observed behaviour:
We don’t have concrete steps to reproduce the issue, but so far, we have observed that this error tends to occur under low network conditions.
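Since the surface-level error is opaque, it can help to dump the HLS error log when playback fails; the entries there usually carry an HTTP status code and the failing segment URI. This is only a diagnostic sketch (not a fix), attaching observers to an existing AVPlayerItem:

```swift
import AVFoundation

// Sketch: attach diagnostics to the failing AVPlayerItem so that when
// CoreMediaErrorDomain -15628 fires under poor network conditions we can
// inspect the HLS error log instead of just the generic error.
func attachDiagnostics(to item: AVPlayerItem) {
    NotificationCenter.default.addObserver(
        forName: AVPlayerItem.newErrorLogEntryNotification,
        object: item,
        queue: .main
    ) { _ in
        guard let log = item.errorLog() else { return }
        for event in log.events {
            // errorDomain / errorStatusCode often carry more detail than
            // "The operation couldn't be completed."
            print("HLS error: \(event.errorDomain) \(event.errorStatusCode) " +
                  "\(event.errorComment ?? "-") uri=\(event.uri ?? "-")")
        }
    }
    NotificationCenter.default.addObserver(
        forName: AVPlayerItem.failedToPlayToEndTimeNotification,
        object: item,
        queue: .main
    ) { note in
        let err = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
        print("Playback failed: \(String(describing: err))")
    }
}
```

Comparing these log entries between iOS 18 and iOS 26 devices on the same throttled network may show whether the failure is a segment fetch timeout or something else.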
For devices that are still on iOS 17, playing FairPlay-encrypted content still works fine. For devices I've upgraded to iOS 26, playing the same content in the same app no longer works. I can advance and see the stream frames by tapping +10 or scrubbing, so I know the content is being decrypted, but tapping the play button of AVPlayer for an AVPlayerItem now does nothing on iOS 26. Is this a breaking change, or is there a stricter requirement that I now have to implement?
Just updated my computer, phone, and dev tools to the latest versions of everything. Now when I run my app in a previously-working simulator (iPhone 16 w. iOS 18.5) I get:
Failed retrieving MusicKit tokens: fetching the developer token is not supported in the simulator when running on this version of macOS; please upgrade your Mac to macOS Ventura.
Also:
<ICCloudServiceStatusMonitor: 0x600003320e60>: Invoking 1 completion handler for MusicKit tokens. error=<ICError.DeveloperTokenFetchingFailed (-8200) "Failed to fetch media token from <AMSMediaTokenService: 0x6000029049a0>." { underlyingErrors: [ <AMSErrorDomain.300 "Token request encoding failed The token request encoder finished with an error." { userInfo: { AMSDescription : "Token request encoding failed", AMSFailureReason : "The token request encoder finished with an error." }; underlyingErrors: [ <AMSErrorDomain.5 "Anisette Failed Platform not supported" { userInfo: { AMSDescription : "Anisette Failed", AMSFailureReason : "Platform not supported" };
Anybody know what gives here? The Ventura message is absurd because I'm on Tahoe 26.1. The same code works on a physical phone running iOS 26.
Macs do not support Multi-Stream Transport (MST), which prevents using a single DisplayPort or USB-C port to daisy-chain multiple external monitors in extended display mode. As a result, the virtual multiple-display modes do not work correctly on a Mac.
Topic:
Media Technologies
SubTopic:
Streaming
For iOS 17 we've had no problem playing Apple FairPlay encrypted content with keys delivered from our key server running on FairPlay Streaming Server SDK 5.1 and subsequently FairPlay Streaming Server SDK 26. The app is built and deployed using Xcode Version 26.1.1 (17B100) with no changes to the code, and, as expected, the content continued to be successfully decrypted and played (so far so good). However, as soon as a device was updated to iOS 26, that device would no longer play the encrypted content.
Devices remaining on iOS 17 continue to work normally, and the debugging logs are a sanity check that proves it. Is anyone else experiencing this issue?
Here's the code (you should be able to drop it into a fresh iOS Xcode project and provide a server url, content url and certificate).
We are experiencing an issue related to DepthData from the TrueDepth camera on a specific device.
On December 1, we tested with the complainant's device (iPhone 14, iOS 26.0.1) and observed that the depth image is received with empty values.
However, the same implementation works normally on iPhone 17 Pro Max (iOS 26.1) and iPhone 13 Pro Max (iOS 26.0.1), where depth data is delivered correctly.
In the problematic case:
TrueDepth camera is active
Face ID works normally
The app receives a DepthData object, but all values are empty (0), not nil
Because the DepthData object is not nil, this makes it difficult to detect the issue through software fallback handling.
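One way to make the all-zero case detectable in software is to scan the depth map itself. Below is a hedged sketch (not an Apple-documented check): it converts the delivered AVDepthData to Float32 and returns true if every sample is zero or non-finite.

```swift
import AVFoundation

// Sketch: detect the "non-nil but all-zero" depth case by scanning the
// depth map's Float32 values. Conversion handles the case where the
// camera delivers a different depth/disparity pixel format.
func depthMapIsAllZero(_ depthData: AVDepthData) -> Bool {
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = converted.depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(map) else { return true }
    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(map)
    for row in 0..<height {
        let rowPtr = base.advanced(by: row * bytesPerRow)
            .assumingMemoryBound(to: Float32.self)
        for col in 0..<width where rowPtr[col] != 0 && rowPtr[col].isFinite {
            return false // found at least one real depth sample
        }
    }
    return true
}
```

Running this per frame (or on a sampled subset of rows for performance) would at least let the app fall back gracefully on the affected device.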
We developed the feature with reference to the following Apple sample:
https://developer.apple.com/documentation/AVFoundation/streaming-depth-data-from-the-truedepth-camera
We would like to ask:
Are there known cases where Face ID functions normally but DepthData from the TrueDepth camera is returned as empty values?
If so, is there a recommended approach for identifying or handling this situation?
Any guidance from Apple engineers or the community would be greatly appreciated.
Thank you.
We are trying to port our code to Apple TV on tvOS 17.6. While running the sample, we get the error CoreMediaErrorDomain -42681. We understand that this error occurs when the FairPlay license (CKC) returned by the server contains incompatible or malformed version information that the iOS/tvOS FairPlay CDM cannot parse.
Can you please specify which FairPlay version number tvOS 17.6 expects, or which fields are mandatory in the FPS version metadata?
We have the application 'ADS Smart', a companion application for our ADS Dashcam. We offer a feature that lets users stream the live footage of the dashcam cameras through the app. Currently, we are experiencing a delay of 30+ seconds before the live stream appears, i.e., the first frame of the live footage takes around 30+ seconds to display in the app. We are using the MobileVLCKit library to stream the videos in the app.
The current flow of the code,
Flutter triggers the native playback via a method channel
The Dart side calls the iOS method channel <identifier_name>/ts_player with method playTSFromURL passing:
url(e.g rtsp://.... for live),
playerId
viewId (stable ID used to host native UI)
showControls
optional localIp
AppDelegate receives the call and prepares networking
Entry point: AppDelegate.tsChannel handler for "playTSFromURL" in AppDelegate.swift.
It resolves the Wi‑Fi interface and local IP if possible:
Sets VLC_SOURCE_ADDRESS to the Wi‑Fi IP (when available) to prefer Wi‑Fi for the stream.
Uses NWPathMonitor and direct interface inspection to find the Wi‑Fi interface (e.g., en0) and IP.
Kicks off best-effort route priming to the dashcam IP/ports (non-blocking), see establishWiFiRoutePriority.
AppDelegate chooses the right player implementation
createPlayerForURL(_: ) decides:
RTSP (rtsp://..) --> use the VLCKit-backed player (class TSStreamPlayer, which provides a VLC-backed video player for iOS, handling playback of Transport Stream (TS) URLs with strict main-thread UI updates, view safety, and stream management, using MobileVLCKit)
.ts files --> use VLCKit-based player for playing already recorded videos in the app.
If the selected player supports extras (e.g. TSStreamPlayerExtras), it sets
LocalIP (if resolved)
Wi-fi interface name
AppDelegate creates the native 'platform view' container and overlay
platformView(for:parent:showControls:):
Creates a container UIView attached to the Flutter root view
Adds a dedicated child videoHost[viewId], the host UIView for rendering video.
If showControls == true, adds TSPlayerControlsOverlay over the video and wires overlay callbacks back to Flutter via controlChannel (/player_controls)
If showControls == false, adds a minimal back button and wires it to onGalleryBack.
The player starts playback inside the host view
Calls player.playTSFromURL(urlString, in: host) { success, error in ... } on the main thread.
For RTSP/TS streams: this is handled by TSStreamPlayer(VLCKit).
Success/failure is reported back to Flutter
The completion closure invoked in step 5 returns true on first real playback or an error message on failure.
The method channel result responds:
true --> Flutter knows playback started
FlutterError -> Flutter can show an error
Stopping and cleanup
"stop" on tsChannel stops and disposes the player(s).
"removePlatformView" removes the overlay, back button, the host, and the container, and disposes any remaining players.
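Steps 1-2 of the flow above can be sketched roughly as follows. The channel name prefix is elided in the post, so "app/ts_player" here is a placeholder, and the `play` closure stands in for the poster's TSStreamPlayer dispatch; argument keys follow the list above.

```swift
import Flutter

// Rough sketch of the "playTSFromURL" method-channel handler described above.
// "app/ts_player" is a placeholder channel name; the real prefix is elided
// in the original post.
func registerTSChannel(messenger: FlutterBinaryMessenger,
                       play: @escaping (_ url: String, _ playerId: String,
                                        _ viewId: String, _ showControls: Bool,
                                        _ localIp: String?) -> Bool) {
    let channel = FlutterMethodChannel(name: "app/ts_player", binaryMessenger: messenger)
    channel.setMethodCallHandler { call, result in
        guard call.method == "playTSFromURL",
              let args = call.arguments as? [String: Any],
              let url = args["url"] as? String,
              let playerId = args["playerId"] as? String,
              let viewId = args["viewId"] as? String else {
            result(FlutterError(code: "bad_args", message: "Missing arguments", details: nil))
            return
        }
        let showControls = args["showControls"] as? Bool ?? false
        let localIp = args["localIp"] as? String
        // Hand off to the native player; report success/failure back to Flutter.
        result(play(url, playerId, viewId, showControls, localIp))
    }
}
```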
I am attaching the logs of the app while running.
The actual issue is that when the iOS device is connected to the dashcam's Wi-Fi, iOS routes the app's live-streaming traffic over Mobile Data, even though Wi-Fi is the intended communication channel.
The iOS device takes approximately 30 seconds to display the first frame of live footage in the app. Despite being connected to the dashcam's Wi-Fi, the device only settles on the Wi-Fi interface (en0) after multiple attempts, and the live footage appears in the app only after this delay.
So, how can we configure things so that the live footage from the dashcam cameras is displayed within 2 to 3 seconds on the iOS device?
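Independent of the interface-selection problem, part of the startup delay may simply be VLC's default network buffering. A hedged sketch of a lower-latency VLCMedia configuration (the option values are illustrative starting points, not tested defaults for this dashcam):

```swift
import MobileVLCKit
import UIKit

// Sketch: reduce startup latency by shrinking VLC's network buffer and
// forcing RTSP over TCP (avoids UDP negotiation stalls on some networks).
// The numeric values are assumptions to tune, not recommendations.
func makeLowLatencyPlayer(url: URL, drawable: UIView) -> VLCMediaPlayer {
    let media = VLCMedia(url: url)
    media.addOption(":network-caching=300") // ms of buffering; default is much larger
    media.addOption(":rtsp-tcp")            // interleave RTP over the RTSP TCP connection
    media.addOption(":clock-jitter=0")
    media.addOption(":clock-synchro=0")
    let player = VLCMediaPlayer()
    player.media = media
    player.drawable = drawable
    return player
}
```

If the 30-second stall is really the en0 route settling rather than buffering, these options won't remove it, but they should shave off the buffering portion once the route is up.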
ios.txt
We're troubleshooting ScreenCaptureKit (SCK) issues. They occur in a relatively small number of sessions, but the lack of context, and our inability to advise the customer on how to make behavior more predictable and reliable, is problematic.
Generally, there are two distinct issues, which may or may not share the same root cause:
Failure to establish an SCK session. This usually manifests within the app as an SCShareableContent.getWithCompletionHandler call that either never invokes its completion handler or takes a prohibitively long time (we usually give it 3-10 seconds before giving up). In the system log it may look like this:
(log omitted - suspecting it triggers the content filter)
Note the 6-second delay before fetchShareableContentWithOption completes (normally it's a 30-40 ms operation).
Sometimes we'd see the stream established, but some minutes (or even hours) into the recording we'd stop receiving frames.
Both scenarios are more likely to occur when disk space is low, with a reliable repro of issue #2 below 8 GB of free space (in that case, we've seen replayd silently dropping the session without ever notifying the client; improving the API could go a long way there). However, among recent occurrences, while most machines had less than 100 GB available, we've seen it on machines with as much as 500 GB free.
Unfortunately, it's almost never reproducible in a dev environment, so we have to rely on the diagnostics we're able to collect in the field, which have revealed nothing obvious yet.
I'd like to better understand the root cause of both scenarios, and which specific frameworks or conditions can cause these behaviors.
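For the "never invokes the completion handler" case, one mitigation we can sketch (using the async variant of the same fetch) is racing the call against a timeout so the app fails fast instead of hanging. The 5-second budget below is an assumption matching the 3-10 s window mentioned above:

```swift
import ScreenCaptureKit

enum CaptureError: Error { case timedOut }

// Sketch: bound the shareable-content fetch with a timeout so a hung
// fetch fails fast instead of blocking the capture path indefinitely.
func shareableContent(timeout: Duration = .seconds(5)) async throws -> SCShareableContent {
    try await withThrowingTaskGroup(of: SCShareableContent.self) { group in
        group.addTask {
            try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        }
        group.addTask {
            try await Task.sleep(for: timeout)
            throw CaptureError.timedOut
        }
        let result = try await group.next()! // first to finish wins
        group.cancelAll()
        return result
    }
}
```

This doesn't address the root cause (and cancellation may not unblock whatever replayd is stuck on), but it at least gives the app a deterministic failure to report.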
I want to develop an app for real-time streaming of spatial video from one Apple Vision Pro to another Apple Vision Pro for playback there, e.g. as MV-HEVC. Is this possible, and if so, how can it be done?
Hello,
I have implemented Low-Latency Frame Interpolation using the VTFrameProcessor framework, based on the sample code from https://developer.apple.com/kr/videos/play/wwdc2025/300. It is currently working well for both LIVE and VOD streams.
However, I have a few questions regarding the lifecycle management and synchronization of this feature:
1. Common Questions (Applicable to both Frame Interpolation & Super Resolution)
1.1 Dynamic Toggling
Do you recommend enabling/disabling these features dynamically during playback?
Or is it better practice to configure them only during the initial setup/preparation phase?
If dynamic toggling is supported, are there any recommended patterns for managing VTFrameProcessor session lifecycle (e.g., startSession / endSession timing)?
1.2 Synchronization Method
I am currently using CADisplayLink to fetch frames from AVPlayerItemVideoOutput and perform processing.
Is CADisplayLink the recommended approach for real-time frame acquisition with VTFrameProcessor?
If the feature needs to be toggled on/off during active playback, are there any concerns or alternative approaches you would recommend?
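For reference, the CADisplayLink-driven pull loop described in 1.2 can be sketched as below. The `processFrame` hook is a hypothetical stand-in for the VTFrameProcessor work; the AVPlayerItemVideoOutput calls are the standard ones.

```swift
import AVFoundation
import UIKit

// Sketch: each display refresh, ask the video output whether a new pixel
// buffer is ready for the upcoming display time, and hand it to a
// processing hook (e.g. frame interpolation / super resolution).
final class FramePuller {
    private let output: AVPlayerItemVideoOutput
    private var link: CADisplayLink?
    var processFrame: ((CVPixelBuffer, CMTime) -> Void)?

    init(output: AVPlayerItemVideoOutput) { self.output = output }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        self.link = link
    }

    func stop() {
        link?.invalidate()
        link = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        // Map the next vsync's host time into the item's timeline.
        let itemTime = output.itemTime(forHostTime: link.targetTimestamp)
        guard output.hasNewPixelBuffer(forItemTime: itemTime),
              let buffer = output.copyPixelBuffer(forItemTime: itemTime,
                                                  itemTimeForDisplay: nil) else { return }
        processFrame?(buffer, itemTime)
    }
}
```

For dynamic toggling, invalidating the link (stop()) while leaving the output attached is one lifecycle pattern, though whether the VTFrameProcessor session should also be ended at that point is exactly the question posed above.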
1.3 Supported Resolution/Quality Range
What are the minimum and maximum video resolutions supported for each feature?
Are there any aspect ratio restrictions (e.g., does it support 1:1 square videos)?
Is there a recommended resolution range for optimal performance and quality?
2. Frame Interpolation Specific Questions
2.1 LIVE Stream Support
Is Low-Latency Frame Interpolation suitable for LIVE streaming scenarios where latency is critical?
Are there any special considerations for LIVE vs VOD?
3. Super Resolution Specific Questions
3.1 Adaptive Bitrate (ABR) Stream Support
In ABR (HLS/DASH) streams, the video resolution can change dynamically during playback.
Is VTLowLatencySuperResolutionScaler compatible with ABR streams where resolution changes mid-playback?
If resolution changes occur, should I recreate the VTLowLatencySuperResolutionScalerConfiguration and restart the session, or does the API handle this automatically?
3.2 Small/Square Resolution Issue
I observed that 144x144 (1:1 square) videos fail with error:
"VTFrameProcessorErrorDomain Code=-19730: processWithSourceFrame within VCPFrameSuperResolutionProcessor failed"
However, 480x270 (16:9) videos work correctly.
minimumDimensions reports 96x96, but 144x144 still fails. Is there an undocumented restriction on aspect ratio or a practical minimum resolution?
3.3 Scale Factor Selection
supportedScaleFactors returns [2.0, 4.0] for most resolutions.
Is there a recommended scale factor for balancing quality and performance?
Are there scenarios where 4.0x should be avoided?
The documentation on this specific topic seems limited, so I would appreciate any insights or advice.
Thank you.
Hello,
I am reviewing the sample code of FairPlay Streaming SDK 26, and I found a place that I think is a mistake.
The code in question is server-side and appears in both the Swift and the Rust sources.
There is an if statement that compares "ProtocolVersionUsed" (spcData.versionUsed) against the SPCVersion1 constant. However, "ProtocolVersionUsed" and the SPC version are different things, so shouldn't the comparison use a different constant value?
[createContentKeyPayload.swift]
// Fallback to version 1 if content can have encrypted slice headers, which need to be decrypted separately. Slice headers are not encrypted when using CBCS.
if serverCtx.spcContainer.spcData.versionUsed == base_constants.SPCVersion.v1.rawValue &&
[createContentKeyPayload.rs]
// Fallback to version 1 if content can have encrypted slice headers, which need to be decrypted separately. Slice headers are not encrypted when using CBCS.
if (serverCtx.spcContainer.spcData.versionUsed == SPCVersion::v1 as u32) &&
Thank you.
Hi,
Has anyone been able to protect the audio part of FairPlay-protected content from being captured as part of a screen recording on Safari/iOS (PWA and/or online web app)?
We have tried many things but could not prevent the audio from being recorded.
The same app and content on Safari/Mac does not allow the audio to be recorded. Any tips?
Hi everyone,
Our team is encountering a reproducible crash when using VTLowLatencyFrameInterpolation on iOS 26.3 while processing a live LL-HLS input stream.
🤖 Environment
Device: iPhone 16
OS: iOS 26.3
Xcode: Xcode 26.3
Framework: VideoToolbox
💥 Crash Details
The application crashes with the following fatal error:
Fatal error: Swift/ContiguousArrayBuffer.swift:184: Array index out of range
The stack trace highlights the following:
VTLowLatencyFrameInterpolationImplementation processWithParameters:frameOutputHandler:
Called from VTFrameProcessor.process(parameters:)
Here is the simplified implementation block where the crash occurs. (Note: PrismSampleBuffer and PrismLLFIError are our internal custom wrapper types).
// Create `VTFrameProcessorFrame` for the source (previous) frame.
let sourcePTS = sourceSampleBuffer.presentationTimeStamp
var sourceFrame: VTFrameProcessorFrame?
if let pixelBuffer = sourceSampleBuffer.imageBuffer {
sourceFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: sourcePTS)
}
// Validate the source VTFrameProcessorFrame.
guard let sourceFrame else { throw PrismLLFIError.missingImageBuffer }
// Create `VTFrameProcessorFrame` for the next frame.
let nextPTS = nextSampleBuffer.presentationTimeStamp
var nextFrame: VTFrameProcessorFrame?
if let pixelBuffer = nextSampleBuffer.imageBuffer {
nextFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: nextPTS)
}
// Validate the next VTFrameProcessorFrame.
guard let nextFrame else { throw PrismLLFIError.missingImageBuffer }
// Calculate interpolation intervals and allocate destination frame buffers.
let intervals = interpolationIntervals()
let destinationFrames = try framesBetween(firstPTS: sourcePTS, lastPTS: nextPTS, interpolationIntervals: intervals)
let interpolationPhase: [Float] = intervals.map { Float($0) }
// Create VTLowLatencyFrameInterpolationParameters.
// This sets up the configuration required for temporal frame interpolation between the previous and current source frames.
guard let parameters = VTLowLatencyFrameInterpolationParameters(
sourceFrame: nextFrame,
previousFrame: sourceFrame,
interpolationPhase: interpolationPhase,
destinationFrames: destinationFrames
) else {
throw PrismLLFIError.failedToCreateParameters
}
try await send(sourceSampleBuffer)
// Process the frames.
// Using progressive callback here to get the next processed frame as soon as it's ready,
// preventing the system from waiting for the entire batch to finish.
for try await readOnlyFrame in self.frameProcessor.process(parameters: parameters) {
// Create an interpolated sample buffer based on the output frame.
let newSampleBuffer: PrismSampleBuffer = try readOnlyFrame.frame.withUnsafeBuffer { pixelBuffer in
try PrismLowLatencyFrameInterpolation.createSampleBuffer(from: pixelBuffer, readOnlyFrame.timeStamp)
}
// Pass the newly generated frame to the output stream.
try await send(newSampleBuffer)
}
🙋 Questions
Are there any known limitations or bugs regarding VTLowLatencyFrameInterpolation when handling live 60fps streams?
Are there any undocumented constraints we should be aware of regarding source/previous frame timing, pixel buffer attributes, or how destinationFrames and interpolationPhase arrays must be allocated?
Is a "warm-up" sequence recommended after startSession() before making the first process(parameters:) call?
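While waiting for an authoritative answer, a defensive precondition before the process(parameters:) call might help narrow things down, since an "Array index out of range" inside the framework is consistent with mismatched interpolationPhase / destinationFrames counts. This is a hedged sketch based on the code above, not a documented requirement:

```swift
import VideoToolbox

enum LLFIValidationError: Error { case empty, countMismatch, phaseOutOfRange }

// Sketch: validate the interpolation inputs before handing them to
// VTLowLatencyFrameInterpolation. The (0, 1) phase range is an assumption
// (a phase is a fraction of the way between the two source frames).
func validate(interpolationPhase: [Float],
              destinationFrames: [VTFrameProcessorFrame]) throws {
    guard !interpolationPhase.isEmpty else { throw LLFIValidationError.empty }
    guard interpolationPhase.count == destinationFrames.count else {
        throw LLFIValidationError.countMismatch
    }
    guard interpolationPhase.allSatisfy({ $0 > 0 && $0 < 1 }) else {
        throw LLFIValidationError.phaseOutOfRange
    }
}
```

With live LL-HLS, discontinuities can also produce non-monotonic or duplicate PTS values between sourceSampleBuffer and nextSampleBuffer, so logging firstPTS/lastPTS alongside the interval count at the crash site may reveal a degenerate input (e.g. an empty intervals array) that the framework does not guard against.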