Hi everyone! I’ve been working with AVFoundation and trying to use the AVMetricEventStreamPublisher to discover media performance metrics, as described in the Apple documentation.
https://developer.apple.com/cn/videos/play/wwdc2024/10113/?time=508
However, when following the example code, I’m not getting the expected results. The performance metrics for both audio and video don’t seem to be captured properly.
Has anyone successfully used this example code? If so, could you share your experience or any solutions you’ve found? Any tips or insights would be greatly appreciated. Thanks in advance!
P.S. The example code:
AVPlayerItem *item = ...
AVMetricEventStream *eventStream = [AVMetricEventStream eventStream];
id<AVMetricEventStreamSubscriber> subscriber = [[MyMetricSubscriber alloc] init];
[eventStream setSubscriber:subscriber queue:mySerialQueue];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemLikelyToKeepUpEvent class]];
[eventStream subscribeToMetricEvent:[AVMetricPlayerItemPlaybackSummaryEvent class]];
[eventStream addPublisher:item];
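For reference, here is a minimal subscriber sketch in Swift, based on the WWDC session; the class name and the handling inside the method are illustrative:
import AVFoundation

final class MyMetricSubscriber: AVMetricEventStreamSubscriber {
    // Events arrive on the queue passed to setSubscriber(_:queue:).
    func publisher(_ publisher: AVMetricEventStreamPublisher,
                   didReceive event: AVMetricEvent) {
        print("metric event: \(event)")
    }
}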
Streaming
Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.
The problem: when playing an HLS live stream with AVPlayer on iOS/tvOS, the player first picks the highest bandwidth, then steps down to the lowest within 1-3 minutes, eventually steps back up, and then repeats the step-down.
The AVPlayer error log reports events like:
errorStatusCode: -12888, errorDomain: Optional("CoreMediaErrorDomain"), errorComment: Optional("The operation couldn’t be completed. (CoreMediaErrorDomain error -12888 - Playlist File unchanged for longer than 1.5 * target duration)")
We use standard segments in CMAF format with a 2-second duration:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:147065903
#EXT-X-MAP:URI="video_1_4660000_init.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2"
#EXT-X-PROGRAM-DATE-TIME:2025-04-30T12:51:07
#EXTINF:2.000,
video_1_4660000_t17460174670001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
#EXTINF:2.000,
video_1_4660000_t17460174690001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
#EXTINF:2.000,
video_1_4660000_t17460174710001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
When using 6-second segments, the player stays stable at the highest bandwidth.
Is there a way to avoid this error, either in AVPlayer or in the HLS configuration?
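For anyone reproducing this, a minimal sketch for capturing these error-log events, assuming an existing playerItem (the notification and log accessors are standard AVFoundation; the printout is illustrative):
import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVPlayerItem.newErrorLogEntryNotification,
    object: playerItem,
    queue: .main
) { note in
    guard let item = note.object as? AVPlayerItem,
          let event = item.errorLog()?.events.last else { return }
    // -12888 surfaces here as errorStatusCode in CoreMediaErrorDomain.
    print("\(event.errorStatusCode) \(event.errorDomain): \(event.errorComment ?? "")")
}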
TL;DR: How do I solve a possible race between the EXT-X-SESSION-KEY request and the encrypted media segment requests?
I'm having trouble using a custom AVAssetResourceLoaderDelegate with a video manifest containing a VideoProtectionKey (VPK). My master manifest contains the rendition manifest URL and the VPK URL. When not using a custom resource loader delegate, everything works fine.
My custom resource loader delegate first appends a prefix to the scheme of the master manifest URL before creating the asset. While handling the master manifest, it restores the original scheme, makes the request, and then modifies the scheme of the rendition manifest URL in the response content by appending the same prefix, so that the rendition manifest request also goes through the custom resource loader delegate. The same goes for the VPK request. The AES-128 key is stored in memory within the custom resource loader delegate object. So far so good; a rough sketch of this scheme-swapping approach follows.
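A hedged sketch of that flow, assuming a "custom-" scheme prefix and ignoring content-information and error-handling details; the class name and rewriting logic are illustrative:
import AVFoundation

final class InterceptingLoader: NSObject, AVAssetResourceLoaderDelegate {
    private var cachedKey: Data? // AES-128 key kept in memory, as described

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource request: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = request.request.url,
              let scheme = url.scheme, scheme.hasPrefix("custom-"),
              var comps = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
            return false
        }
        // Restore the original scheme before going to the network.
        comps.scheme = String(scheme.dropFirst("custom-".count))
        guard let realURL = comps.url else { return false }

        URLSession.shared.dataTask(with: realURL) { data, _, error in
            guard var data = data, error == nil else {
                request.finishLoading(with: error)
                return
            }
            if realURL.path.hasSuffix(".m3u8"),
               var text = String(data: data, encoding: .utf8) {
                // Re-prefix nested URIs so rendition/VPK requests also route here.
                text = text.replacingOccurrences(of: "https://", with: "custom-https://")
                data = Data(text.utf8)
            }
            request.dataRequest?.respond(with: data)
            request.finishLoading()
        }.resume()
        return true
    }
}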
The VPK is requested before the segment requests. The problem comes when the media segment requests happen: the media segment URLs from the rendition manifest also go through the custom resource loader, and those segments are encrypted. I can see a segment request finish first, and only a few seconds later do the related VPK requests kick in. The previous VPK value is cached in memory, so it is not the network causing the delay but some mechanism I'm not aware of.
Could anyone tell me the proper way of handling this situation? The native pipeline handles it well, so I just want to know how. Thanks in advance!
Hi everyone,
We’re currently developing a music-based app using MusicKit, and we recently noticed that iOS 26 beta introduces a new “Automix” feature in the Apple Music app. This enables seamless DJ-style transitions between songs—beyond the standard crossfade functionality.
We’re trying to understand:
Will this Automix feature be accessible to third-party apps that use MusicKit?
If not available in the initial iOS 26 release, is there a plan to expose it through public APIs in a future update?
Is there any technical documentation, WWDC session, or roadmap info regarding Automix support via MusicKit?
This functionality would be a significant enhancement for our app, especially for intelligent audio transitions and curated playlists.
Thanks.
The documentation for the Apple Music API indicates that the genreNames field for a given artist (see https://developer.apple.com/documentation/applemusicapi/artists/attributes-data.dictionary) is an array of strings. However, it appears that only ONE genre is ever returned per artist, regardless of how many genres are attached to that artist's albums.
Am I missing something? Is there an artist where multiple genres may be returned, or is this a bug in the documentation?
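For anyone who wants to reproduce the check, a minimal MusicKit sketch run from an async throwing context (the artist ID is illustrative):
import MusicKit

let request = MusicCatalogResourceRequest<Artist>(
    matching: \.id, equalTo: MusicItemID("462006"))
let response = try await request.response()
if let artist = response.items.first {
    // Reportedly prints a single genre despite the documented array type.
    print(String(describing: artist.genreNames))
}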
Hi everyone,
I noticed that Apple recently added a few new beta sample codes related to video encoding:
Encoding video for low-latency conferencing
Encoding video for live streaming
While experimenting with H.264 encoding, I came across some questions regarding certain configurations:
When I enable kVTVideoEncoderSpecification_EnableLowLatencyRateControl, is it still possible to use kVTCompressionPropertyKey_VariableBitRate? In my tests, I get an error.
It also seems that kVTVideoEncoderSpecification_EnableLowLatencyRateControl cannot be used together with kVTCompressionPropertyKey_ConstantBitRate when encoding H.264. Is that expected?
When using kVTCompressionPropertyKey_ConstantBitRate with kVTCompressionPropertyKey_MaxKeyFrameInterval set to 2, the encoder outputs only keyframes, and the frame size keeps increasing, which doesn’t seem like the intended behavior.
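For reference, a minimal sketch of the kind of session setup used in these tests (dimensions and bitrate are illustrative); the conflicts described above surface as a non-zero OSStatus from VTSessionSetProperty:
import VideoToolbox

var session: VTCompressionSession?
let spec = [kVTVideoEncoderSpecification_EnableLowLatencyRateControl: kCFBooleanTrue] as CFDictionary
VTCompressionSessionCreate(
    allocator: nil, width: 1920, height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: spec,
    imageBufferAttributes: nil, compressedDataAllocator: nil,
    outputCallback: nil, refcon: nil,
    compressionSessionOut: &session)

if let session {
    // With low-latency rate control active, this call is where the
    // rate-control conflicts show up as an error status.
    let status = VTSessionSetProperty(session,
        key: kVTCompressionPropertyKey_ConstantBitRate,
        value: 2_000_000 as CFNumber)
    print("ConstantBitRate status: \(status)") // 0 means accepted
}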
Regarding the following code from the sample:
let byteLimit = (Double(bitrate) / 8) * 1.5 as CFNumber // bytes allowed per window
let secLimit = Double(1.0) as CFNumber // window length in seconds
let limitsArray = [ byteLimit, secLimit ] as CFArray // alternating (bytes, seconds) pairs
err = VTSessionSetProperty(session, key: kVTCompressionPropertyKey_DataRateLimits, value: limitsArray)
This DataRateLimits setting doesn't seem to have any effect in my tests; whether or not I set it, the observed values remain unchanged.
Since the documentation on developer.apple.com doesn't clearly explain these cases, I'd like to ask if anyone has insights or recommendations about the proper usage of these settings.
Thanks in advance!
Hi,
After updating to iOS 26, our app is facing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier.
Error - Domain[CoreMediaErrorDomain]:Code[-15628]:Desc[The operation couldn’t be completed.]:Underlying Error Domain[(null)]:Code[0]:Desc[(null)]
Environment:
iOS version: iOS 26
React Native: 0.69
Video library: react-native-video (AVPlayer under the hood)
Stream type: HLS (m3u8) with segment (.ts) files
Observed behaviour:
Playback works initially on iOS 26.
On iOS 26, the stream fails at runtime after a few seconds/minutes (not on first load).
Network logs show 307 redirects on some segment requests. After this, AVPlayer throws the above error.
Playback fails intermittently on slow/unstable networks.
The operation couldn’t be completed. (CoreMediaErrorDomain error -19156.)
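For diagnosis, a small sketch that dumps the item's accumulated error log after a failure (standard AVFoundation accessors; the printing is illustrative):
import AVFoundation

func dumpErrorLog(for item: AVPlayerItem) {
    for event in item.errorLog()?.events ?? [] {
        // The 307-redirected segment URIs and -15628 entries may appear here.
        print("\(event.errorDomain) \(event.errorStatusCode) uri: \(event.uri ?? "-") comment: \(event.errorComment ?? "-")")
    }
}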
Hello, we have a video streaming app with HLS VOD content, supplying 1080p and 4K streams to users. Users could watch 1080p content before tvOS 26, but after updating to tvOS 26 they no longer can. We have not changed anything on the HLS playlist side or in the application version. The problem occurs only on the Apple TV 4th Gen (A1625) running tvOS 26; there is no problem with newer Apple TV devices. Would you help resolve this problem? Thanks in advance.
Hello Apple team and developer community,
I am preparing a visionOS app for a fair environment, where we want to automatically stream the current experience to a nearby monitor via AirPlay, without requiring guests or staff to manually interact with the Control Center or AirPlay pickers all the time.
The goal is to provide a smooth, frictionless setup so attendees can focus on the demo, not the configuration.
Feature Request:
A supported API or method to programmatically start/stop AirPlay video streaming (mirroring or external playback) from within a visionOS app, allowing the current experience to be instantly displayed on an external monitor or Apple TV for the audience.
Context & Rationale:
In a trade fair or exhibition setting, rapid guest turnaround and minimal staff intervention are crucial. Having to manually guide each visitor through AirPlay setup is impractical.
As I understand it, AVRoutePickerView can be used for this on iOS/macOS, but it is not available on visionOS. Enabling similar automated streaming on visionOS would make the device far more suitable for live demos and public showcases. A small reference sketch of the iOS picker follows.
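For reference, a minimal sketch of that iOS picker; it only presents the system AirPlay UI, it cannot start a route programmatically, and it has no visionOS counterpart (containerView is an assumed existing UIView):
import AVKit
import UIKit

let picker = AVRoutePickerView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
containerView.addSubview(picker)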
Questions:
Are there any supported workarounds or best practices for enabling automated screen streaming or AirPlay initiation on visionOS in public demo environments that I've missed?
Is Apple considering adding programmatic AirPlay control or accessibility features to support such use cases in future visionOS releases?
Thank you for considering this request! If there are recommended patterns, entitlements, or accessibility solutions we could explore for trade fair scenarios, your guidance would be greatly appreciated.
Best regards,
Julian Zürn - IPI, HS Kempten
Hi,
I have an iOS app where we use FairPlay DRM to play videos. The iOS app allows offline download of the videos, so we obtain a persistent FairPlay license. In the iOS app everything works fine.
Now we have taken the same app and built it for macOS with Catalyst. In the Mac Catalyst app we are not able to play the video and get error code -42650.
We are able to get the persistent license from the server, but when we play the video with that license we get the error. Below are the logs:
2024-12-06 22:05:48.911266+0530 0x4dffe2 Default 0x0 85505 0 teachonline: (MediaToolbox) [com.apple.coremedia:] <<<< FigPKDKeyManager >>>> keyManager_processOfflineKeyInternal: 0x600000322000 160D4519-C60B-4FD0-B69A-20B2A4597017 created decrypt context:0x0 with offline key; updated offline key:0x0 err:-42650
2024-12-06 22:05:48.911369+0530 0x4dffe2 Default 0x0 85505 0 teachonline: (MediaToolbox) [com.apple.coremedia:player] <<<< FigStreamPlayer >>>> fpfs_ensureDecryptorHasStarted: [0x7fc44e4dc520|P/NW] <0x7fc44fa44000|I/SRA.01>: track 1 latching decryptorFailure -42650
85505 0 teachonline: (MediaToolbox) [com.apple.coremedia:player] <<<< FigStreamPlayer >>>> fpfs_StopPlayingItem: [0x7fc44e4dc520|P/NW] <0x7fc44fa44000|I/SRA.01>: Pausing, err=Error Domain=CoreMediaErrorDomain Code=-42650 "(null)"
I have copied only the lines which has errors. You can download the full logs from https://drive.google.com/file/d/1feb9pKZERUr--PMt6m-6IrO_mDvoFbjO/view?usp=sharing
Can you please help me fix this issue?
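For context, here is roughly the persistable-key response path being described, as a hedged sketch: the delegate method and AVContentKeyResponse(fairPlayStreamingKeyResponseData:) are standard AVContentKeySession API, while the key lookup is illustrative.
import AVFoundation

final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        guard let identifier = keyRequest.identifier as? String,
              let persistentKey = loadPersistentKey(for: identifier) else {
            return // would otherwise request a fresh key from the server
        }
        // Respond with the previously persisted key; this is the step that
        // appears to fail with -42650 under Catalyst.
        let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: persistentKey)
        keyRequest.processContentKeyResponse(response)
    }

    private func loadPersistentKey(for identifier: String) -> Data? {
        nil // illustrative: read the saved persistable key data from disk
    }
}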
Hi, I'm trying to decode an HLS livestream with VideoToolbox. The CMSampleBuffer is successfully created (OSStatus == noErr). When I enqueue the CMSampleBuffer to an AVSampleBufferDisplayLayer, the view doesn't display anything, and the status of the AVSampleBufferDisplayLayer is 1 (rendering).
When I use a VTDecompressionSession to convert the CMSampleBuffer to a CVPixelBuffer, the VTDecompressionOutputCallback returns -8969 (bad-data error).
What do I need to fix in my code? Do I incorrectly parse the data from the segment for the CMSampleBuffer?
let segmentData = try await downloadSegment(from: segment.url)
let (sps, pps, idr) = try parseH264FromTSSegment(tsData: segmentData)
if self.formatDescription == nil {
self.formatDescription = try CMFormatDescription(h264ParameterSets: [sps, pps])
}
if let sampleBuffer = try createSampleBuffer(from: idr, segment: segment) {
try self.decodeSampleBuffer(sampleBuffer)
}
func parseH264FromTSSegment(tsData: Data) throws -> (sps: Data, pps: Data, idr: Data) {
let tsSize = 188
var pesData = Data()
for i in stride(from: 0, to: tsData.count, by: tsSize) {
let tsPacket = tsData.subdata(in: i..<min(i + tsSize, tsData.count))
guard let payload = extractPayloadFromTSPacket(tsPacket) else { continue }
pesData.append(payload)
}
let nalUnits = parseNalUnits(from: pesData)
var sps: Data?
var pps: Data?
var idr: Data?
for nalUnit in nalUnits {
guard let firstByte = nalUnit.first else { continue }
let nalType = firstByte & 0x1F
switch nalType {
case 7: // SPS
sps = nalUnit
case 8: // PPS
pps = nalUnit
case 5: // IDR
idr = nalUnit
default:
break
}
if sps != nil, pps != nil, idr != nil {
break
}
}
guard let validSPS = sps, let validPPS = pps, let validIDR = idr else {
throw NSError(domain: "HLSDecoder", code: -1) // missing SPS/PPS/IDR
}
return (validSPS, validPPS, validIDR)
}
func extractPayloadFromTSPacket(_ tsPacket: Data) -> Data? {
let syncByte: UInt8 = 0x47
guard tsPacket.count == 188, tsPacket[0] == syncByte else {
return nil
}
let adaptationFieldControl = (tsPacket[3] & 0x30) >> 4
var payloadOffset = 4
if adaptationFieldControl == 2 || adaptationFieldControl == 3 {
let adaptationFieldLength = Int(tsPacket[4])
payloadOffset += 1 + adaptationFieldLength
}
// Only adaptation_field_control values 1 and 3 carry a payload.
guard adaptationFieldControl == 1 || adaptationFieldControl == 3,
payloadOffset < tsPacket.count else {
return nil
}
// Keep continuation packets too: dropping packets whose
// payload_unit_start_indicator is clear truncates NAL units, which is a
// likely source of the -8969 bad-data error.
return tsPacket.subdata(in: payloadOffset..<tsPacket.count)
}
func parseNalUnits(from h264Data: Data) -> [Data] {
// Note: this only matches 4-byte start codes; H.264 streams may also use
// 3-byte (0x00 0x00 0x01) start codes, which this parser misses.
let startCode = Data([0x00, 0x00, 0x00, 0x01])
var nalUnits: [Data] = []
var searchRange = h264Data.startIndex..<h264Data.endIndex
while let range = h264Data.range(of: startCode, options: [], in: searchRange) {
let nextStart = h264Data.range(of: startCode, options: [], in: range.upperBound..<h264Data.endIndex)?.lowerBound ?? h264Data.endIndex
let nalUnit = h264Data.subdata(in: range.upperBound..<nextStart)
nalUnits.append(nalUnit)
searchRange = nextStart..<h264Data.endIndex
}
return nalUnits
}
private func createSampleBuffer(from data: Data, segment: HLSSegment) throws -> CMSampleBuffer? {
// Caution: a format description created from parameter sets implies AVCC
// framing, so `data` must be length-prefixed NAL units, not raw Annex-B
// bytes (see the conversion sketch after this code).
var blockBuffer: CMBlockBuffer?
let alignedData = UnsafeMutableRawPointer.allocate(byteCount: data.count, alignment: MemoryLayout<UInt8>.alignment)
data.copyBytes(to: alignedData.assumingMemoryBound(to: UInt8.self), count: data.count)
let blockStatus = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: alignedData,
blockLength: data.count,
blockAllocator: nil,
customBlockSource: nil,
offsetToData: 0,
dataLength: data.count,
flags: 0,
blockBufferOut: &blockBuffer
)
guard blockStatus == kCMBlockBufferNoErr, let validBlockBuffer = blockBuffer else {
alignedData.deallocate()
throw NSError(domain: "HLSDecoder", code: -2) // block buffer creation failed
}
var sampleBuffer: CMSampleBuffer?
var timing = [calculateTiming(for: segment)]
var sampleSizes = [data.count]
let sampleStatus = CMSampleBufferCreate(
allocator: kCFAllocatorDefault,
dataBuffer: validBlockBuffer,
dataReady: true,
makeDataReadyCallback: nil,
refcon: nil,
formatDescription: formatDescription,
sampleCount: 1,
sampleTimingEntryCount: 1,
sampleTimingArray: &timing,
sampleSizeEntryCount: sampleSizes.count,
sampleSizeArray: &sampleSizes,
sampleBufferOut: &sampleBuffer
)
guard sampleStatus == noErr else {
alignedData.deallocate()
throw NSError(domain: "HLSDecoder", code: -3) // sample buffer creation failed
}
return sampleBuffer
}
private func decodeSampleBuffer(_ sampleBuffer: CMSampleBuffer) throws {
guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
throw NSError(domain: "HLSDecoder", code: -4) // missing format description
}
if decompressionSession == nil {
try setupDecompressionSession(formatDescription: formatDescription)
}
guard let session = decompressionSession else {
throw NSError(domain: "HLSDecoder", code: -5) // no decompression session
}
let flags: VTDecodeFrameFlags = [._EnableAsynchronousDecompression, ._EnableTemporalProcessing]
var flagOut = VTDecodeInfoFlags()
let status = VTDecompressionSessionDecodeFrame(
session,
sampleBuffer: sampleBuffer,
flags: flags,
frameRefcon: nil,
infoFlagsOut: &flagOut)
if status != noErr {
throw NSError(domain: "HLSDecoder", code: -6) // decode failed
}
}
private func setupDecompressionSession(formatDescription: CMFormatDescription) throws {
self.formatDescription = formatDescription
if let session = decompressionSession {
VTDecompressionSessionInvalidate(session)
self.decompressionSession = nil
}
var decompressionSession: VTDecompressionSession?
var callback = VTDecompressionOutputCallbackRecord(
decompressionOutputCallback: decompressionOutputCallback,
decompressionOutputRefCon: Unmanaged.passUnretained(self).toOpaque())
let status = VTDecompressionSessionCreate(
allocator: kCFAllocatorDefault,
formatDescription: formatDescription,
decoderSpecification: nil,
imageBufferAttributes: nil,
outputCallback: &callback,
decompressionSessionOut: &decompressionSession
)
if status != noErr {
throw NSError(domain: "HLSDecoder", code: -7) // session creation failed
}
self.decompressionSession = decompressionSession
}
let decompressionOutputCallback: VTDecompressionOutputCallback = { (
decompressionOutputRefCon,
sourceFrameRefCon,
status,
infoFlags,
imageBuffer,
presentationTimeStamp,
presentationDuration
) in
guard status == noErr else {
print("Callback: \(status)")
return
}
if let imageBuffer = imageBuffer {
// Hand the decoded CVPixelBuffer off for display here.
}
}
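One thing worth checking: a format description built from H.264 parameter sets implies AVCC framing, so VideoToolbox expects each NAL unit to be prefixed with its 4-byte big-endian length rather than Annex-B start codes; feeding raw Annex-B payloads is a common cause of -8969. A minimal conversion sketch for the NAL units parsed above:
func avccData(forNalUnit nalUnit: Data) -> Data {
    // Prefix the NAL unit with its length as a 4-byte big-endian integer.
    var length = UInt32(nalUnit.count).bigEndian
    var avcc = Data(bytes: &length, count: 4)
    avcc.append(nalUnit)
    return avcc
}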
I'm trying to validate a low-latency HLS fragmented-MP4 setup with mediastreamvalidator. I get the following error:
Error: Invalid URL
Detail: '(null)' is not a valid URL
Source: mediaplaylistURL.m3u8 - segmentURL.mp4
The mediastreamvalidator version is 1.23.14.
What does that error mean?
I'm developing a tutorial style tvOS app with multiple videos. The examples I've seen so far deal with only one video.
Defining the player and source (URL) before the body view:
let avPlayer = AVPlayer(url: URL(string: "https://domain.com/.../.../video.mp4")!)
and then in the body view the video is displayed
VideoPlayer(player: avPlayer)
This allows options such as stop/start etc.
When I try something similar with a video title passed into this view I can't define the player with this title variable.
var vTitle: String
var avPlayer = AVPlayer(url: URL(string: "https://domain.com/.../.../" + vTitle + ".mp4")!)
var body: some View {
I get an error that vTitle can't be used in the URL above the body view.
Any thoughts or suggestions? Thanks
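One workaround sketch: a stored property default can't reference another instance property, but an initializer can. The view name is illustrative, and the elided path is kept from the post:
import SwiftUI
import AVKit

struct TutorialVideoView: View {
    let avPlayer: AVPlayer

    init(vTitle: String) {
        // Building the player here avoids the restriction on property defaults.
        self.avPlayer = AVPlayer(url: URL(string: "https://domain.com/.../.../" + vTitle + ".mp4")!)
    }

    var body: some View {
        VideoPlayer(player: avPlayer)
    }
}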
I'm developing an iOS radio app that plays various HLS streams. The challenge is that some stations broadcast HLS streams containing both audio and video (example: https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8), but I want to:
Extract and play only the audio track
Support AirPlay for audio-only streaming
Minimize data usage by not downloading video content
Technical Details:
iOS 17+
Swift 5.9
Using AVFoundation for playback
Current implementation uses AVPlayer with AVPlayerItem
Current Code Structure:
class StreamPlayer: ObservableObject {
@Published var isPlaying = false
private var player: AVPlayer?
private var playerItem: AVPlayerItem?
func playStream(url: URL) {
let asset = AVURLAsset(url: url)
playerItem = AVPlayerItem(asset: asset)
player = AVPlayer(playerItem: playerItem)
player?.play()
}
}
Stream Analysis:
When analyzing the video stream using FFmpeg:
Input #0, hls, from 'https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8':
Stream #0:0: Video: h264, yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps
Stream #0:1: Audio: aac, 44100 Hz, stereo, fltp
Attempted Solutions:
Using MobileFFmpeg:
let command = [
"-i", streamUrl,
"-vn",
"-acodec", "aac",
"-ac", "2",
"-ar", "44100",
"-b:a", "128k",
"-f", "mpegts",
"udp://127.0.0.1:12345"
].joined(separator: " ")
ffmpegProcess = MobileFFmpeg.execute(command)
Issue: While FFmpeg successfully extracts audio, playback through AVPlayer doesn't work reliably.
Tried using HLS output:
let command = [
"-i", streamUrl,
"-vn",
"-acodec", "aac",
"-ac", "2",
"-ar", "44100",
"-b:a", "128k",
"-f", "hls",
"-hls_time", "2",
"-hls_list_size", "3",
outputUrl.path
]
Issue: Creates temporary files but faces synchronization issues with live streams.
Requirements:
Real-time audio extraction from HLS stream
Maintain live streaming capabilities
Full AirPlay support
Minimal data usage (avoid downloading video content)
Handle network interruptions gracefully
Questions:
What's the most efficient way to extract only audio from an HLS stream in real-time?
Is there a way to tell AVPlayer to ignore video tracks completely?
Are there better alternatives to FFmpeg for this specific use case?
What's the recommended approach for handling AirPlay with modified streams?
Any guidance or alternative approaches would be greatly appreciated. Thank you!
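On question 2: as far as I know, AVPlayer offers no supported switch to drop the video track of an HLS stream; capping the peak bit rate only steers it toward the smallest variant, which reduces data without removing video. A minimal sketch (the cap value and streamURL are illustrative):
import AVFoundation

let item = AVPlayerItem(url: streamURL)
item.preferredPeakBitRate = 200_000 // bits per second; steers to the lowest variant
let player = AVPlayer(playerItem: item)
player.allowsExternalPlayback = true // keeps AirPlay available
player.play()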
Can someone please tell me how I can update the cookies of a previously set m3u8 video in AVPlayer, without creating a new AVURLAsset and replacing the AVPlayer's current item with it?
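For context, cookies are normally attached when the AVURLAsset is created, which is exactly the step being avoided here; I am not aware of a documented way to swap them on a live item. A minimal sketch of the standard creation-time mechanism (m3u8URL is illustrative):
import AVFoundation

let cookies = HTTPCookieStorage.shared.cookies ?? []
let asset = AVURLAsset(url: m3u8URL, options: [AVURLAssetHTTPCookiesKey: cookies])
let item = AVPlayerItem(asset: asset)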
ApplicationMusicPlayer with a queue created from a playlist crashes at random shortly after skipping back or forth using the controls embedded in the notification, with this error in the console log: applicationController: xpc service connection interrupted.
I've noticed that the issue occurs more frequently the shorter the time between skipping entries. Since ApplicationMusicPlayer runs in a remote process, the main app does not crash, but the music stops playing without any exception, and the playback controls revert to an uninitialized state.
Here is how I'm initializing the queue:
let entries = try await playlist
.with(.entries).entries!
.map { ApplicationMusicPlayer.Queue.Entry($0) }
ApplicationMusicPlayer.shared.queue = .init(
entries, startingAt: entries.last
)
Please give me some tips on how to solve this.
EDIT:
The issue does not occur when navigating quickly through the station.
In our Apple TV application, we use the native AVPlayer for live playback. During live-restart playback, we intermittently encounter an error when the playback timeline approaches the actual live event end time.
Error:
The operation couldn’t be completed. (CoreMediaErrorDomain error -16839 - Unable to get playlist before long download timer) / Failure reason:
Scenario:
The live event is scheduled from 7:00 AM to 8:00 AM.
Restart playback begins at 7:20 AM, allowing the user to watch the event from the start while the live stream continues in real-time.
As the restart playback timeline approaches the actual event end time (8:00 AM), AVPlayer displays an error, and playback continues in the background.
Hi folks,
When doing HLS v6 live streaming with fMP4 chunks, we noticed that when the encoder timestamps drift slightly and an #EXT-X-DISCONTINUITY tag is created in either the audio or the video playlist (in an ABR setup), the tag is not handled correctly by the player, leading to broken playback with a black screen or no audio (depending on which playlist the tag appears in).
We noticed this is often the case when the number of tags differs between the playlists (e.g., the audio playlist containing 1 tag while the video playlist contains 2 results in a black screen with audio).
Using the same "broken" source with the Shaka player instead doesn't break playback at all.
Is there any possible fix (existing or upcoming) for AVPlayer?
I can't play video content with HEVC and DRM. Tested HEVC only: OK. Tested DRM+AVC: OK.
Tested with 2 players (Clappr/Stevie and BitMovin).
The master, variants, and EXT-X-MAPs are downloaded OK and the DRM keys are delivered OK; then, for instance with the BitMovin player:
[BMP] [Player] [Error] Event: SourceError, Data: {"code":2001,"data":{"message":"The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","code":-12927},"message":"Source Error. The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","timestamp":1740320663.4505711,"type":"onSourceError"} code: 2001 [Data code: -12927, message: The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.), underlying error: Error Domain=CoreMediaErrorDomain Code=-12927 "(null)"]
4k-master.m3u8.txt
4k.m3u8.txt
4k-audio.m3u8.txt