I have an application that downloads content using AVAssetDownloadTask. In the iOS Settings app, these downloads are listed in the Storage section as a collection of downloaded movies, displaying the asset image, whether it's already watched, the file size, and an option to delete it.
Curious about how other apps handle this display, I noticed that Apple Music shows every downloaded artist, album, and song individually. This feature made me wonder: can I achieve something similar in my application? On the other hand, apps like Spotify and Amazon Music don’t show any downloaded files in the Settings app. Is it possible to implement that approach as well?
Here is a screenshot of the Apple Music Storage section in the Settings app:
I tried moving the download directory into a subfolder using FileManager, but every variant I tried made the downloads stop showing up in the Settings app.
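For reference, the one public hook I'm aware of for influencing how the system manages these downloads is the storage management policy; the per-item grouping Apple Music shows in Settings appears to rely on private system integration. A minimal sketch, assuming `assetLocation` is the URL delivered to `urlSession(_:assetDownloadTask:didFinishDownloadingTo:)`:

```swift
import AVFoundation

// Sketch: apply a storage management policy to a finished download.
// The asset must remain where AVAssetDownloadTask placed it; relocating
// it with FileManager is what breaks the Settings listing.
func applyEvictionPolicy(to assetLocation: URL) {
    let policy = AVMutableAssetDownloadStorageManagementPolicy()
    policy.priority = .important          // keep longer under storage pressure
    policy.expirationDate = .distantFuture
    AVAssetDownloadStorageManager.shared()
        .setStorageManagementPolicy(policy, for: assetLocation)
}
```

This controls eviction behavior, not how the item is displayed, but it is the only documented knob in this area that I know of.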
Hello,
Is there an example from Apple on how to extract the data to create an I-frame playlist using AVAssetSegmentTrackReport?
I'm following the example of HLS authoring from WWDC 2020 - Author fragmented MPEG-4 content with AVAssetWriter
It states:
"You can create the playlist and the I-frame playlist based on the information AVAssetSegmentReport provides."
I've examined AVAssetSegmentTrackReport, and it only appears to provide firstVideoSampleInformation, which is good for the first frame, but the content I'm creating contains an I-frame every second within 6-second segments.
I've tried parsing the data object from the asset writer delegate's didOutputSegmentData parameter, but I've only gotten so far parsing the NALUs - the length prefixes seem to go wrong when I hit the first NALU of type 8 (PPS) in the first segment.
Alternatively, I could parse out the output from ffmpeg, but hoping there's a solution within Swift.
Many thanks
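Regarding the NALU parsing dead-end: the data from didOutputSegmentData is a complete fMP4 segment (moof + mdat boxes), not bare NALUs, and the length-prefix size is defined by the hvcC box in the initialization segment (commonly, but not always, 4 bytes). A hedged sketch of walking 4-byte length-prefixed HEVC NALUs, assuming `mdatPayload` holds just the contents of the mdat box:

```swift
import Foundation

// Sketch: walk length-prefixed NALUs in an fMP4 'mdat' payload.
// Assumes a 4-byte big-endian length prefix (verify against the hvcC
// box in the init segment). For HEVC, nal_unit_type is bits 1-6 of
// the first NALU byte; IDR/CRA pictures identify I-frame positions.
func nalUnitTypes(in mdatPayload: Data) -> [Int] {
    var types: [Int] = []
    var offset = mdatPayload.startIndex
    while offset + 4 <= mdatPayload.endIndex {
        // Big-endian 4-byte NALU length
        let length = mdatPayload[offset..<offset + 4]
            .reduce(0) { ($0 << 8) | Int($1) }
        let nalStart = offset + 4
        guard length > 0, nalStart + length <= mdatPayload.endIndex else { break }
        types.append(Int(mdatPayload[nalStart] >> 1) & 0x3F)  // HEVC nal_unit_type
        offset = nalStart + length
    }
    return types
}
```

If the prefixes "go wrong" partway through a segment, a mismatch between the assumed and actual prefix size, or parsing from the start of the segment rather than the mdat payload, are the usual suspects.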
Description:
An HLS VOD stream contains several audio tracks that are marked with the same LANGUAGE tag but different NAME tags.
https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/bipbop_16x9_variant.m3u8
e.g.
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="bipbop_audio",LANGUAGE="eng",NAME="BipBop Audio 1",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="bipbop_audio",LANGUAGE="eng",NAME="BipBop Audio 2",AUTOSELECT=NO,DEFAULT=NO,URI="alternate_audio_aac/prog_index.m3u8"
You set up AirPlay from, e.g., an iPhone, iPad, or Mac to an Apple TV or Mac.
Expected behavior:
The audio track dropdown in AVPlayer and QuickTime shows both LANGUAGE and NAME information on the AirPlay receiver, just as on the sender - the user interface is consistent between playing back a local stream and an AirPlay stream.
Current status:
The player UI on the AirPlay receiver shows only the LANGUAGE tag information.
Question:
Do you have an idea whether this is a missing feature of AirPlay itself, or a bug?
Background:
We'd like to offer an additional audio track with enhanced audio characteristics for better understanding of spoken words - "Klare Sprache" ("clear speech").
Technically, "Klare Sprache" works by using an AI-based algorithm that separates speech from other audio elements in the broadcast. This algorithm enhances the clarity of the dialogue by amplifying the speech and diminishing the volume of background sounds like music or environmental noise. The technology was introduced by ARD and ZDF in Germany and is available on select programs, primarily via HD broadcasts and digital platforms like HbbTV.
Users can enable this feature directly from their television's audio settings, where it may be labeled as "deu (qks)" or "Klare Sprache" depending on the device. The feature is available on a growing number of channels and is part of a broader effort to make television more accessible to viewers with hearing difficulties.
It can be correctly signaled in HLS via:
e.g.
https://ccavmedia-amd.akamaized.net/test/bento4multicodec/airplay1.m3u8
# Audio
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="stereo-aac",LANGUAGE="de",NAME="Deutsch",DEFAULT=YES,AUTOSELECT=YES,CHANNELS="2",URI="ST.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="stereo-aac",LANGUAGE="de",NAME="Deutsch (Klare Sprache)",DEFAULT=NO,AUTOSELECT=YES,CHARACTERISTICS="public.accessibility.enhances-speech-intelligibility",CHANNELS="2",URI="KS.m3u8"
Still, the problem remains that with an AirPlay stream you don't get this extra information, only the LANGUAGE tag.
I'm using ScreenCaptureKit for window capture.
I build a filter like the following code (I'm not using the independent-window filter, because sometimes I need to capture multiple windows at the same time):
filter = [[SCContentFilter alloc] initWithDisplay:displayID
                                 includingWindows:includingWindows];
At the beginning, the capture works OK.
When the target window's size or position changes, my code detects the change and calls updateConfiguration as below, and the completionHandler returns without error.
[streamConfiguration setSourceRect:newRect];
[streamConfiguration setWidth:newWidth];
[streamConfiguration setHeight:newHeight];
[scStream updateConfiguration:streamConfiguration
            completionHandler:^(NSError *_Nullable error) {
                if (error) {
                    // some error log
                } else {
                    // update done
                }
            }];
But sometimes it still outputs frames with the old size, and the source rect is still the old one.
In some other cases, it works fine.
Is there any special work needed before calling updateConfiguration to make it reliable?
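One thing that has helped in similar situations (an assumption, not a documented requirement): SCContentFilter captures window geometry when it is created, so refreshing the filter via updateContentFilter alongside updateConfiguration when the window moves or resizes may be necessary. A Swift sketch, where `stream`, `display`, and `windows` are placeholders for your own capture state:

```swift
import ScreenCaptureKit

// Sketch: on window geometry change, rebuild the content filter and
// push it together with the updated configuration. Both calls are
// asynchronous; a few frames with the old geometry may still arrive
// before the update takes effect.
func windowGeometryChanged(stream: SCStream,
                           display: SCDisplay,
                           windows: [SCWindow],
                           newRect: CGRect) {
    let config = SCStreamConfiguration()
    config.sourceRect = newRect
    config.width = Int(newRect.width)
    config.height = Int(newRect.height)

    let filter = SCContentFilter(display: display, including: windows)
    stream.updateContentFilter(filter) { error in
        if let error { print("filter update failed: \(error)") }
    }
    stream.updateConfiguration(config) { error in
        if let error { print("config update failed: \(error)") }
    }
}
```

Checking each frame's SCStreamFrameInfo attachments for the actual content rect, rather than assuming the update applied immediately, also helps distinguish "update never applied" from "update applied late".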
Hi, I'm trying to download a video encrypted using mediafilesegmenter with SAMPLE-AES, not FairPlay...
I can play the video online without any problems.
When I try to download the video using AVAssetDownloadTask, I get an error:
Error Domain=CoreMediaErrorDomain Code=-12160 "(null)"
And if I use a ClearKey system to deliver the key, with a custom scheme in the m3u8, AirPlay doesn't work either.
Does SAMPLE-AES only work with FairPlay?
I can't find any information about it - does anyone know if it is a bug?
I hope someone can help me :)
Hello,
We have a TV app, based on react-native-video, which was tweaked to suit our requirements.
There is a problem with AirPlay streaming.
An asset can be streamed on Apple TV, but when we try to stream it on any TV with AirPlay and choose a language different from the default in the manifest, there is a problem.
Seeking freezes the picture and nothing happens. The funny thing is, if we seek back to within +/-20 seconds of the starting point, the video resumes.
The obvious difference from Apple TV that we were able to identify is that with Apple TV a seek triggers an isPlaybackBufferEmpty observation, while with third-party TVs only isPlaybackLikelyToKeepUp events fire.
Maybe there is a solution to this issue? Or at least a way to forcefully empty the buffer when seek is called?
Thank you
I find the default timeout of 1 second for downloading a segment unreasonable when playing an HLS stream from a server that is transcoding.
Does anyone know if it's possible to change this networking timeout?
Error status: -12889, Error domain: CoreMediaErrorDomain, Error comment: No response for map in 1s. Event: <AVPlayerItemErrorLogEvent: 0x301866250>
Also there is a delegate to control downloading HLS for offline viewing but no delegate for just streaming HLS.
Need some pointers on how to decode RTSP and streaming protocols like RTP, RTMP, and SRT (other than HLS) within visionOS builds using the Unity SDK. Is there a comprehensive and robust decoder solution that works on visionOS in Mixed Reality mode with the PolySpatial package, without the need for a transcoder?
I am playing protected HLS streams whose authorization token expires after 5 minutes. I am trying to handle this with AVAssetResourceLoaderDelegate, and I'm getting a 401 Unauthorized error.
The question is: how do I update the token inside the asset? I've already tried changing it in the resource loader via loadingRequest.allHTTPHeaderFields, but it is not working:
func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
    guard let url = loadingRequest.request.url else {
        loadingRequest.finishLoading(with: NSError(domain: "Invalid URL", code: -1, userInfo: nil))
        return false
    }
    // Create a URLRequest with the initial token
    var request = URLRequest(url: url)
    request.addValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    request.allHTTPHeaderFields = loadingRequest.request.allHTTPHeaderFields
    // Perform the request
    let task = URLSession.shared.dataTask(with: request) { data, response, error in
        if let error = error {
            print("Error performing request: \(error.localizedDescription)")
            loadingRequest.finishLoading(with: error)
            return
        }
        guard let response = response as? HTTPURLResponse, response.statusCode == 200 else {
            let error = NSError(domain: "HTTP Error", code: (response as? HTTPURLResponse)?.statusCode ?? -1, userInfo: nil)
            print("HTTP Error: \(error.localizedDescription)")
            loadingRequest.finishLoading(with: error)
            return
        }
        if let data = data {
            loadingRequest.dataRequest?.respond(with: data)
        }
        loadingRequest.finishLoading()
    }
    task.resume()
    return true
}
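One detail worth checking in the code above: the assignment to `request.allHTTPHeaderFields` replaces the entire header dictionary, which wipes out the Authorization value set just before it. A sketch that copies the headers first and sets the token last; `currentToken` is a hypothetical provider you would implement to return the refreshed token:

```swift
import AVFoundation

// Sketch: build the upstream request so the refreshed token survives.
// Setting Authorization *after* copying allHTTPHeaderFields avoids the
// overwrite; `currentToken` is a hypothetical token provider closure.
func upstreamRequest(for loadingRequest: AVAssetResourceLoadingRequest,
                     url: URL,
                     currentToken: () -> String) -> URLRequest {
    var request = URLRequest(url: url)
    request.allHTTPHeaderFields = loadingRequest.request.allHTTPHeaderFields
    request.setValue("Bearer \(currentToken())", forHTTPHeaderField: "Authorization")
    return request
}
```

Note also that the resource loader delegate is only consulted for custom-scheme URLs; plain https segment requests never reach this callback, so this path cannot refresh tokens on ordinary segment fetches.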
Hi
I'm trying to play a 4K video on my Apple TV 4K, but I get an error in AVPlayer.
Error Domain=CoreMediaErrorDomain Code=-16170
I can't get any more information.
Example HLS manifest with a 4K video track:
#EXT-X-STREAM-INF:AUDIO="aud_mp4a.40.2",AVERAGE-BANDWIDTH=11955537,BANDWIDTH=12256000,VIDEO-RANGE=SDR,CODECS="hvc1.1.6.L153.90,mp4a.40.2",RESOLUTION=3840x2160,FRAME-RATE=50,HDCP-LEVEL=TYPE-1
video_4/stream.m3u8
Maybe a problem with hvc1? But as far as I know, Apple TV supports HEVC.
Hi Guys,
I'm working on adding LL-HLS support to Ant Media Server. I'm following the documentation in hlstools for streaming and testing mediastreamsegmenter and tsrecompressor. What I wonder about is that the sample uses 1002 ms for --part-target-duration-ms (-w in short form), as below:
mediastreamsegmenter -w 1002 -t 4 224.0.0.50:9123 -s 16 -D -T -f /Library/WebServer/Documents/2M/
It works this way.
mediastreamsegmenter -w 1000 -t 4 224.0.0.50:9123 -s 16 -D -T -f /Library/WebServer/Documents/2M/
It also works this way.
mediastreamsegmenter -w 1000 -t 4 224.0.0.50:9123 -s 16 -D -T --iso-fragmented -f /Library/WebServer/Documents/2M/
It crashes this way, when I add --iso-fragmented, and mediastreamsegmenter gives the following error:
encountered failure write segment failed (-17543) - exiting
It works if I use 1001 or 1003.
I wonder if there is a reason for that, or is it a bug?
Dear All,
Since installing the iOS 18 public beta, I can't send music from my iPhone to my old AirPort Express Gen 1 (unable to connect). Is this a general problem?
Thanks for your feedback,
Patrick
I have an HDR10+ encoded video that plays back on the Apple Vision Pro when loaded as a .mov, but when that video is encoded using the latest (1.23b) Apple HLS tools to generate an fMP4, the resulting m3u8 cannot be played back on the Apple Vision Pro and I only get a "Cannot Open" error.
To generate the m3u8, I'm just calling mediafilesegmenter (with -iso-fragmented) and then variantplaylistcreator. This completes with no errors, and the m3u8 plays back on the Mac using VLC, but not on the Apple Vision Pro.
The relevant part of the m3u8 is:
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=40022507,BANDWIDTH=48883974,VIDEO-RANGE=PQ,CODECS="ec-3,hvc1.1.60000000.L180.B0",RESOLUTION=4096x4096,FRAME-RATE=24.000,CLOSED-CAPTIONS=NONE,AUDIO="audio1",REQ-VIDEO-LAYOUT="CH-STEREO"
{{url}}
Has anyone been able to use the HLS tools to generate fMP4s of MV-HEVC videos with HDR10?
I am implementing Siri Shortcuts for a radio app on iOS. I have implemented an AppIntent that sends a notification to the app, and the app should then start playing the stream in AVPlayer.
The AppIntent sometimes works and sometimes doesn't. So far I couldn't find a pattern for when/why it works and when it doesn't. Sometimes it works even if the app is killed or in the background; sometimes it doesn't work when the app is in the background or when it is killed.
I have been observing logs in Console, and apparently it sometimes stops when AVPlayer tries to figure out the buffer size (then I get AVPlayerWaitingToMinimizeStallsReason in Console and the AVPlayerItem status is set to .unknown). Sometimes it figures it out quickly (for the same stream) and starts playing.
Sometimes when the app is killed, after the AppIntent call the app is launched in the background (at least I see it as a process in Console), receives the notification from the AppIntent, and starts playing. Sometimes the app is not launched at all, its process is not visible in Console, so it doesn't receive the notification and doesn't play.
I have set up the session correctly (category .playback without any options, activated), set the AVPlayerItem's preferredForwardBufferDuration to 0 (the default), and AVPlayer's automaticallyWaitsToMinimizeStalling to true.
Background processing, Audio, AirPlay, Picture in Picture, and Siri are added in the Signing & Capabilities section of the app project settings.
Here are the code examples:
Play AppIntent (Stop App Intent is constructed the same way):
@available(iOS 16, *)
struct PlayStationIntent: AudioPlaybackIntent {
    static let title: LocalizedStringResource = "Start playing"
    static let description = IntentDescription("Plays currently selected radio")

    @MainActor
    func perform() async throws -> some IntentResult {
        NotificationCenter.default.post(name: IntentsNotifications.siriPlayCurrentStationNotificationName, object: nil)
        return .result()
    }
}
AppShortcutsProvider:
struct RadioTestShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: PlayStationIntent(),
            phrases: [
                "Start station in \(.applicationName)",
            ],
            shortTitle: LocalizedStringResource("Play station"),
            systemImageName: "radio"
        )
    }
}
Player object:
class Player: ObservableObject {
    private let session = AVAudioSession.sharedInstance()
    private let streamURL = URL(string: "http://radio.rockserwis.fm/live")!
    private var player: AVPlayer?
    private var item: AVPlayerItem?
    var cancellables = Set<AnyCancellable>()

    typealias UInfo = [AnyHashable: Any]

    @Published var status: Player.Status = .stopped
    @Published var isPlaying = false

    func setupSession() {
        do {
            try session.setCategory(.playback)
        } catch {
            print("*** Error setting up category audio session: \(error), \(error.localizedDescription)")
        }
        do {
            try session.setActive(true)
        } catch {
            print("*** Error setting audio session active: \(error), \(error.localizedDescription)")
        }
    }

    func setupPlayer() {
        item = AVPlayerItem(url: streamURL)
        item?.preferredForwardBufferDuration = TimeInterval(0)
        player = AVPlayer(playerItem: item)
        player?.automaticallyWaitsToMinimizeStalling = true
        player?.allowsExternalPlayback = false
        let metaDataOuptut = AVPlayerItemMetadataOutput(identifiers: nil)
    }

    func play() {
        setupPlayer()
        setupSession()
        handleInterruption()
        player?.play()
        isPlaying = true
        player?.currentItem?.publisher(for: \.status)
            .receive(on: DispatchQueue.main)
            .sink(receiveValue: { status in
                self.handle(status: status)
            })
            .store(in: &self.cancellables)
    }

    func stop() {
        player?.pause()
        player = nil
        isPlaying = false
        status = .stopped
    }

    func handle(status: AVPlayerItem.Status) {
        ...
    }

    func handleInterruption() {
        ...
    }

    func handle(interruptionType: AVAudioSession.InterruptionType?, userInfo: UInfo?) {
        ...
    }
}

extension Player {
    enum Status {
        case waiting, ready, failed, stopped
    }
}

extension Player {
    func setupRemoteTransportControls() {
        ...
    }
}
Content view:
struct ContentView: View {
    @EnvironmentObject var player: Player

    var body: some View {
        VStack(spacing: 20) {
            Text("AppIntents Radio Test App")
                .font(.title)
            Button {
                if player.isPlaying {
                    player.stop()
                } else {
                    player.play()
                }
            } label: {
                Image(systemName: player.isPlaying ? "pause.circle" : "play.circle")
                    .font(.system(size: 80))
            }
        }
        .padding()
    }
}

#Preview {
    ContentView()
}
Main struct:
import SwiftUI

@main
struct RadioTestApp: App {
    let player = Player()
    let siriPlayCurrentPub = NotificationCenter.default.publisher(for: IntentsNotifications.siriPlayCurrentStationNotificationName)
    let siriStop = NotificationCenter.default.publisher(for: IntentsNotifications.siriStopRadioNotificationName)

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(player)
                .onReceive(siriPlayCurrentPub, perform: { _ in
                    player.play()
                })
                .onReceive(siriStop, perform: { _ in
                    player.stop()
                })
        }
    }
}
Issue found in Native App or Hybrid App: Native
OS Version: Any
Device: Any
Description:
We are using AVPlayer for streaming videos in our iOS application. The streaming works fine in the lower sandbox environment, but we are encountering a "server not properly configured" error in the production environment.
Steps to Reproduce:
Configure AVPlayer with a video URL from the production server.
Attempt to play the video.
Expected Behavior:
The video should stream successfully, as it does in the sandbox environment.
Actual Behavior:
AVPlayer fails to stream the video and reports a "server not properly configured" error.
tl;dr how can I get raw YUV in a Metal fragment shader from a VideoToolbox 10-bit/BT.2020 HEVC stream without any extra/secret format conversions?
With VideoToolbox and 10-bit HEVC, I've found that it defaults to CVPixelBuffers with the formats kCVPixelFormatType_Lossless_420YpCbCr10PackedBiPlanarFullRange or kCVPixelFormatType_Lossy_420YpCbCr10PackedBiPlanarFullRange. To mitigate this, I added the following snippet of code to my application:
// We need our pixels unpacked for 10-bit so that the Metal textures actually work
var pixelFormat: OSType? = nil
let bpc = getBpcForVideoFormat(videoFormat!)
let isFullRange = getIsFullRangeForVideoFormat(videoFormat!)

// TODO: figure out how to check for 422/444, CVImageBufferChromaLocationBottomField?
if bpc == 10 {
    pixelFormat = isFullRange ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
}

let videoDecoderSpecification: [NSString: AnyObject] = [kVTVideoDecoderSpecification_EnableHardwareAcceleratedVideoDecoder: kCFBooleanTrue]
var destinationImageBufferAttributes: [NSString: AnyObject] = [kCVPixelBufferMetalCompatibilityKey: true as NSNumber, kCVPixelBufferPoolMinimumBufferCountKey: 3 as NSNumber]
if pixelFormat != nil {
    destinationImageBufferAttributes[kCVPixelBufferPixelFormatTypeKey] = pixelFormat! as NSNumber
}

var decompressionSession: VTDecompressionSession? = nil
err = VTDecompressionSessionCreate(allocator: nil, formatDescription: videoFormat!, decoderSpecification: videoDecoderSpecification as CFDictionary, imageBufferAttributes: destinationImageBufferAttributes as CFDictionary, outputCallback: nil, decompressionSessionOut: &decompressionSession)
In short, I need kCVPixelFormatType_420YpCbCr10BiPlanar so that I have a straightforward MTLPixelFormat.r16Unorm/MTLPixelFormat.rg16Unorm texture binding for Y/CbCr. Metal, seemingly, has no direct pixel format for 420YpCbCr10PackedBiPlanar. I'd also rather not use any color conversion in VideoToolbox, in order to save on processing (and to ensure that the color transforms/transfer characteristics match between streamer/client, since I also have a custom transfer characteristic to mitigate blocking in dark scenes).
However, I noticed that in visionOS 2, the CVPixelBuffer I receive is no longer a compressed render target (likely a bug), which caused GPU texture read bandwidth to skyrocket from 2GiB/s to 30GiB/s. More importantly, this implies that VideoToolbox may in fact be doing an extra color conversion step, wasting memory bandwidth.
Does Metal actually have no way to handle 420YpCbCr10PackedBiPlanar? Are there any examples for reading 10-bit HDR HEVC buffers directly with Metal?
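As far as I know there is indeed no MTLPixelFormat for the packed 10-bit biplanar layouts, so requesting the unpacked kCVPixelFormatType_420YpCbCr10BiPlanar format (as the snippet above does) and binding each plane through a CVMetalTextureCache is the usual route. A sketch, assuming the cache was created earlier with CVMetalTextureCacheCreate:

```swift
import CoreVideo
import Metal

// Sketch: bind the two planes of a kCVPixelFormatType_420YpCbCr10BiPlanar*
// buffer as .r16Unorm (Y) and .rg16Unorm (CbCr) Metal textures.
// The 10 significant bits sit in the high bits of each 16-bit word,
// so the normalized values read in the shader land very close to the
// intended 0-1 range without extra scaling.
func planeTextures(from pixelBuffer: CVPixelBuffer,
                   cache: CVMetalTextureCache) -> (y: MTLTexture, cbcr: MTLTexture)? {
    func texture(plane: Int, format: MTLPixelFormat) -> MTLTexture? {
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil, format,
            CVPixelBufferGetWidthOfPlane(pixelBuffer, plane),
            CVPixelBufferGetHeightOfPlane(pixelBuffer, plane),
            plane, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
    guard let y = texture(plane: 0, format: .r16Unorm),
          let cbcr = texture(plane: 1, format: .rg16Unorm) else { return nil }
    return (y, cbcr)
}
```

This gives the fragment shader raw Y and CbCr samples with no intermediate RGB conversion; whether VideoToolbox performs an internal repack to satisfy the unpacked format request is, as the post suspects, not documented.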
The same FairPlay-encrypted H.265 content can be played on all Apple devices except the A1625.
Clear (unencrypted) H.265 content plays on the A1625.
The question is: will this model (A1625) support FairPlay-encrypted H.265 content?
A ticket was created here:
https://discussions.apple.com/thread/255658006?sortBy=best
I've seen equalizers in apps like Musi and Spotify. I think (but am not sure) they use HLS streaming. If so, how do you implement such an equalizer for HLS?
I searched and tried several approaches, but so far none works:
AVAudioEngine seems to only support local files;
downloading the .ts segments and merging them into an .mp3 to make playback local cannot guarantee a real-time effect;
MTAudioProcessingTap needs the audio track - for a remote .mp3 I can extract the audio track, but not for HLS.
Any suggestions?
We found that when FairPlay is enabled on streams that use the AVC video codec and AC-3 audio in MP4, playback fails on some devices with CoreMedia error 1718449215 about a second after playback starts.
Devices that play successfully: iPhone SE
Devices that fail: iPhone 12 Pro, iPhone 14 Pro
I am writing to report an issue encountered with the playback of HLS (HTTP Live Streaming) streams that I believe is specific to iOS version 17. The problem manifests when certain conditions are met during the playback of concatenated HLS segments, particularly those with low video bitrate. Below, I will detail the background, symptoms, and steps required to reproduce the issue.
Background:
Our business scenario requires concatenating two HLS playlists, referred to as 1.m3u8 and 2.m3u8, into a single playlist 12.m3u8. An example of such a playlist is as follows:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:2.0,
1.1.ts
#EXTINF:2.0,
1.2.ts
#EXTINF:2.0,
1.3.ts
#EXT-X-DISCONTINUITY
#EXTINF:2.0,
2.1.ts
#EXTINF:2.0,
2.2.ts
#EXT-X-ENDLIST
Problem Symptoms:
On PC web browsers, Android devices, and iOS versions 13 and 15, the following is observed:
Natural playback completion occurs without any issues.
Seeking to different points within the stream (e.g., from 3 seconds to 9 seconds) works as expected.
However, on iOS version 17, there is a significant issue:
Natural playback completion is unaffected.
When seeking to various points within the first playlist (1.m3u8) after playing for 1, 2, or 3 seconds, the audio for the last 3 seconds of 1.m3u8 gets lost.
Conditions for Replication:
The issue only arises when all the following conditions are satisfied:
The video content is generated from a single image and an audio track, ensuring sound presence in the final 3 seconds.
The video stream bitrate is below 500 Kbps. (Tested with 1393 Kbps bitrate, which did not trigger the issue.)
The HLS streams are concatenated using the #EXT-X-DISCONTINUITY tag to form the combined 12.m3u8 playlist. (No issues occur when streams are not concatenated.)
Seek operations are performed during playback. (No issues occur without seek operations.)
The issue is exclusive to iOS version 17. (No issues reported on iOS versions 13 and 15.)
Disrupting any one of these conditions results in normal playback behavior.
Steps to Reproduce:
Using FFmpeg, generate a video from a single image and an audio track, with a suggested duration of 10 to 20 seconds for testing convenience.
If the video's bitrate exceeds 1000 Kbps, consider transcoding it to 500 Kbps or lower to avoid potential edge-case issues.
Convert the 1.mp4 file into 1.m3u8 using FFmpeg. The segment duration can be set to between 1 and 5 seconds (tested with both 2-second and 5-second durations).
Duplicate 1.m3u8 as 2.m3u8, then concatenate 1.m3u8 and 2.m3u8 into 12.m3u8 as shown below:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:2.0,
1.1.ts
#EXTINF:2.0,
1.2.ts
#EXT-X-DISCONTINUITY
#EXTINF:2.0,
1.1.ts
#EXTINF:2.0,
1.2.ts
#EXT-X-ENDLIST
On an iOS 17 device, play 12.m3u8 for 1, 2, or 3 seconds, then seek to any point between 7 and 9 seconds (within the duration of 1.m3u8). This action results in the loss of audio for the last 3 seconds of 1.m3u8.