Streaming

Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.

mediastreamvalidator always signals 'unsupported audio track codec' with SAMPLE-AES encrypted streams
Hi,

When I run mediastreamvalidator on HLS streams (produced by mediafilesegmenter) with SAMPLE-AES applied, the latest version always signals:

Unsupported audio track codec: Unknown

as a "MUST FIX" issue. This occurs even when I run the validator against the FairPlay Streaming example content supplied with the developer SDK package.

This seems to have been introduced in HTTPLiveStreamingTools Version 1.2 (170524); it does not happen with tools Version 1.2 (160525).

Is this a bug in the tool, or perhaps a new requirement? Hopefully someone can help me out with this...
Replies: 2 · Boosts: 0 · Views: 1.6k · Jun ’17
HLS media playlist Captions and the DEFAULT attribute
When describing closed-caption renditions through HLS master playlists, should a properly formed playlist with closed captions contain one closed-caption rendition with the DEFAULT=YES attribute? I searched the HLS RFC, and section 4.3.4.1 on EXT-X-MEDIA says that no more than one rendition in a group may contain DEFAULT=YES. I was hoping to find a recommendation on whether, with one or more EXT-X-MEDIA renditions in the same group, it is required that one of them contain DEFAULT=YES.

The media player I am using does not display closed captions if none of the closed-caption renditions contains DEFAULT=YES, and I am wondering whether that is an issue with the player or a malformed HLS playlist. Note the lack of a DEFAULT=YES attribute in the example playlist below. Should this still be considered a valid playlist?

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="CC",LANGUAGE="eng",NAME="English",INSTREAM-ID="CC1"
#EXT-X-STREAM-INF:BANDWIDTH=2103200,AVERAGE-BANDWIDTH=2305600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=960x540,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=804760,AVERAGE-BANDWIDTH=875600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=1304160,AVERAGE-BANDWIDTH=1425600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=768x432,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=505120,AVERAGE-BANDWIDTH=545600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=480x270,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=3102000,AVERAGE-BANDWIDTH=3405600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
Replies: 1 · Boosts: 0 · Views: 4.0k · Jan ’20
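
For reference: in RFC 8216 §4.3.4.1 the DEFAULT attribute is optional, and its absence implies DEFAULT=NO, which suggests the playlist above is syntactically valid and the player behavior, not the playlist, is at fault. If one rendition should be auto-selected anyway, the change is a single attribute; a sketch of the one modified line:

#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="CC",LANGUAGE="eng",NAME="English",DEFAULT=YES,INSTREAM-ID="CC1"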
Putting the TS into tsrecompressor
I notice that precious few HTTP Live Streaming questions have gotten responses, let alone answers. But I'll be optimistic!

I'd like to try streaming my own live video through the LL-HLS tools (currently prerelease 73), in multiple bitrates.

I succeeded in following directions: I use tsrecompressor to generate "bip-bop" video and pass three compressed variations of that into three instances of mediastreamsegmenter, then out through three instances of ll-hls-origin-example.go. It works as promised, end to end. (Brief aside, for any who may stumble after me: it took me too long to realize that I should improve my knowledge of IP multicasting and use the prescribed 224.0.0.50 address. I got nowhere trying to simply route familiar UDP unicast between my processes.)

So far, so good. Now I want to supply my own video from an external feed, not the generated "bip-bop" or any local capture devices.

% tsrecompressor --help
tsrecompressor: unrecognized option `--help'
Read input MPEG-2 TS, recompress and write to output.
Usage: tsrecompressor [options]
where options are:
    -i | --input-file= : input file path (default is stdin)
    ... etc.

That sounds fantastic; I'd love to feed tsrecompressor through stdin! But in what format? It doesn't say, and my first few dozen guesses came up cold.

The man page for mediastreamsegmenter appears to point the way:

    The mediastreamsegmenter only accepts MPEG-2 Transport Streams as defined in ISO/IEC 14496-1 as input. The transport stream must contain H.264 (MPEG-4, part 10) video and AAC or MPEG audio. If AAC audio is used, it must have ADTS headers. H.264 video access units must use Access Unit Delimiter NALs, and must be in unique PES packets.

Of course, that's mediastreamsegmenter and not tsrecompressor. But it's a start. So this is my best guess at the appropriate ffmpeg output. (Recall that I want to eventually pass a live stream into ffmpeg; for now I'm starting with an m4v file.)

% ffmpeg -re -i infile.m4v \
    -c:v h264_************ \
    -c:a aac_at \
    -f mpegts \
    - | tsrecompressor -h -a \
    -O 224.0.0.50:9121 \
    -L 224.0.0.50:9123 \
    -P 224.0.0.50:9125

This ends abruptly after 9 frames:

av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe

My best results are when I change from H.264 to H.265:

% ffmpeg -re -i infile.m4v \
    -c:v hevc_************ \
    -c:a aac_at \
    -f mpegts \
    - | tsrecompressor -h -a \
    -O 224.0.0.50:9121 \
    -L 224.0.0.50:9123 \
    -P 224.0.0.50:9125

Now it doesn't break the pipe. It keeps counting along, frame after frame. The VTEncoderXPCService starts up, and sampling of the tsrecompressor process shows both producer and consumer threads for audio recompression.

But there's no output. There was output for the generated "bip-bop" video, but not for HEVC-TS via stdin. I'm not 100% certain yet, but I see no indication of any UDP output from tsrecompressor. The three mediastreamsegmenter processes sit idle.

Am I missing some tag, or something, in the input stream? Do I need to pay more attention to chunk sizes and frame offsets?

Thanks, all, for any insight or experience.
Replies: 7 · Boosts: 0 · Views: 1.9k · Apr ’20
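
A sketch of one avenue worth trying, based only on the mediastreamsegmenter requirements quoted above (AUD NALs, ADTS AAC): libx264 can be told to emit access unit delimiters, which some hardware encoders omit. The flags are real ffmpeg/x264 options, but whether missing AUDs are what leaves tsrecompressor silent is an assumption.

# aud=1 makes libx264 insert Access Unit Delimiter NALs;
# the mpegts muxer wraps AAC in ADTS headers on its own.
% ffmpeg -re -i infile.m4v \
    -c:v libx264 -x264-params aud=1 \
    -c:a aac \
    -f mpegts \
    - | tsrecompressor -h -a \
    -O 224.0.0.50:9121 \
    -L 224.0.0.50:9123 \
    -P 224.0.0.50:9125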
How to insert timed metadata (id3) into live HLS files with Apple's mediastreamsegmenter and ffmpeg
I am trying to insert timed metadata (ID3) into a live HLS stream created with Apple's mediastreamsegmenter tool. I am getting the video from an ffmpeg stream; here is the command I run to test from an existing file:

ffmpeg -re -i vid1.mp4 -vcodec libx264 -acodec aac -f mpegts - | mediastreamsegmenter -f /Users/username/Sites/video -s 10 -y test -m -M 4242 -l log.txt

To inject metadata, I run this command:

id3taggenerator -text '{"x":"data dan","y":"36"}' -a localhost:4242

This setup creates the expected .ts files, and I can play back the video/audio with no issues. However, the metadata I am attempting to insert does not work in the final file. I know the metadata is there in some form: when I file-compare a no-metadata version of the video to one I injected metadata into, I can see the ID3 tags within the binary data.

Bad file analysis

When I analyze the generated files using ffmpeg (ffmpeg -i video1.ts), the output I get is:

[mpegts @ 0x7fb00a008200] start time for stream 2 is not set in estimate_timings_from_pts
[mpegts @ 0x7fb00a008200] stream 2 : no TS found at start of file, duration not set
[mpegts @ 0x7fb00a008200] Could not find codec parameters for stream 2 (Audio: mp3, 0 channels): unspecified frame size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mpegts, from 'video1.ts':
  Duration: 00:00:10.02, start: 0.043444, bitrate: 1745 kb/s
  Program 1
    Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 848x464 [SAR 1:1 DAR 53:29], 30 fps, 30 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 130 kb/s
  No Program
    Stream #0:2[0x102]: Audio: mp3, 0 channels

Note how the third stream (#0:2) is marked as mp3; this is incorrect! Also, it says "No Program" instead of being in "Program 1".

When I analyze a properly encoded video file with inserted ID3 metadata that I created with Apple's mediafilesegmenter tool, the analysis shows a timed_id3 track, and this metadata track works properly in my web browser.

Good file analysis

ffmpeg -i video1.ts:

Input #0, mpegts, from 'video1.ts':
  Duration: 00:00:10.08, start: 19.984578, bitrate: 1175 kb/s
  Program 1
    Stream #0:0[0x101]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 848x464, 30 fps, 30 tbr, 90k tbn, 180k tbc
    Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 67 kb/s
    Stream #0:2[0x103]: Data: timed_id3 (ID3  / 0x20334449)

I must use mediastreamsegmenter because that is required for live streams. Does anyone know how I can get timed ID3 metadata into a live HLS stream properly?
Replies: 2 · Boosts: 0 · Views: 2.2k · Sep ’21
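
Not an answer to the muxing question itself, but useful for verifying whether injected ID3 actually reaches the client: AVPlayerItemMetadataOutput surfaces timed ID3 groups during playback on Apple platforms. A minimal sketch (class and variable names are my own):

import AVFoundation

final class MetadataReader: NSObject, AVPlayerItemMetadataOutputPushDelegate {
    func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                        didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                        from track: AVPlayerItemTrack?) {
        for group in groups {
            for item in group.items {
                // For ID3 text frames the value is typically a String.
                print(item.identifier?.rawValue ?? "?", item.value ?? "nil")
            }
        }
    }
}

// Attach to the item before (or during) playback:
let reader = MetadataReader()
let output = AVPlayerItemMetadataOutput(identifiers: nil)
output.setDelegate(reader, queue: .main)
playerItem.add(output)   // playerItem: your AVPlayerItem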
CoreMediaErrorDomain : code : -16012
Hi,

We are getting big spikes of errors on a few programs in some live channels:

[-16012:CoreMediaErrorDomain] [Error Domain=CoreMediaErrorDomain Code=-16012 "(null)"]

When this error occurs, AVPlayer stops and users have to restart playback. This happens in some live programs. We have the same setup and use the same transcoders etc. in all programs. Mostly we have a very low error rate in the player with live programs, but with this error the error rate can increase up to 80% of users, affecting pretty much all users on Apple devices.

Does anyone know what this error actually means? What is the context, and what is the reason behind it? It seems this may be related to subtitles, and it occurs only when subtitles are enabled. (The subtitles are not embedded in the stream; it is teletext.) I tried to find this in Apple documentation and online, but unfortunately found nothing.
Replies: 1 · Boosts: 0 · Views: 1.3k · May ’23
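
When triaging errors like this, the item's error log often carries more context (the failing URI, an error comment) than the terminal player error. A small observation sketch, assuming an existing AVPlayerItem named item:

import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVPlayerItem.newErrorLogEntryNotification,
    object: item, queue: .main) { _ in
    // Each event records the error domain/status plus the URI that failed,
    // which helps correlate -16012 with the subtitle (teletext) requests.
    if let event = item.errorLog()?.events.last {
        print(event.errorDomain, event.errorStatusCode,
              event.errorComment ?? "", event.uri ?? "")
    }
}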
How to intercept HLS Playlist chunk request for CDN token implementation
Hello,

We are using HLS for our streaming iOS and tvOS applications. We have DRM protection on our applications, but we want to add another security layer, which is a CDN token. We want to send that CDN token in a header or in query parameters; either is applicable on our CDN side.

There is a problem on the client side. We want to send that token and refresh it at a given interval. We add the token data at the initial state with

let asset = AVURLAsset(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headers])

and add an interceptor with asset.resourceLoader.setDelegate. It works seamlessly. We use AVAssetResourceLoaderDelegate, and we can intercept just the master playlist and the media playlists via

func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool

so we can refresh the CDN token only on playlists. The token can be in query params or in a header; it does not matter. For example, assume this is our .m3u8 file for a given live video playlist:

#EXTM3U
#EXT-X-VERSION:3
#EXTINF:10.0
https://chunk1?cdntoken=A.ts
#EXTINF:10.0
https://chunk2?cdntoken=A.ts
#EXTINF:10.0
https://chunk3?cdntoken=A.ts
#EXTINF:10.0

It has three chunks with the CDN token in query params. If we give those chunks to AVPlayer, it plays them in order. When the CDN rotates the token in the query params, our chunk URLs change and our player stalls, because the CDN adds the new token to each chunk's URL. The next playlist then looks like this:

#EXT-X-VERSION:3
#EXTINF:10.0
https://chunk4?cdntoken=B.ts
#EXTINF:10.0
https://chunk5?cdntoken=B.ts
#EXTINF:10.0
https://chunk6?cdntoken=B.ts
#EXTINF:10.0

The token is changed from A to B on the CDN side, and it sends a new playlist like that; that's why our player stalls. Is there any way to keep the player from stalling when chunk URLs are edited?

When we change the CDN token in a header instead, the chunk URLs do not change as in the first question, but AVPlayer does not let us intercept chunk URLs; that is, before calling the https://chunk1?cdntoken=A.ts URL, I want to intercept it and add the new CDN token to the header. Is there any way to intercept chunk URLs the way we intercept playlists?

Thanks for answers in advance
Replies: 3 · Boosts: 0 · Views: 1.4k · Jun ’23
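
On the second question: as far as I know, AVAssetResourceLoaderDelegate is only consulted for URLs whose scheme AVPlayer cannot handle itself, so https segment requests bypass it; that is why only the playlists can be intercepted. The usual workaround is therefore to rewrite the playlist text at playlist-load time so the segment URIs already carry the fresh token. A sketch under those assumptions (the customhls scheme and currentToken are hypothetical):

func resourceLoader(_ loader: AVAssetResourceLoader,
                    shouldWaitForLoadingOfRequestedResource request: AVAssetResourceLoadingRequest) -> Bool {
    guard let url = request.request.url, url.scheme == "customhls",
          var comps = URLComponents(url: url, resolvingAgainstBaseURL: false) else { return false }
    comps.scheme = "https"   // restore the real scheme for the network fetch
    URLSession.shared.dataTask(with: comps.url!) { data, _, error in
        guard let data, var playlist = String(data: data, encoding: .utf8) else {
            request.finishLoading(with: error)
            return
        }
        // Illustrative only: stamp the current token into every segment URI.
        // A production version would rewrite each URI line properly.
        playlist = playlist.replacingOccurrences(of: "cdntoken=A", with: "cdntoken=\(currentToken)")
        request.dataRequest?.respond(with: Data(playlist.utf8))
        request.finishLoading()
    }.resume()
    return true
}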
Video Streaming Protocols Supported
I am aware that HLS is required for most video streaming use cases (watching a movie, TV show, or YouTube video). This is a requirement for all apps. However, I am confused as to whether this would also apply to video chat/video conferencing apps. It would be inefficient to upload compressed video using rtmp/rtp, decompress it, and create HLS segments. Low latency requirements only make this worse. So, is it permissible to use other protocols for video conferencing use cases? Thanks
Replies: 1 · Boosts: 0 · Views: 572 · Jul ’23
Offline FairPlay: persistable content key data fails to load for download
Hi, we have an HLS+FPS stream. In our app implementation through AVAssetResourceLoader we get the SPC and generate the CKC correctly, and the stream plays successfully.

But when we try to download the stream for offline use through session.processContentKeyRequest for the content ID, we get the SPC and generate the CKC the same as before, but we can't receive the persistent key data from keyRequest.persistableContentKey. It fails with:

Error Domain=AVFoundationErrorDomain Code=-11835 "Cannot Open" UserInfo={NSLocalizedFailureReason=This content is not authorized., NSLocalizedDescription=Cannot Open, NSUnderlyingError=0x282da98f0 {Error Domain=NSOSStatusErrorDomain Code=-42668 "(null)"}}

The server and client are the same; the FPS stream plays, but the persistent key data does not load for offline use. We took all the offline-related code from the HLS Catalog example and implemented requestCertificate and requestContentKeyFromKeySecurityModule, and I don't understand why we don't receive the persistent key data.
Replies: 1 · Boosts: 0 · Views: 661 · Jul ’23
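
For comparison, the persistable-key path looks roughly like the sketch below, loosely following Apple's HLS Catalog sample (requestCKC(for:) stands in for your key-server round trip, and certificateData/contentIDData are placeholders). The comment on the error is an assumption drawn from the error text: "not authorized" at exactly this step often means the CKC was not minted with offline persistence enabled on the key-server side.

import AVFoundation

func contentKeySession(_ session: AVContentKeySession,
                       didProvide keyRequest: AVContentKeyRequest) {
    // Ask AVFoundation to re-deliver this request as a persistable one.
    do { try keyRequest.respondByRequestingPersistableContentKeyRequestAndReturnError() }
    catch { /* fall back to the streaming (non-persistable) path */ }
}

func contentKeySession(_ session: AVContentKeySession,
                       didProvide keyRequest: AVPersistableContentKeyRequest) {
    keyRequest.makeStreamingContentKeyRequestData(
        forApp: certificateData,          // your FPS application certificate
        contentIdentifier: contentIDData, // the asset's content ID bytes
        options: nil) { spc, error in
        guard let spc else { return }
        let ckc = requestCKC(for: spc)    // hypothetical key-server call
        do {
            // Fails with AVFoundationError -11835 / OSStatus -42668 when the
            // CKC does not authorize persistence (assumption, see above).
            let keyData = try keyRequest.persistableContentKey(fromKeyVendorResponse: ckc, options: nil)
            // Store keyData for offline use, then complete the request:
            keyRequest.processContentKeyResponse(
                AVContentKeyResponse(fairPlayStreamingKeyResponseData: keyData))
        } catch {
            print("PKD failed:", error)
        }
    }
}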
HLS Stream Master Playlist Unavailable on iPhone
I am facing an issue with video content that I have converted to HLS playlist content (using ffmpeg) and added to an S3 bucket that is shared through a CloudFront distribution. My scenario is the following: I have a bucket called bucket-a, with a "folder" video-1 which contains the following files:

output.m3u8
output0.ts
...
output15.ts
audio/
  audio.aac
image.jpg

All items in bucket-a are blocked from public access through S3. Content is only vended through a CloudFront distribution which has origin bucket-a. I am able to access https://.cloudfront.net/path/output.m3u8 in a desktop browser without fail, and no errors are thrown. But the file output.m3u8 and all .ts files are not available in iPhone mobile browsers.

The peculiar part is that this is not true for all playlist content in bucket-a. For example, I have a "folder" video-2 within bucket-a that has the same file structure as video-1 and is completely accessible through all mobile browsers. Here is an example master playlist that errors: https://dbs3s11vyxuw0.cloudfront.net/bottle-promo/script_four/output.m3u8. Even more head-scratching is that I am able to access all the playlists within this playlist.

What I've tried:

Initially, I believed the issue was due to the way the video was transcoded, so I standardized the transcoding.
Then I believed the issue was due to CloudFront permissions, though those seem to be fine.
I've validated my stream here: https://ott.dolby.com/OnDelKits_dev/StreamValidator/Start_Here.html

Not sure which way to turn.
Replies: 1 · Boosts: 0 · Views: 709 · Aug ’23
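
One quick way to narrow this down: iOS Safari is stricter than desktop browsers about the Content-Type (and CORS headers) served for HLS resources, so compare the headers CloudFront returns for the failing and the working folders. A sketch; the expected MIME types are the standard ones, but whether this is the actual cause here is an assumption:

% curl -sI https://dbs3s11vyxuw0.cloudfront.net/bottle-promo/script_four/output.m3u8 \
    | grep -i -e 'content-type' -e 'access-control'
# expect: application/vnd.apple.mpegurl (or audio/mpegurl) for .m3u8,
#         video/mp2t for .ts segments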
How to minimize the configuredTimeOffsetFromLive property for HLS live playback?
Hi, I am using HLS playback for live broadcasting, with AVPlayerItem. In live playback, I found that seekableDuration always has some offset from the latest moment compared to the same playback on Chrome or Android. As far as I have dug into it, the difference approximately matches recommendedTimeOffsetFromLive (usually 6 to 9 seconds in my tests). The problem is, I tried to minimize configuredTimeOffsetFromLive, but it does not have any effect. Even if I set it to 1 or 2 seconds, the offset is always the same as recommendedTimeOffsetFromLive. I tried changing automaticallyPreservesTimeOffsetFromLive as well, but nothing seems to work. How do these properties work, and how can I minimize the time offset?
Replies: 0 · Boosts: 0 · Views: 379 · Aug ’23
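
For reference, the configuration below uses the real AVPlayerItem API. My reading (an assumption, not documented behavior) is that the player clamps the configured offset to what the stream's own hold-back allows, which would explain why it snaps back to recommendedTimeOffsetFromLive; reducing the server-side hold-back (EXT-X-SERVER-CONTROL, segment durations) may be the only real lever.

import AVFoundation

let item = AVPlayerItem(url: liveURL)   // liveURL: your live HLS stream
item.automaticallyPreservesTimeOffsetFromLive = false
// Request a 2-second distance from the live edge; may be clamped upward.
item.configuredTimeOffsetFromLive = CMTime(seconds: 2, preferredTimescale: 600)
let player = AVPlayer(playerItem: item)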
Feasibility and Privacy Concerns for App Monitoring and Blocking 3rd Party Apps on iOS
Hello, I am interested in developing an iOS app that can monitor and potentially block third-party applications while a specific app is running. I would like to inquire about the feasibility of implementing such functionality within the iOS ecosystem and if there are any privacy terms or restrictions that I need to be aware of to ensure compliance with Apple's policies. Thank you for your guidance.
Replies: 0 · Boosts: 0 · Views: 446 · Sep ’23
Does HLS support MPEG2Video encoded video?
Hi, I have HLS content, i.e., an .m3u8 manifest file, but the segments are encoded with MPEG-2 video. Is such an encoding supported by HLS, or is only H.264/AVC or HEVC/H.265 supported?

Stream #0:0[0x281]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, bt470bg, top first), 720x576 [SAR 16:15 DAR 4:3], 3125 kb/s, 25 fps, 25 tbr, 90k tbn
Stream #0:1[0x201]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 128 kb/s

Thanks.
Replies: 0 · Boosts: 0 · Views: 381 · Sep ’23
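
For context: Apple's HLS authoring guidelines list H.264/AVC and HEVC/H.265 as the supported video codecs, so MPEG-2 video segments are unlikely to play in AVFoundation-based players even if a generic manifest parser accepts them. If transcoding is an option, a minimal ffmpeg sketch (file names are placeholders):

% ffmpeg -i input.ts \
    -c:v libx264 -preset veryfast \
    -c:a aac \
    -f hls -hls_time 6 -hls_playlist_type vod \
    output.m3u8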
How can I create a clear-key HLS stream using Shaka Packager?
Hello, I'm trying to create a working clear-key HLS stream using Shaka Packager. Shaka Packager supports only SAMPLE-AES, and when implemented, it looks like the resulting stream is not playable in the Safari browser. This is how the example M3U8 file starts:

#EXTM3U
#EXT-X-VERSION:6
## Generated with https://github.com/google/shaka-packager version v2.6.1-634af65-release
#EXT-X-TARGETDURATION:13
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="audio_und_2c_128k_aac_init.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="https://www.httpstest.com:771/key.key",IV=0x00000000000000000000000000000000,KEYFORMAT="identity"
#EXTINF:10.008,
audio_und_2c_128k_aac_1.mp4
#EXTINF:10.008,
audio_und_2c_128k_aac_2.mp4
#EXTINF:9.985,
audio_und_2c_128k_aac_3.mp4
#EXTINF:10.008,
audio_und_2c_128k_aac_4.mp4
#EXTINF:10.008,
audio_und_2c_128k_aac_5.mp4
#EXTINF:9.985,
audio_und_2c_128k_aac_6.mp4
#EXTINF:0.093,
audio_und_2c_128k_aac_7.mp4
#EXT-X-ENDLIST

I'm looking for:

A. a working example of a SAMPLE-AES clear-key encrypted HLS stream (so I can learn from it how it should be defined for iOS), and
B. help on how to create a working clear-key HLS stream for iOS/macOS (Safari).
Replies: 0 · Boosts: 0 · Views: 610 · Sep ’23
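
For part B, my understanding (worth verifying against the Shaka Packager documentation) is that raw-key SAMPLE-AES output is driven by the flags below. Whether Safari will play KEYFORMAT="identity" SAMPLE-AES fMP4 at all is exactly the open question here, so treat this as a starting point, not a confirmed recipe. Key values are placeholders:

packager \
  'in=audio.mp4,stream=audio,init_segment=audio_init.mp4,segment_template=audio_$Number$.m4s,playlist_name=audio.m3u8' \
  --enable_raw_key_encryption \
  --keys label=:key_id=00000000000000000000000000000000:key=000102030405060708090a0b0c0d0e0f \
  --protection_scheme cbcs \
  --hls_master_playlist_output master.m3u8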
tvOS: AVPlayerViewController.transportBarCustomMenuItems not working
Hi guys,

Setting AVPlayerViewController.transportBarCustomMenuItems is not working on tvOS; I still see the two icons for Audio and Subtitles.

let menuItemAudioAndSubtitles = UIMenu(
    image: UIImage(systemName: "heart")
)
playerViewController.transportBarCustomMenuItems = [menuItemAudioAndSubtitles]

The WWDC 2021 video is insufficient to make this work: https://developer.apple.com/videos/play/wwdc2021/10191/ The video doesn't say what exactly I need to do.

Do I need to disable subtitle options?

viewController.allowedSubtitleOptionLanguages = []

This didn't work, and I still see the default icon loaded by the player. Do I need to create a subclass of AVPlayerViewController? I just want to replace those two default icons with one icon as a test, but I was unsuccessful after many hours of work. Is it mandatory to define child menu items on the main item? Or do I perhaps need to define a UIAction? The documentation and video are insufficient in providing guidance on how to do that. I did something like this before, but that was more than three years ago, and audio and subtitles were showing at the top of the player screen as tabs, if I remember correctly.

Is transportBarCustomMenuItems perhaps deprecated? Is it possible that when loading an AVPlayerItem, if the player detects audio and subtitles in the stream, it automatically resets the AVPlayerViewController menu? How do I suppress this behavior? I'm currently loading AVPlayerViewController into a SwiftUI interface. Is that perhaps the problem? Should I write a SwiftUI player overlay from scratch?

Thanks, Robert
Replies: 1 · Boosts: 0 · Views: 706 · Sep ’23
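
One thing worth trying (an assumption on my part, not something the documentation states): give the custom menu at least one child UIAction, since a UIMenu with no children may simply be dropped from the transport bar. A minimal sketch:

import UIKit
import AVKit

let heartAction = UIAction(title: "Favorite",
                           image: UIImage(systemName: "heart")) { _ in
    // handle selection
}
let customMenu = UIMenu(title: "Extras",
                        image: UIImage(systemName: "heart"),
                        children: [heartAction])
playerViewController.transportBarCustomMenuItems = [customMenu]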
FairPlay Production Deployment Package Requirements
On the FairPlay main page it states: "The FPS Deployment Package is not available to third parties acting on behalf of licensed content owners." Could someone please provide more details about this statement? If I were to build a subscription-based platform where users can upload and monetize their videos, and used FairPlay to prevent unauthorized access to the creators' content, what would Apple's response be?
Replies: 0 · Boosts: 0 · Views: 399 · Sep ’23
Why does AVPlayer stop reporting seekableTimeRanges when currentTime falls out of a live DVR window?
For live content, if the user pauses playback and their current playhead falls out of the valid DVR window, AVPlayer stops reporting seekableTimeRanges. Why does this happen? Is there a workaround?

For example: suppose we were streaming content with a 5-minute DVR window. The user pauses playback for 6 minutes. Their current position is now outside of the valid seekable time range, and AVPlayer stops reporting seekableTimeRanges altogether.

This is problematic for two reasons:

We have observed that AVPlayer generally becomes unresponsive when this happens, i.e. any seek action causes the player to freeze up.
Without knowing the seekable range, we don't know how to return the user to the live edge when they resume playback.

Seems to be the same issue described in this thread: https://developer.apple.com/forums/thread/45850
Replies: 2 · Boosts: 0 · Views: 533 · Sep ’23
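
For the second problem, one workaround sketch (my own idea, not a documented fix): cache the live edge while it is still being reported, and seek to the best available edge on resume.

import AVFoundation

var lastKnownLiveEdge = CMTime.invalid

// Call periodically while playing (e.g. from a periodic time observer).
func cacheLiveEdge(of item: AVPlayerItem) {
    if let range = item.seekableTimeRanges.last?.timeRangeValue {
        lastKnownLiveEdge = range.end
    }
}

// On resume, prefer the currently reported edge; fall back to the cache.
func resumeAtLiveEdge(_ player: AVPlayer) {
    let current = player.currentItem?.seekableTimeRanges.last?.timeRangeValue.end
    let edge = current ?? lastKnownLiveEdge
    if edge.isValid {
        player.seek(to: edge) { _ in player.play() }
    } else {
        player.play()   // no usable range; let the player try to re-sync
    }
}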