Streaming


Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.


Post · Replies · Boosts · Views · Activity

mediastreamvalidator always signals 'unsupported audio track codec' with SAMPLE-AES encrypted streams
Hi,

When I run mediastreamvalidator on HLS streams (produced by mediafilesegmenter) with SAMPLE-AES applied, the latest version always signals:

    Unsupported audio track codec: Unknown

as a 'MUST FIX' issue. This result occurs even when I run the validator against the FairPlay Streaming example content supplied with the developer SDK package.

This seems to have been introduced in HTTPLiveStreamingTools Version 1.2 (170524). It does not happen with tools Version 1.2 (160525).

Is this a bug in the tool, or a new requirement perhaps? Hopefully someone can help me out with this...
2 replies · 0 boosts · 1.5k views · Jun ’17

HLS media playlist Captions and the DEFAULT attribute
When describing closed-caption renditions through HLS master playlists, should a properly formed playlist with closed captions contain one closed-caption rendition with the DEFAULT=YES attribute? I searched the HLS RFC, and section 4.3.4.1 on EXT-X-MEDIA mentions that no more than one rendition should contain DEFAULT=YES. I was hoping to find a recommendation on whether, with one or more EXT-X-MEDIA renditions in the same group, it is required that one of them contain DEFAULT=YES.

The media player I am using does not display closed captions if none of the closed-caption renditions contains the DEFAULT=YES attribute, and I am wondering if that is an issue with the player or an issue with a malformed HLS playlist. Note the lack of a DEFAULT=YES attribute in the example playlist below. Should this still be considered a valid playlist?

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-INDEPENDENT-SEGMENTS
    #EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="CC",LANGUAGE="eng",NAME="English",INSTREAM-ID="CC1"
    #EXT-X-STREAM-INF:BANDWIDTH=2103200,AVERAGE-BANDWIDTH=2305600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=960x540,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
    https://the.link.to.my.stream
    #EXT-X-STREAM-INF:BANDWIDTH=804760,AVERAGE-BANDWIDTH=875600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
    https://the.link.to.my.stream
    #EXT-X-STREAM-INF:BANDWIDTH=1304160,AVERAGE-BANDWIDTH=1425600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=768x432,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
    https://the.link.to.my.stream
    #EXT-X-STREAM-INF:BANDWIDTH=505120,AVERAGE-BANDWIDTH=545600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=480x270,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
    https://the.link.to.my.stream
    #EXT-X-STREAM-INF:BANDWIDTH=3102000,AVERAGE-BANDWIDTH=3405600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
    https://the.link.to.my.stream
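For anyone hitting the same player behavior, one client-side mitigation (not an answer to the spec question) is to select the caption rendition explicitly instead of relying on DEFAULT=YES. A minimal sketch, assuming playback through AVPlayerItem; the English-first preference is an assumption:

    import AVFoundation

    // Minimal sketch: explicitly select a closed-caption rendition when the
    // playlist marks none as DEFAULT=YES. Assumes the selection group has
    // already been loaded (the synchronous lookup below may block otherwise).
    func selectCaptions(for playerItem: AVPlayerItem) {
        guard let group = playerItem.asset.mediaSelectionGroup(forMediaCharacteristic: .legible) else {
            return
        }
        // Prefer an English rendition (assumption); otherwise take the first option.
        let english = AVMediaSelectionGroup.mediaSelectionOptions(from: group.options,
                                                                  with: Locale(identifier: "en")).first
        if let option = english ?? group.options.first {
            playerItem.select(option, in: group)
        }
    }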
1 reply · 0 boosts · 3.9k views · Jan ’20

-16832/CoreMediaErrorDomain Warning: restarting from end of live playlist
Hello guys,

Recently I have been getting this kind of issue on HLS live streaming content. The error detail is as below:

    error -16832/CoreMediaErrorDomain Warning: restarting 12.494080s from end of live playlist; target duration 9s - stall danger for https://C4--147-21-8-0-24-8395.dfwlive-v1-c4p1-sponsored.dfw.vcdn.att-idns.net/Content/HLS.abre/Live/channel(CMTHD-8395.dfw.1080)/uc9zjusg758wyw3zmumhqyck92mpguti-ks7qcjbcm2aaaaaa-20191125T201851-197474271-114.vtt

Does anyone know what this is? It seems to freeze the picture or degrade the streaming quality, and sometimes playback ends up restarting.

Thanks & regards.
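A diagnostic sketch that may help correlate this warning with the offending rendition: listen for new error-log entries on the AVPlayerItem and print the CoreMediaErrorDomain details. This assumes AVPlayer-based playback; the logging itself is illustrative.

    import AVFoundation

    // Diagnostic sketch: print each new error-log entry so the CoreMediaErrorDomain
    // warning can be correlated with the URI that triggered it.
    func observeErrorLog(of playerItem: AVPlayerItem) -> NSObjectProtocol {
        return NotificationCenter.default.addObserver(forName: .AVPlayerItemNewErrorLogEntry,
                                                      object: playerItem,
                                                      queue: .main) { _ in
            guard let event = playerItem.errorLog()?.events.last else { return }
            print("domain:", event.errorDomain,
                  "code:", event.errorStatusCode,
                  "comment:", event.errorComment ?? "-",
                  "uri:", event.uri ?? "-")
        }
    }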
2 replies · 2 boosts · 1.7k views · Jan ’20

Putting the TS into tsrecompressor
I notice that precious few HTTP Live Streaming questions have gotten responses, let alone answers. But I'll be optimistic!

I'd like to try streaming my own live video through the LL-HLS tools (currently prerelease 73), in multiple bitrates.

I succeeded in following the directions: I use tsrecompressor to generate "bip-bop" video and pass three compressed variations of that into three instances of mediastreamsegmenter, then out through three instances of ll-hls-origin-example.go. It works as promised, end-to-end. (Brief aside, for any who may stumble after me: it took me too long to realize that I should improve my knowledge of IP multicasting and use the prescribed 224.0.0.50 address. I got nowhere trying to simply route familiar UDP unicast between my processes.)

So far, so good. Now I want to supply my own video from an external feed, not the generated "bip-bop" or any local capture devices.

    % tsrecompressor --help
    tsrecompressor: unrecognized option `--help'
    Read input MPEG-2 TS, recompress and write to output.
    Usage: tsrecompressor [options]
    where options are:
    -i | --input-file= : input file path (default is stdin)
    ... etc.

That sounds fantastic — I'd love to feed tsrecompressor through stdin! But in what format? It doesn't say, and my first few dozen guesses came up cold.

The man page for mediastreamsegmenter appears to point the way:

    The mediastreamsegmenter only accepts MPEG-2 Transport Streams as defined in ISO/IEC 14496-1 as input. The transport stream must contain H.264 (MPEG-4, part 10) video and AAC or MPEG audio. If AAC audio is used, it must have ADTS headers. H.264 video access units must use Access Unit Delimiter NALs, and must be in unique PES packets.

Of course, that's mediastreamsegmenter and not tsrecompressor. But it's a start. So this is my best guess at the appropriate ffmpeg output. (Recall that I want to eventually pass a live stream into ffmpeg; for now I'm starting with an m4v file.)

    % ffmpeg -re -i infile.m4v \
        -c:v h264_************ \
        -c:a aac_at \
        -f mpegts \
        - | tsrecompressor -h -a \
        -O 224.0.0.50:9121 \
        -L 224.0.0.50:9123 \
        -P 224.0.0.50:9125

This ends abruptly after 9 frames:

    av_interleaved_write_frame(): Broken pipe
    Error writing trailer of pipe:: Broken pipe

My best results are when I change from H.264 to H.265:

    % ffmpeg -re -i infile.m4v \
        -c:v hevc_************ \
        -c:a aac_at \
        -f mpegts \
        - | tsrecompressor -h -a \
        -O 224.0.0.50:9121 \
        -L 224.0.0.50:9123 \
        -P 224.0.0.50:9125

Now it doesn't break the pipe. It keeps counting along, frame after frame. The VTEncoderXPCService starts up, and sampling of the tsrecompressor process shows both producer and consumer threads for audio recompression.

But there's no output. There was output for the generated "bip-bop" video, but not for HEVC-TS via stdin. I'm not 100% certain yet, but I see no indication of any UDP output from tsrecompressor. The three mediastreamsegmenter processes sit idle.

Am I missing some tag, or something, in the input stream? Do I need to pay more attention to chunk sizes and frame offsets?

Thanks, all, for any insight or experience.
7 replies · 0 boosts · 1.8k views · Apr ’20

iOS 15 - HLS parallel download issue
Hi,

We are using AVPlayer for an FPL HLS stream. After migrating to iOS 15 (currently Beta 3), we have observed a strange behavior during segment download:

The player downloads the last video segment of the VOD HLS stream.
The player downloads all audio segments.
Playback starts playing from the end (last segment).
Playback needs to be restarted to start downloading all video segments.

Note that we do not have this behavior with older versions of iOS (14 and before).

m3u8 file

Thank you,
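A hypothetical workaround sketch, not a confirmed fix: once the item reports readyToPlay, explicitly seek back to the start of the VOD item before playing. The observer class and its wiring are illustrative assumptions.

    import AVFoundation

    // Hypothetical workaround sketch (not a confirmed fix): once the item is
    // ready to play, force playback back to the first segment instead of the end.
    final class StartFromZeroObserver {
        private var statusObservation: NSKeyValueObservation?

        func attach(to player: AVPlayer) {
            statusObservation = player.currentItem?.observe(\.status, options: [.new]) { item, _ in
                guard item.status == .readyToPlay else { return }
                item.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
                    player.play()
                }
            }
        }
    }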
3 replies · 0 boosts · 2.0k views · Jul ’21

How to insert timed metadata (id3) into live HLS files with Apple's mediastreamsegmenter and ffmpeg
I am trying to insert timed metadata (ID3) into a live HLS stream created with Apple's mediastreamsegmenter tool. I am getting the video from an ffmpeg stream; here is the command I run to test from an existing file:

    ffmpeg -re -i vid1.mp4 -vcodec libx264 -acodec aac -f mpegts - | mediastreamsegmenter -f /Users/username/Sites/video -s 10 -y test -m -M 4242 -l log.txt

To inject metadata, I run this command:

    id3taggenerator -text '{"x":"data dan","y":"36"}' -a localhost:4242

This setup creates the expected .ts files and I can play back the video/audio with no issues. However, the metadata I am attempting to insert does not work in the final file. I know the metadata is there in some form: when I file-compare a no-metadata version of the video to one I injected metadata into, I can see the ID3 tags within the binary data.

Bad File Analysis

When I analyze the generated files using ffmpeg:

    ffmpeg -i video1.ts

the output I get is:

    [mpegts @ 0x7fb00a008200] start time for stream 2 is not set in estimate_timings_from_pts
    [mpegts @ 0x7fb00a008200] stream 2 : no TS found at start of file, duration not set
    [mpegts @ 0x7fb00a008200] Could not find codec parameters for stream 2 (Audio: mp3, 0 channels): unspecified frame size
    Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
    Input #0, mpegts, from 'video1.ts':
      Duration: 00:00:10.02, start: 0.043444, bitrate: 1745 kb/s
      Program 1
        Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 848x464 [SAR 1:1 DAR 53:29], 30 fps, 30 tbr, 90k tbn, 60 tbc
        Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 130 kb/s
      No Program
        Stream #0:2[0x102]: Audio: mp3, 0 channels

Note how the third stream (Stream #0:2) is marked as mp3... this is incorrect! Also it says "No Program", instead of being in "Program 1".

When I analyze a properly encoded video file with inserted ID3 metadata that I created with Apple's mediafilesegmenter tool, the analysis shows a timed_id3 track, and this metadata track works properly in my web browser.

Good File Analysis

    ffmpeg -i video1.ts
    Input #0, mpegts, from 'video1.ts':
      Duration: 00:00:10.08, start: 19.984578, bitrate: 1175 kb/s
      Program 1
        Stream #0:0[0x101]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 848x464, 30 fps, 30 tbr, 90k tbn, 180k tbc
        Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 67 kb/s
        Stream #0:2[0x103]: Data: timed_id3 (ID3  / 0x20334449)

I must use mediastreamsegmenter because that is required for live streams. Does anyone know how I can get timed ID3 metadata into a live HLS stream properly?
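To check on the playback side whether the timed ID3 actually reaches AVPlayer (the bad analysis above suggests the ID3 stream is not part of Program 1), a minimal sketch using AVPlayerItemMetadataOutput; the logging is illustrative:

    import AVFoundation

    // Verification sketch: attach an AVPlayerItemMetadataOutput and log every
    // timed metadata group the player surfaces, to confirm whether the ID3
    // tags survive segmentation.
    final class ID3Logger: NSObject, AVPlayerItemMetadataOutputPushDelegate {
        private let output = AVPlayerItemMetadataOutput(identifiers: nil)

        func attach(to item: AVPlayerItem) {
            output.setDelegate(self, queue: .main)
            item.add(output)
        }

        func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                            didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                            from track: AVPlayerItemTrack?) {
            for group in groups {
                for item in group.items {
                    print("ID3 item:", item.identifier?.rawValue ?? "?",
                          item.stringValue ?? "<non-string value>")
                }
            }
        }
    }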
2 replies · 0 boosts · 2.2k views · Sep ’21

iOS 16 - Exception Thrown for processContentKeyResponse
Issue: I am supporting an iOS application that streams FairPlay DRM-protected content. On iOS 16 devices, I am seeing intermittent exceptions thrown when trying to process the CKC returned by the license server. The thrown exception is as follows:

    -[AVContentKeyRequest processContentKeyResponse:] AVContentKeySession's keySystem is not same as that of keyResponse

This issue does not occur on older devices (we support iOS 13, 14, 15). I am unable to find documentation about this error, so any insight is appreciated.

High-Level Code Overview

Use the AVContentKeyRequest to request an application certificate.
Use the returned certificate to call makeStreamingContentKeyRequestData.
Use the returned data to request the FairPlay license.
Use the returned CKC to generate an AVContentKeyResponse (i.e. AVContentKeyResponse(fairPlayStreamingKeyResponseData:_)).
Call processContentKeyResponse(_).
App crash/exception thrown when calling processContentKeyResponse.

I am seeing other issues related to DRM and iOS 16, but those are specific to downloaded and offline content, which does not match my use case.
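For context, a minimal sketch of the key-delivery flow the post describes; fetchApplicationCertificate and requestCKC are hypothetical stand-ins for the real server round-trips, and the session/delegate wiring is omitted:

    import AVFoundation

    // Sketch of the flow described above. fetchApplicationCertificate and
    // requestCKC are hypothetical stand-ins for the real server round-trips;
    // session.setDelegate(...) and addContentKeyRecipient(...) are omitted.
    final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
        let session = AVContentKeySession(keySystem: .fairPlayStreaming)

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            let certificate = fetchApplicationCertificate()            // hypothetical
            let contentID = Data("skd://example-content-id".utf8)      // illustrative identifier
            keyRequest.makeStreamingContentKeyRequestData(forApp: certificate,
                                                          contentIdentifier: contentID,
                                                          options: nil) { spc, error in
                guard let spc = spc, error == nil else { return }
                let ckc = requestCKC(spc: spc)                          // hypothetical license call
                // The response must belong to the same key system as the session,
                // which is what the exception in this post complains about.
                let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckc)
                keyRequest.processContentKeyResponse(response)
            }
        }
    }

    // Hypothetical helpers standing in for real network requests.
    func fetchApplicationCertificate() -> Data { Data() }
    func requestCKC(spc: Data) -> Data { Data() }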
3 replies · 0 boosts · 2.5k views · Nov ’22

CoreMediaErrorDomain: code -16012
Hi,

We are getting big spikes of errors on very few programs in some live channels:

    [-16012:CoreMediaErrorDomain] [Error Domain=CoreMediaErrorDomain Code=-16012 "(null)"]

When this error occurs, AVPlayer stops and users have to restart playback. This happens in some live programs. We have the same setup and use the same transcoders, etc., in all programs. Mostly we have a very low error rate in the player for live programs, but with this error the error rate can increase up to 80% of users, affecting pretty much all users on Apple devices.

Does anyone know what this error actually means? What is the context and what is the reason behind it? It seems like it may be related to subtitles, and it occurs only when subtitles are enabled. (The subtitles are not embedded in the stream; it is teletext.)

I tried to find information in Apple documentation and online, but unfortunately nothing could be found.
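One way to test the subtitle theory: temporarily deselect the legible (subtitle) rendition for a subset of sessions and see whether the -16012 spike disappears. A minimal sketch, assuming AVPlayer playback and a selection group that allows empty selection:

    import AVFoundation

    // Diagnostic sketch: deselect the legible (subtitle) rendition to test
    // whether the -16012 errors are tied to subtitles being enabled.
    func disableSubtitles(on playerItem: AVPlayerItem) {
        guard let group = playerItem.asset.mediaSelectionGroup(forMediaCharacteristic: .legible),
              group.allowsEmptySelection else { return }
        playerItem.select(nil, in: group)
    }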
1 reply · 0 boosts · 1.2k views · May ’23

How to intercept HLS Playlist chunk request for CDN token implementation
Hello,

We are using HLS for our streaming iOS and tvOS applications. We have DRM protection on our applications, but we want to add another secure layer, which is a CDN token. We want to send that CDN token in a header or in query parameters; either is applicable on our CDN side. The problem is at the client side: we want to send that token and refresh it at a given time.

We add the token data at the initial state with

    let asset = AVURLAsset(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headers])

and add an interceptor with asset.resourceLoader.setDelegate. It works seamlessly. We use AVAssetResourceLoaderDelegate, and we can intercept just the master playlist and the playlists via

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool

so we can refresh the CDN token only in playlists. The token data can be in query params or in a header; it does not matter. For example, assume this is our .m3u8 file for a given live video playlist:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXTINF:10.0
    https://chunk1?cdntoken=A.ts
    #EXTINF:10.0
    https://chunk2?cdntoken=A.ts
    #EXTINF:10.0
    https://chunk3?cdntoken=A.ts
    #EXTINF:10.0

It has three chunks with the CDN token in query params. If we give those chunks to AVPlayer, it plays them in order. When we change to a new CDN token in the query params, our chunk URLs change and our player stalls. This is because our CDN adds the new CDN token to each chunk's URL, which means the next playlist is going to look like this:

    #EXT-X-VERSION:3
    #EXTINF:10.0
    https://chunk4?cdntoken=B.ts
    #EXTINF:10.0
    https://chunk5?cdntoken=B.ts
    #EXTINF:10.0
    https://chunk6?cdntoken=B.ts
    #EXTINF:10.0

The CDN token is changed from A to B on the CDN side, and it sends the new playlist like that. That's why our player stalls. Is there any way to keep the player from stalling when the chunk URLs are edited?

When we change to a new CDN token in the header, it does not change the chunk URLs as in the first question, but AVPlayer does not allow us to intercept chunk URLs; that is, before calling the https://chunk1?cdntoken=A.ts URL, I want to intercept it and add the new CDN token data to the header. Is there any way to intercept chunk URLs the way we intercept playlists?

Thanks for answers in advance.
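A sketch of the common workaround for the first question: serve the playlists through a custom URL scheme so the resource loader delegate can rewrite segment URIs with the current token before AVPlayer parses them. The scheme name, token storage, and regex are illustrative assumptions, and segment requests themselves still bypass the delegate:

    import AVFoundation

    // Workaround sketch: load the playlists through a custom URL scheme so this
    // delegate can fetch them over HTTPS and rewrite the cdntoken query value
    // before AVPlayer parses the playlist.
    final class TokenizingLoader: NSObject, AVAssetResourceLoaderDelegate {
        var currentToken = "A"   // refreshed elsewhere in the app

        func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                            shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
            guard let customURL = loadingRequest.request.url,
                  customURL.scheme == "tokenized",                      // e.g. tokenized://host/playlist.m3u8
                  var components = URLComponents(url: customURL, resolvingAgainstBaseURL: false) else {
                return false
            }
            components.scheme = "https"
            guard let realURL = components.url else { return false }

            URLSession.shared.dataTask(with: realURL) { data, _, error in
                guard let data = data, error == nil,
                      var playlist = String(data: data, encoding: .utf8) else {
                    loadingRequest.finishLoading(with: error)
                    return
                }
                // Replace whatever token the CDN wrote with the one we currently hold.
                playlist = playlist.replacingOccurrences(of: "cdntoken=[^&\\s]+",
                                                         with: "cdntoken=\(self.currentToken)",
                                                         options: .regularExpression)
                loadingRequest.dataRequest?.respond(with: Data(playlist.utf8))
                loadingRequest.finishLoading()
            }.resume()
            return true
        }
    }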
3 replies · 0 boosts · 1.4k views · Jun ’23

Live Streaming
Hello,

I have an IP camera that is able to do RTSP streaming. This stream would go to my server, where it would be forwarded in RTSP format to an iPhone device. I would like to know if Apple allows incoming RTSP streaming (both audio and video); are there any specific guidelines for this?

Note: I will use Xcode version 14.3.1 and Swift version 5.8.

Thanks,
0 replies · 0 boosts · 369 views · Jun ’23

How to determine the MIME type of .key and .numbers files?
I need to correctly determine the MIME type of a Keynote file and a Numbers file without relying on the extension. For this purpose, I am hoping to use the magic-number concept and match it against the first four bytes of a file. Link for background on magic numbers: https://www.outsystems.com/forge/component-overview/10108/validate-file-extension#:~:text=A%20magic%20number%20is%20a,types%20which%20is%20hexadecimal%20format.

Despite extensive searching online, I am unable to find a unique magic number for these files; moreover, the first four bytes of the file match the magic number of a ZIP file, so I get the wrong type (zip). The first four bytes are 0x50, 0x4B, 0x03, 0x04.

Is there any reliable way to find the MIME type of these files without relying on the extension?
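A heuristic sketch only: since iWork documents are ZIP containers, the PK signature cannot distinguish them, so one option is to scan the archive bytes for iWork-style entry names after confirming the ZIP signature. The marker strings below are assumptions to verify against real Keynote and Numbers files, not documented identifiers:

    import Foundation

    // Heuristic sketch only: iWork documents are ZIP containers, so the PK
    // signature alone cannot distinguish them from a plain .zip. The entry-name
    // markers below are assumptions to verify against real files.
    func guessIWorkMIMEType(of url: URL) throws -> String? {
        let data = try Data(contentsOf: url)
        let zipSignature: [UInt8] = [0x50, 0x4B, 0x03, 0x04]
        guard data.count >= 4, Array(data.prefix(4)) == zipSignature else { return nil }

        func contains(_ marker: String) -> Bool {
            data.range(of: Data(marker.utf8)) != nil
        }

        if contains("Index/Slide") || contains("Index/MasterSlide") {        // assumed Keynote markers
            return "application/vnd.apple.keynote"
        }
        if contains("Index/CalculationEngine") || contains("Index/Tables") { // assumed Numbers markers
            return "application/vnd.apple.numbers"
        }
        return "application/zip"   // PK signature, but no recognizable iWork markers
    }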
0 replies · 0 boosts · 564 views · Jun ’23

Query regarding "Different target durations detected" error while validating the VOD playlist with media stream validator
When I validate a VOD playlist with mediastreamvalidator, I am getting the "Different target durations detected" error for the trk vs. sub and var vs. sub playlists:

    Error: Different target durations detected
    --> Detail: Target duration: 3599 vs Target duration: 6
    --> Source: subs/eus_2/playlist.m3u8
    --> Compare: trk969287/playlist.m3u8
    --> Detail: Target duration: 6 vs Target duration: 3599
    --> Source: var969287/playlist.m3u8
    --> Compare: subs/eus_2/playlist.m3u8

However, the Apple specification (https://developer.apple.com/documentation/http-live-streaming/hls-authoring-specification-for-apple-devices) states in section 5.7: "For VOD content, target durations of subtitle playlists MAY be longer than the other media."

Can you please clarify why we are getting the different-target-duration error for subtitle vs. other media playlists, even though the Apple spec says that the subtitle playlist may be longer than other media for VOD assets?
1 reply · 0 boosts · 796 views · Jul ’23

Video Streaming Protocols Supported
I am aware that HLS is required for most video streaming use cases (watching a movie, TV show, or YouTube video). This is a requirement for all apps. However, I am confused as to whether this would also apply to video chat/video conferencing apps. It would be inefficient to upload compressed video using rtmp/rtp, decompress it, and create HLS segments. Low latency requirements only make this worse. So, is it permissible to use other protocols for video conferencing use cases? Thanks
1 reply · 0 boosts · 540 views · Jul ’23

Offline FairPlay: persistent key data not loaded before downloading metadata
Hi,

We have an HLS+FPS stream. In our app implementation, through AVAssetResourceLoader, we get the SPC and generate the CKC correctly, and the stream plays successfully.

But when we try to download the stream for offline use, through session.processContentKeyRequest for the contentId, we get the SPC and generate the CKC the same as before, but we can't receive the persistent key data (keyRequest.persistableContentKey). PKD fails with:

    Error Domain=AVFoundationErrorDomain Code=-11835 "Cannot Open" UserInfo={NSLocalizedFailureReason=This content is not authorized., NSLocalizedDescription=Cannot Open, NSUnderlyingError=0x282da98f0 {Error Domain=NSOSStatusErrorDomain Code=-42668 "(null)"}}

The server and client are the same; the FPS stream plays, but the persistent key data is not loaded for offline use. We took all the offline parts of the source from the HLS Catalog example and implemented requestCertificate and requestContentKeyFromKeySecurityModule, and I don't understand why we do not receive the persistent key data.
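For comparison, a sketch of the persistable-key path along the lines of the HLS Catalog sample; fetchApplicationCertificate and requestCKC are hypothetical stand-ins, and note that the license server must issue a CKC that permits offline persistence (the -42668 error may indicate it does not):

    import AVFoundation

    // Sketch of the persistable-key path, in the spirit of the HLS Catalog
    // sample. fetchApplicationCertificate and requestCKC are hypothetical
    // stand-ins; the server-issued CKC must permit offline persistence.
    final class OfflineKeyDelegate: NSObject, AVContentKeySessionDelegate {

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            do {
                // Ask for a persistable variant instead of a streaming-only key.
                try keyRequest.respondByRequestingPersistableContentKeyRequestAndReturnError()
            } catch {
                print("Persistable key request refused:", error)
            }
        }

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVPersistableContentKeyRequest) {
            let certificate = fetchApplicationCertificate()           // hypothetical
            let contentID = Data("skd://example-content-id".utf8)     // illustrative
            keyRequest.makeStreamingContentKeyRequestData(forApp: certificate,
                                                          contentIdentifier: contentID,
                                                          options: nil) { spc, error in
                guard let spc = spc, error == nil else { return }
                let ckc = requestCKC(spc: spc)                         // hypothetical license call
                do {
                    // This is the step that fails with -42668 when the CKC
                    // does not authorize persistence.
                    let persistableKey = try keyRequest.persistableContentKey(fromKeyVendorResponse: ckc,
                                                                              options: nil)
                    let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: persistableKey)
                    keyRequest.processContentKeyResponse(response)
                } catch {
                    print("Could not create persistable content key:", error)
                }
            }
        }
    }

    // Hypothetical helpers standing in for real network requests.
    func fetchApplicationCertificate() -> Data { Data() }
    func requestCKC(spc: Data) -> Data { Data() }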
1 reply · 0 boosts · 610 views · Jul ’23

HLS Stream Master Playlist Unavailable on iPhone
I am facing an issue with video content that I have converted to HLS playlist content (using ffmpeg), added to an S3 bucket, and shared through a CloudFront distribution.

My scenario is the following: I have a bucket called bucket-a, with a "folder" video-1 which contains the following files:

    output.m3u8
    output0.ts
    ...
    output15.ts
    audio/
    audio.aac
    image.jpg

All items in bucket-a are blocked from public access through S3. Content is only vended through a CloudFront distribution which has origin bucket-a. I am able to access https://.cloudfront.net/path/output.m3u8 on a desktop browser without fail, and no errors are thrown. But the file output.m3u8 and all .ts files are not available on iPhone mobile browsers.

The peculiar part is that this is not true for all playlist content in bucket-a. For example, I have a "folder" video-2 within bucket-a that has the same file structure as video-1 and is completely accessible through all mobile browsers.

Here is an example master playlist error: https://dbs3s11vyxuw0.cloudfront.net/bottle-promo/script_four/output.m3u8. Even more head-scratching is that I am able to access all the playlists that are within this playlist.

What I've tried:

Initially, I believed the issue was due to the way the video was transcoded, so I standardized the video transcoding.
Then I believed the issue was due to CloudFront permissions, though those seem to be fine.
I've validated my stream here: https://ott.dolby.com/OnDelKits_dev/StreamValidator/Start_Here.html

Not sure which way to turn.
1 reply · 0 boosts · 647 views · Aug ’23
How to minimize configuredTimeOffsetFromLive property for HLS in live situation?
Hi, I am using HLS playback for live broadcasting, with AVPlayerItem. In live playback, I found that seekableDuration always has some offset from the latest moment compared to the same playback on Chrome or Android. As far as I have dug into it, the difference approximately matches recommendedTimeOffsetFromLive (usually 6-9 seconds in my tests).

The problem is that I tried to minimize configuredTimeOffsetFromLive, but it does not have any effect. Even if I set it to 1-2 seconds, it is always the same as recommendedTimeOffsetFromLive. I tried changing automaticallyPreservesTimeOffsetFromLive as well, but nothing seems to work.

How do these properties work, and how can I make the time offset minimal?
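A sketch under stated assumptions of how these properties are meant to be combined; in practice configuredTimeOffsetFromLive appears to be clamped to at least recommendedTimeOffsetFromLive (which would match the behavior described above), and the explicit seek is an optional, stall-prone workaround:

    import AVFoundation

    // Sketch: opt in to offset preservation and request a smaller offset; the
    // requested value appears to be clamped to recommendedTimeOffsetFromLive.
    // The explicit seek toward the live edge is an optional, stall-prone workaround.
    func configureLiveOffset(on item: AVPlayerItem, targetOffsetSeconds: Double) {
        item.automaticallyPreservesTimeOffsetFromLive = true
        item.configuredTimeOffsetFromLive = CMTime(seconds: targetOffsetSeconds, preferredTimescale: 600)
        print("recommended offset:", item.recommendedTimeOffsetFromLive.seconds)

        if let range = item.seekableTimeRanges.last?.timeRangeValue {
            let nearLive = CMTimeSubtract(range.end, CMTime(seconds: targetOffsetSeconds, preferredTimescale: 600))
            item.seek(to: nearLive, toleranceBefore: .positiveInfinity, toleranceAfter: .zero, completionHandler: nil)
        }
    }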
0 replies · 0 boosts · 350 views · Aug ’23

Feasibility and Privacy Concerns for App Monitoring and Blocking 3rd Party Apps on iOS
Hello, I am interested in developing an iOS app that can monitor and potentially block third-party applications while a specific app is running. I would like to inquire about the feasibility of implementing such functionality within the iOS ecosystem and if there are any privacy terms or restrictions that I need to be aware of to ensure compliance with Apple's policies. Thank you for your guidance.
0 replies · 0 boosts · 416 views · Sep ’23