HTTP Live Streaming


Send audio and video over HTTP from an ordinary web server for playback on Mac, iOS, and tvOS devices using HTTP Live Streaming (HLS).

Posts under HTTP Live Streaming tag

81 Posts
mediastreamvalidator always signals 'Unsupported audio track codec' with SAMPLE-AES encrypted streams
Hi, when I run mediastreamvalidator on HLS streams (produced by mediafilesegmenter) with SAMPLE-AES applied, the latest version always signals:

Unsupported audio track codec: Unknown

as a 'MUST FIX' issue. This occurs even when I run the validator against the FairPlay Streaming example content supplied with the developer SDK package. It seems to have been introduced in HTTPLiveStreamingTools Version 1.2 (170524); it does not happen with tools Version 1.2 (160525). Is this a bug in the tool, or perhaps a new requirement? Hopefully someone can help me out with this.
2 replies · 0 boosts · 1.6k views · Aug ’23
HLS media playlist Captions and the DEFAULT attribute
When describing closed-caption renditions in an HLS master playlist, should a properly formed playlist contain exactly one closed-caption rendition with the DEFAULT=YES attribute? I searched the HLS RFC, and section 4.3.4.1 on EXT-X-MEDIA says that no more than one rendition in a group may contain DEFAULT=YES. I was hoping to find a recommendation on whether, with one or more EXT-X-MEDIA renditions in the same group, one of them is required to contain DEFAULT=YES.

The media player I am using does not display closed captions unless one of the closed-caption renditions contains DEFAULT=YES, and I am wondering whether that is an issue with the player or a malformed HLS playlist. Note the lack of a DEFAULT=YES attribute in the example playlist below. Should this still be considered a valid playlist?

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="CC",LANGUAGE="eng",NAME="English",INSTREAM-ID="CC1"
#EXT-X-STREAM-INF:BANDWIDTH=2103200,AVERAGE-BANDWIDTH=2305600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=960x540,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=804760,AVERAGE-BANDWIDTH=875600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=1304160,AVERAGE-BANDWIDTH=1425600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=768x432,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=505120,AVERAGE-BANDWIDTH=545600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=480x270,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
#EXT-X-STREAM-INF:BANDWIDTH=3102000,AVERAGE-BANDWIDTH=3405600,CODECS="avc1.640029,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=30.000,CLOSED-CAPTIONS="CC"
https://the.link.to.my.stream
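For reference, per RFC 8216 the DEFAULT attribute is optional and its absence implies DEFAULT=NO, so a playlist with no DEFAULT=YES rendition is still valid; a player that refuses to list the rendition in that case is being stricter than the spec requires. An explicitly tagged rendition line would look like this (illustrative only, reusing the group and language from the playlist above):

```
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="CC",LANGUAGE="eng",NAME="English",DEFAULT=YES,AUTOSELECT=YES,INSTREAM-ID="CC1"
```

Note that when AUTOSELECT is present alongside DEFAULT=YES, the spec requires AUTOSELECT=YES.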
1 reply · 0 boosts · 4.0k views · Sep ’23
Putting the TS into tsrecompressor
I notice that precious few HTTP Live Streaming questions have gotten responses, let alone answers. But I'll be optimistic!

I'd like to try streaming my own live video through the LL-HLS tools (currently prerelease 73), in multiple bitrates. I succeeded in following the directions: I use tsrecompressor to generate "bip-bop" video and pass three compressed variations of that into three instances of mediastreamsegmenter, then out through three instances of ll-hls-origin-example.go. It works as promised, end to end. (Brief aside, for any who may stumble after me: it took me too long to realize that I should improve my knowledge of IP multicasting and use the prescribed 224.0.0.50 address. I got nowhere trying to simply route familiar UDP unicast between my processes.)

So far, so good. Now I want to supply my own video from an external feed, not the generated "bip-bop" or any local capture devices.

% tsrecompressor --help
tsrecompressor: unrecognized option `--help'
Read input MPEG-2 TS, recompress and write to output.
Usage: tsrecompressor [options]
where options are:
    -i | --input-file= : input file path (default is stdin)
    ... etc.

That sounds fantastic: I'd love to feed tsrecompressor through stdin! But in what format? It doesn't say, and my first few dozen guesses came up cold. The man page for mediastreamsegmenter appears to point the way:

    The mediastreamsegmenter only accepts MPEG-2 Transport Streams as defined in ISO/IEC 14496-1 as input. The transport stream must contain H.264 (MPEG-4, part 10) video and AAC or MPEG audio. If AAC audio is used, it must have ADTS headers. H.264 video access units must use Access Unit Delimiter NALs, and must be in unique PES packets.

Of course, that's mediastreamsegmenter and not tsrecompressor. But it's a start. So this is my best guess at the appropriate ffmpeg output. (Recall that I eventually want to pass a live stream into ffmpeg; for now I'm starting with an m4v file.)

% ffmpeg -re -i infile.m4v \
    -c:v h264_************ \
    -c:a aac_at \
    -f mpegts \
    - | tsrecompressor -h -a \
    -O 224.0.0.50:9121 \
    -L 224.0.0.50:9123 \
    -P 224.0.0.50:9125

This ends abruptly after 9 frames:

av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe

My best results are when I change from H.264 to H.265:

% ffmpeg -re -i infile.m4v \
    -c:v hevc_************ \
    -c:a aac_at \
    -f mpegts \
    - | tsrecompressor -h -a \
    -O 224.0.0.50:9121 \
    -L 224.0.0.50:9123 \
    -P 224.0.0.50:9125

Now it doesn't break the pipe. It keeps counting along, frame after frame. The VTEncoderXPCService starts up, and sampling of the tsrecompressor process shows both producer and consumer threads for audio recompression. But there's no output. There was output for the generated "bip-bop" video, but not for HEVC TS via stdin. I'm not 100% certain yet, but I see no indication of any UDP output from tsrecompressor. The three mediastreamsegmenter processes sit idle.

Am I missing some tag, or something, in the input stream? Do I need to pay more attention to chunk sizes and frame offsets? Thanks, all, for any insight or experience.
7 replies · 0 boosts · 1.9k views · Aug ’23
How to cache an HLS video while playing it
Hi, I'm working on an app where a user can scroll through a feed of short videos (a bit like TikTok). I pre-fetch videos a couple of positions ahead of the user's scroll position, so each video can start playing as soon as the user scrolls to it. Currently, I pre-fetch just by initializing a few AVPlayers. However, I'd like a better caching system. I'm looking for the best way to get videos to start playing as quickly as possible, while minimizing re-downloading of videos if a user scrolls away from a video and back to it. Is there a way to cache the contents of an AVPlayer that has loaded an HLS video? Alternatively, I've explored using AVAssetDownloadTask to download the HLS videos. My issue is that I can't download the full video and then play it: I need the video to start playing as soon as the user scrolls to it, even if it's not done downloading. Is there a way to start an HLS download with an AVAssetDownloadTask, and then start playing the video while it continues downloading? Thank you!
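Play-while-downloading is supported when the AVPlayerItem is built from the *same* AVURLAsset instance that the download task is using; a minimal sketch (class and identifier names are illustrative, not a definitive implementation):

```swift
import AVFoundation

final class HLSPrefetcher: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!
    private var tasks: [URL: AVAssetDownloadTask] = [:]

    override init() {
        super.init()
        // AVAssetDownloadURLSession requires a background configuration.
        let config = URLSessionConfiguration.background(withIdentifier: "feed-prefetch")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
    }

    /// Kick off (or reuse) a download for the given stream URL.
    func prefetch(_ url: URL) {
        guard tasks[url] == nil else { return }
        let asset = AVURLAsset(url: url)
        guard let task = session.makeAssetDownloadTask(asset: asset,
                                                       assetTitle: "feed-item",
                                                       assetArtworkData: nil,
                                                       options: nil) else { return }
        tasks[url] = task
        task.resume()
    }

    /// Build a player item from the download task's own asset, so playback
    /// reads the partially downloaded media instead of re-fetching it.
    func playerItem(for url: URL) -> AVPlayerItem? {
        guard let task = tasks[url] else { return nil }
        return AVPlayerItem(asset: task.urlAsset)
    }

    func urlSession(_ session: URLSession, assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist `location` (a .movpkg bundle path) if offline reuse is wanted.
    }
}
```

Scroll-away-and-back then maps to asking the prefetcher for a fresh AVPlayerItem from the still-running task rather than creating a new AVURLAsset.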
10 replies · 0 boosts · 6.7k views · Oct ’23
Video Quality selection in HLS streams
Hello there, our team was asked to add the ability to manually select the video quality. I know that HLS is an adaptive stream and that, depending on network conditions, it chooses the best quality that fits the current situation. I tried some settings with preferredMaximumResolution and preferredPeakBitRate, but neither of them worked once the user was already watching the stream. I also tried replacing the currentPlayerItem with the new configuration, but this only allowed me to downgrade the quality of the video; when I wanted to set it to, for example, 4K, the player did not switch to that track even if I set very high values for both parameters mentioned above. My question is whether there is any method that would let me force a certain quality from the manifest file. I already have an extraction step that parses the manifest and provides all the available information, but I still couldn't figure out how to make the player use a specific stream with my desired quality from the available playlist.
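A caveat worth noting for this question: preferredPeakBitRate and preferredMaximumResolution are ceilings, not floors, so they can only prevent the player from going *above* a variant, never force it up to one. A sketch of how a "quality cap" is usually applied, using BANDWIDTH/RESOLUTION values parsed from the manifest (function name and parameters are illustrative):

```swift
import AVFoundation

/// Cap adaptive selection at (roughly) one rendition by feeding back the
/// BANDWIDTH and RESOLUTION attributes parsed from the master playlist.
/// Passing 0 / .zero removes the limit and restores full ABR.
func capVariant(bandwidth: Double, resolution: CGSize, on player: AVPlayer) {
    guard let item = player.currentItem else { return }
    item.preferredPeakBitRate = bandwidth          // bits per second; 0 = no limit
    item.preferredMaximumResolution = resolution   // .zero = no limit
}
```

Forcing an *upgrade* to 4K has no public API; the player still picks the highest variant it believes the network can sustain, subject to these caps.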
4 replies · 0 boosts · 6.8k views · Oct ’23
How to insert timed metadata (id3) into live HLS files with Apple's mediastreamsegmenter and ffmpeg
I am trying to insert timed metadata (ID3) into a live HLS stream created with Apple's mediastreamsegmenter tool. I am getting the video from an ffmpeg stream; here is the command I run to test from an existing file:

ffmpeg -re -i vid1.mp4 -vcodec libx264 -acodec aac -f mpegts - | mediastreamsegmenter -f /Users/username/Sites/video -s 10 -y test -m -M 4242 -l log.txt

To inject metadata, I run this command:

id3taggenerator -text '{"x":"data dan","y":"36"}' -a localhost:4242

This setup creates the expected .ts files, and I can play back the video/audio with no issues. However, the metadata I am attempting to insert does not work in the final file. I know the metadata is there in some form: when I file-compare a no-metadata version of the video to one I injected metadata into, I can see the ID3 tags within the binary data.

Bad file analysis. When I analyze the generated files using ffmpeg (ffmpeg -i video1.ts), the output I get is:

[mpegts @ 0x7fb00a008200] start time for stream 2 is not set in estimate_timings_from_pts
[mpegts @ 0x7fb00a008200] stream 2 : no TS found at start of file, duration not set
[mpegts @ 0x7fb00a008200] Could not find codec parameters for stream 2 (Audio: mp3, 0 channels): unspecified frame size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mpegts, from 'video1.ts':
  Duration: 00:00:10.02, start: 0.043444, bitrate: 1745 kb/s
  Program 1
    Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 848x464 [SAR 1:1 DAR 53:29], 30 fps, 30 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 130 kb/s
  No Program
    Stream #0:2[0x102]: Audio: mp3, 0 channels

Note how the third stream (Stream #0:2) is marked as mp3; this is incorrect! It is also listed under "No Program" instead of "Program 1".

When I analyze a properly encoded video file with inserted ID3 metadata that I created with Apple's mediafilesegmenter tool, the analysis shows a timed_id3 track, and this metadata track works properly in my web browser.

Good file analysis (ffmpeg -i video1.ts):

Input #0, mpegts, from 'video1.ts':
  Duration: 00:00:10.08, start: 19.984578, bitrate: 1175 kb/s
  Program 1
    Stream #0:0[0x101]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 848x464, 30 fps, 30 tbr, 90k tbn, 180k tbc
    Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 67 kb/s
    Stream #0:2[0x103]: Data: timed_id3 (ID3  / 0x20334449)

I must use mediastreamsegmenter because that is required for live streams. Does anyone know how I can get timed ID3 metadata into a live HLS stream properly?
2 replies · 0 boosts · 2.2k views · Sep ’23
Audio Interruption Issues with AVPlayer During Live Streaming via Amazon Kinesis
Hello, I've encountered a recurring issue when playing back live streams using AVPlayer in an iOS app. The video stream is delivered via Amazon Kinesis Video Streams (KVS) using HLS. The specific issue is that audio frequently gets interrupted during playback: the video continues to play just fine, but the audio stops. The issue seems to occur only on iOS devices, not on other platforms or players. When I check the console logs, I see a number of error messages that may be related:

2023-05-11 20:57:27.494719+0200 Development[53868:24121620] HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2023-05-11 20:57:27.534340+0200 Development[53868:24121620] [aqme] AQMEIO.cpp:199 timed out after 0.011s (6269 6269); suspension count=0 (IOSuspensions: )
2023-05-11 20:57:30.592067+0200 Development[53868:24122309] HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2023-05-11 20:57:30.592400+0200 Development[53868:24122309] HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)

I've attempted to troubleshoot this in various ways, including trying different iOS devices and networks. I've also tried VLC's player on iOS, which doesn't have the audio interruption issue, but it encounters other problems. I believe there might be a compatibility issue between AVPlayer and KVS. I've posted a similar issue on the Amazon KVS GitHub repo, but I am reaching out here to see if anyone has faced a similar issue with AVPlayer and has found a solution or can provide some guidance. Has anyone encountered this before, or does anyone have suggestions on how to address it? Any help would be greatly appreciated!
1 reply · 1 boost · 1.5k views · Oct ’23
CoreMediaErrorDomain : code : -16012
Hi, we are getting big spikes of errors on a few programs in some live channels:

[-16012:CoreMediaErrorDomain] [Error Domain=CoreMediaErrorDomain Code=-16012 "(null)"]

When this error occurs, AVPlayer stops and users have to restart playback. It happens in some live programs. We have the same setup and use the same transcoders etc. in all programs. We normally have a very low player error rate with live programs, but with this error the rate can increase to up to 80% of users, affecting pretty much all users on Apple devices. Does anyone know what this error actually means? What is the context, and what is the reason behind it? It seems it may be related to subtitles, and it occurs only when subtitles are enabled. (The subtitles are not embedded in the stream; it is teletext.) I tried to find it in Apple documents and online, but unfortunately nothing could be found.
1 reply · 0 boosts · 1.3k views · Oct ’23
How to intercept HLS Playlist chunk request for CDN token implementation
Hello, we are using HLS in our streaming iOS and tvOS applications. We have DRM protection, but we want to add another security layer: a CDN token. We want to send that token in a header or in query parameters; either is acceptable on our CDN side. The problem is on the client side: we want to send the token and refresh it at a given interval.

We add the token in the initial state with

let asset = AVURLAsset(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headers])

and add an interceptor with asset.resourceLoader.setDelegate. It works seamlessly. Using AVAssetResourceLoaderDelegate we can intercept just the master playlist and the media playlists via

func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool

and we can refresh the CDN token only at the playlist level; the token can be in query params or in a header, it does not matter. For example, assume this is our .m3u8 file for a given live video playlist:

#EXTM3U
#EXT-X-VERSION:3
#EXTINF:10.0
https://chunk1?cdntoken=A.ts
#EXTINF:10.0
https://chunk2?cdntoken=A.ts
#EXTINF:10.0
https://chunk3?cdntoken=A.ts

It has three chunks with the CDN token in the query params. If we give those chunks to AVPlayer, it plays them in order. When we switch to a new CDN token in the query params, the chunk URLs change and our player stalls, because the CDN adds the new token to the chunk URLs. The next playlist then looks like this:

#EXT-X-VERSION:3
#EXTINF:10.0
https://chunk4?cdntoken=B.ts
#EXTINF:10.0
https://chunk5?cdntoken=B.ts
#EXTINF:10.0
https://chunk6?cdntoken=B.ts

The token was changed from A to B on the CDN side, and it sends the new playlist like that; that is why our player stalls. First question: is there any way to keep the player from stalling when the chunk URLs change?

Second, when we put the new CDN token in a header instead, the chunk URLs do not change, but AVPlayer does not allow us to intercept chunk URLs; before the https://chunk1?cdntoken=A.ts URL is called, I want to intercept the request and add the new token to its header. Is there any way to intercept chunk URLs the way we intercept playlists? Thanks for any answers in advance.
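One workaround often used for this limitation: AVFoundation routes only non-HTTP(S) custom schemes through AVAssetResourceLoaderDelegate, so segment requests cannot be intercepted directly; instead, the delegate fetches the media playlist itself and rewrites each segment URI with the fresh token before returning it to the player. A sketch under those assumptions (the `cdni://` scheme, the token variable, and the string replacement are all illustrative):

```swift
import AVFoundation

// Playlists are loaded with a custom "cdni://" scheme so the resource
// loader sees them; segment URLs stay plain HTTPS but get the current
// token stamped in before the playlist is handed back to AVPlayer.
final class TokenLoader: NSObject, AVAssetResourceLoaderDelegate {
    var currentToken = "A"   // refreshed elsewhere on your schedule

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let customURL = loadingRequest.request.url,
              var comps = URLComponents(url: customURL, resolvingAgainstBaseURL: false)
        else { return false }
        comps.scheme = "https"                      // back to the real origin
        guard let realURL = comps.url else { return false }

        URLSession.shared.dataTask(with: realURL) { data, _, error in
            guard var playlist = data.flatMap({ String(data: $0, encoding: .utf8) }) else {
                loadingRequest.finishLoading(with: error)
                return
            }
            // Stamp the current token into every segment URI (illustrative).
            playlist = playlist.replacingOccurrences(of: "cdntoken=A",
                                                     with: "cdntoken=\(self.currentToken)")
            loadingRequest.dataRequest?.respond(with: Data(playlist.utf8))
            loadingRequest.finishLoading()
        }.resume()
        return true
    }
}
```

Because AVPlayer then sees segment URLs that already carry the fresh token, the header-interception question never arises for segments; the trade-off is that the app owns playlist fetching, including error handling and refresh timing.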
3 replies · 0 boosts · 1.4k views · Aug ’23
Video Streaming Protocols Supported
I am aware that HLS is required for most video streaming use cases (watching a movie, TV show, or YouTube video). This is a requirement for all apps. However, I am confused as to whether this would also apply to video chat/video conferencing apps. It would be inefficient to upload compressed video using rtmp/rtp, decompress it, and create HLS segments. Low latency requirements only make this worse. So, is it permissible to use other protocols for video conferencing use cases? Thanks
1 reply · 0 boosts · 573 views · Jul ’23
HLS video has noise when played on Safari
Problem description: this HLS video, https://lf3-vod-cdn-tos.douyinstatic.com/obj/vodsass/hls/main.m3u8, starts producing noise at 22 seconds when played directly in Safari on macOS 12.6.6, and the noise also occurs in Safari on iOS 16.5.1. But there is no noise when it is played via MSE on the Mac using a third-party open-source web player such as hls.js in Safari. Test tool: hls.js test demo, https://hlsjs.video-dev.org/demo/
1 reply · 0 boosts · 790 views · Aug ’23
WebRTC ICE candidates seem weird
Hi there, I'm currently building a web application using WebRTC. Even though all the SDP info and ICE candidates of caller and callee are transmitted correctly, the connection kept failing. Other devices work fine; it fails only on my iPhone 13. I tried connecting on the same network, and that works, so I think it's a problem with the ICE candidates. I read a similar post about avoiding this issue: when Safari's advanced option called "WebRTC platform UDP sockets" is disabled, it works. Is there a way I can connect without tuning Safari's options? Thanks. FYI, this is one of my iPhone's ICE candidates:

842163049 1 udp 1685921535 118.235.10.100 50750 typ srflx raddr 0.0.0.0 rport 50750 generation 0 ufrag 7e7f network-id 3 network-cost 900
0 replies · 1 boost · 456 views · Aug ’23
HLS Stream Master Playlist Unavailable on iPhone
I am facing an issue with video content that I converted to HLS playlists (using ffmpeg), added to an S3 bucket, and shared through a CloudFront distribution. My scenario is the following: I have a bucket called bucket-a, with a "folder" video-1 which contains the following files:

output.m3u8
output0.ts
...
output15.ts
audio/audio.aac
image.jpg

All items in bucket-a are blocked from public access through S3; content is vended only through a CloudFront distribution whose origin is bucket-a. I can access https://.cloudfront.net/path/output.m3u8 in a desktop browser without fail, and no errors are thrown. But output.m3u8 and all the .ts files are not available in iPhone mobile browsers. The peculiar part is that this is not true for all playlist content in bucket-a. For example, a "folder" video-2 within bucket-a has the same file structure as video-1 and is completely accessible through all mobile browsers. Here is an example master playlist that errors: https://dbs3s11vyxuw0.cloudfront.net/bottle-promo/script_four/output.m3u8. Even more head-scratching is that I am able to access all the playlists within this playlist.

What I've tried: initially I believed the issue was due to the way the video was transcoded, so I standardized the transcoding. Then I believed the issue was due to CloudFront permissions, though those seem fine. I've validated my stream here: https://ott.dolby.com/OnDelKits_dev/StreamValidator/Start_Here.html. Not sure which way to turn.
1 reply · 0 boosts · 709 views · Aug ’23
How to minimize configuredTimeOffsetFromLive property for HLS in live situation?
Hi, I am using HLS playback for live broadcasting with AVPlayerItem. During live playback, I found that seekableDuration always has some offset from the latest moment compared to the same playback on Chrome or Android. As far as I have dug into it, the difference approximately matches recommendedTimeOffsetFromLive (usually 6 to 9 seconds in my tests). The problem is that I tried to minimize configuredTimeOffsetFromLive, but it does not have any effect: even if I set it to 1 or 2 seconds, the offset is always the same as recommendedTimeOffsetFromLive. I tried changing automaticallyPreservesTimeOffsetFromLive as well, but nothing seems to work. How do these properties work, and how can I minimize the time offset?
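For context, a minimal sketch of how these properties are meant to be combined (`liveURL` is a placeholder for your live playlist URL):

```swift
import AVFoundation

let item = AVPlayerItem(url: liveURL)
// Ask the player to hold a fixed distance from the live edge across
// stalls, and request a 2-second distance. The behavior described above
// suggests the effective value is floored at recommendedTimeOffsetFromLive,
// which itself derives from the playlist's segment durations, so requesting
// less than that is ignored.
item.automaticallyPreservesTimeOffsetFromLive = true
item.configuredTimeOffsetFromLive = CMTime(seconds: 2, preferredTimescale: 1)
```

In practice, reducing the recommended offset usually requires changing the stream itself (shorter target durations, or adopting Low-Latency HLS with partial segments) rather than player-side configuration.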
0 replies · 0 boosts · 379 views · Aug ’23
Does HLS support MPEG2Video encoded video?
Hi, I have HLS content, i.e., an .m3u8 manifest file, but the segments are encoded with MPEG-2 video. Is such an encoding supported by HLS, or is only H.264/AVC or HEVC/H.265 supported?

Stream #0:0[0x281]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, bt470bg, top first), 720x576 [SAR 16:15 DAR 4:3], 3125 kb/s, 25 fps, 25 tbr, 90k tbn
Stream #0:1[0x201]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 128 kb/s

Thanks.
0 replies · 0 boosts · 381 views · Sep ’23
HLS live streaming does not work for EVENT-type playlists on Safari on iOS 16 or later
I'm working on a live streaming encoder, but I cannot get a live stream to start playing correctly if the stream type (#EXT-X-PLAYLIST-TYPE) is EVENT. When I try to play it, the stream starts from the beginning, not from the current position. On an iPhone 7 (iOS 15.7.8), live streams start correctly, but on an iPhone 8 Plus (iOS 16.6), they start from the beginning. (I tested on other iPhones with iOS 16 or later, and the result is the same.) I also tried adding an #EXT-X-START:TIME-OFFSET tag, but it didn't help. Is this behavior a bug, or do I have to add some tag to make it play?
0 replies · 0 boosts · 511 views · Sep ’23
How can I create a clear-key HLS stream using Shaka Packager?
Hello, I'm trying to create a working clear-key HLS stream using Shaka Packager. Shaka Packager supports only SAMPLE-AES, and when implemented, the resulting stream does not appear to be playable in the Safari browser. This is how the example M3U8 file starts:

#EXTM3U
#EXT-X-VERSION:6
## Generated with https://github.com/google/shaka-packager version v2.6.1-634af65-release
#EXT-X-TARGETDURATION:13
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="audio_und_2c_128k_aac_init.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="https://www.httpstest.com:771/key.key",IV=0x00000000000000000000000000000000,KEYFORMAT="identity"
#EXTINF:10.008,
audio_und_2c_128k_aac_1.mp4
#EXTINF:10.008,
audio_und_2c_128k_aac_2.mp4
#EXTINF:9.985,
audio_und_2c_128k_aac_3.mp4
#EXTINF:10.008,
audio_und_2c_128k_aac_4.mp4
#EXTINF:10.008,
audio_und_2c_128k_aac_5.mp4
#EXTINF:9.985,
audio_und_2c_128k_aac_6.mp4
#EXTINF:0.093,
audio_und_2c_128k_aac_7.mp4
#EXT-X-ENDLIST

I'm looking for: (a) a working example of a SAMPLE-AES clear-key encrypted HLS stream, so I can learn how it should be defined for iOS; and (b) help on how to create a working clear-key HLS stream for iOS/macOS (Safari).
0 replies · 0 boosts · 610 views · Sep ’23
tvOS: AVPlayerViewController.transportBarCustomMenuItems not working
Hi guys, setting AVPlayerViewController.transportBarCustomMenuItems is not working on tvOS; I still see the two default icons for Audio and Subtitles.

let menuItemAudioAndSubtitles = UIMenu(
    image: UIImage(systemName: "heart")
)
playerViewController.transportBarCustomMenuItems = [menuItemAudioAndSubtitles]

The WWDC 2021 video (https://developer.apple.com/videos/play/wwdc2021/10191/) is insufficient to make this work; it doesn't say what exactly I need to do. Do I need to disable the subtitle options?

viewController.allowedSubtitleOptionLanguages = []

This didn't work, and I still see the default icon loaded by the player. Do I need to create a subclass of AVPlayerViewController? I just want to replace those two default icons with one icon as a test, but I was unsuccessful after many hours of work. Is it mandatory to define child menu items on the main item, or do I perhaps need to define a UIAction? The documentation and video are insufficient in providing guidance on how to do that. I did something like this before, but that was more than three years ago, and audio and subtitles were showing at the top of the player screen as tabs, if I remember correctly. Is transportBarCustomMenuItems perhaps deprecated? Is it possible that when AVPlayerItem loads and detects audio and subtitles in the stream, it automatically resets the AVPlayerViewController menu? How do I suppress this behavior? I'm currently loading AVPlayerViewController into a SwiftUI interface; is that perhaps the problem? Should I write a SwiftUI player overlay from scratch? Thanks, Robert
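One thing worth trying for questions like this: a UIMenu with no children has nothing to present, so giving the menu concrete UIAction children is usually the first step. A sketch (assumes `playerViewController` is an AVPlayerViewController whose `player` is set; the speed menu is purely illustrative):

```swift
import AVKit
import UIKit

// Build a custom transport-bar menu with actual actions inside it.
let speedActions = [0.5, 1.0, 1.5, 2.0].map { rate in
    UIAction(title: "\(rate)x") { _ in
        playerViewController.player?.rate = Float(rate)
    }
}
let speedMenu = UIMenu(title: "Speed",
                       image: UIImage(systemName: "speedometer"),
                       options: [.singleSelection],
                       children: speedActions)
playerViewController.transportBarCustomMenuItems = [speedMenu]
```

Note this API adds items alongside the system Audio/Subtitles controls; it is not documented as a way to replace or remove them.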
1 reply · 0 boosts · 707 views · Sep ’23
Why does AVPlayer stop reporting seekableTimeRanges when currentTime falls out of a live DVR window?
For live content, if the user pauses playback and their current playhead falls out of the valid DVR window, AVPlayer stops reporting seekableTimeRanges. Why does this happen, and is there a workaround? For example:

Suppose we are streaming content with a 5-minute DVR window.
The user pauses playback for 6 minutes.
Their current position is now outside the valid seekable time range.
AVPlayer stops reporting seekableTimeRanges altogether.

This is problematic for two reasons. First, we have observed that AVPlayer generally becomes unresponsive when this happens, i.e., any seek action causes the player to freeze up. Second, without knowing the seekable range, we don't know how to return the user to the live edge when they resume playback. This seems to be the same issue described in this thread: https://developer.apple.com/forums/thread/45850
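A client-side mitigation for the second problem is to remember the last non-empty seekable range so the app can still aim for the live edge after the player stops reporting ranges. A sketch (class and property names are illustrative, and this does not address the freeze itself):

```swift
import AVFoundation

// Track the end of the last known seekable range so the app can return
// the user toward the live edge even after seekableTimeRanges goes empty.
final class LiveEdgeTracker {
    private(set) var lastKnownLiveEdge: CMTime = .invalid
    private var observation: NSKeyValueObservation?

    func track(_ item: AVPlayerItem) {
        observation = item.observe(\.seekableTimeRanges, options: [.new]) { [weak self] item, _ in
            // Only update while the player is still reporting ranges.
            if let range = item.seekableTimeRanges.last?.timeRangeValue {
                self?.lastKnownLiveEdge = CMTimeRangeGetEnd(range)
            }
        }
    }

    func resumeNearLiveEdge(with player: AVPlayer) {
        guard lastKnownLiveEdge.isValid else { return }
        player.seek(to: lastKnownLiveEdge)
        player.play()
    }
}
```

The stored edge will lag the true live edge by however long the user stayed paused, so a fuller workaround typically tears down and rebuilds the AVPlayerItem when resuming from outside the DVR window.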
2 replies · 0 boosts · 533 views · Sep ’23