Hi,
I'm currently looking for a way to render WebVTT subtitles while using AVSampleBufferDisplayLayer. A little background on the issue:
In order to comply with some client rules (for play/pause/seek) and to support Picture in Picture, I had to abandon AVPlayerLayer in favor of AVSampleBufferDisplayLayer. This cost me native subtitle support, but it let me intercept the play/pause/seek calls and enforce the required rules. It also let me use AVPictureInPictureController with a content source and a delegate (the latter is what allowed me to hook into the PiP calls).
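For reference, the content-source setup I'm describing looks roughly like this (a sketch; the view class and the `delegate` parameter are placeholders for my own objects):

```swift
import AVKit

// Sketch: Picture in Picture driven by an AVSampleBufferDisplayLayer
// via AVPictureInPictureController.ContentSource (iOS 15+).
final class PlayerView: UIView {
    let displayLayer = AVSampleBufferDisplayLayer()
    var pipController: AVPictureInPictureController?

    func setUpPiP(delegate: AVPictureInPictureSampleBufferPlaybackDelegate) {
        layer.addSublayer(displayLayer)
        guard AVPictureInPictureController.isPictureInPictureSupported() else { return }
        let source = AVPictureInPictureController.ContentSource(
            sampleBufferDisplayLayer: displayLayer,
            playbackDelegate: delegate)  // PiP routes its play/pause/seek calls here
        pipController = AVPictureInPictureController(contentSource: source)
    }
}
```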
But with this change I lost the subtitles, so my first thought was to render them myself. Adding an AVPlayerItemLegibleOutput to the AVPlayerItem gave me access to the subtitle cues, only to find that the attributed strings are annotated with Core Media CMTextMarkup attributes, which a CATextLayer doesn't seem to render automatically. I thought of converting the Core Media attributes to "normal" NSAttributedString attributes, but then I would also need to handle laying out the subtitles correctly. Certainly one way to do it, but I'm not sure it's easier, and I couldn't find anything in the Core Media documentation that helped either.
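The legible-output wiring I'm using looks roughly like this (a sketch; the class name and the empty handler body are mine, not from a working implementation):

```swift
import AVFoundation

// Sketch: receiving subtitle cues through AVPlayerItemLegibleOutput.
final class SubtitleReceiver: NSObject, AVPlayerItemLegibleOutputPushDelegate {
    let output = AVPlayerItemLegibleOutput()

    func attach(to item: AVPlayerItem) {
        output.setDelegate(self, queue: .main)
        item.add(output)
    }

    func legibleOutput(_ output: AVPlayerItemLegibleOutput,
                       didOutputAttributedStrings strings: [NSAttributedString],
                       nativeSampleBuffers nativeSamples: [Any],
                       forItemTime itemTime: CMTime) {
        // Each attributed string carries CMTextMarkup attribute keys
        // (e.g. kCMTextMarkupAttribute_BoldStyle), which CATextLayer
        // won't interpret; they'd need converting to NSAttributedString keys.
    }
}
```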
Then, while digging around AVPlayerItemOutput, I noticed the suppressesPlayerRendering property and tried using AVPlayerLayer and AVSampleBufferDisplayLayer together.
The first would render the subtitles while the other handled the video. I made a sample, and it works on the Simulator, but when running on a device I get two layers of video playing, and the suppressesPlayerRendering flag doesn't seem to do anything.
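The dual-layer attempt boils down to something like this (a sketch; `videoURL` is a placeholder):

```swift
import AVFoundation

// Sketch: suppress AVPlayer's own video rendering via an output, expecting
// AVPlayerLayer to show only subtitles while an AVSampleBufferDisplayLayer
// (fed separately) shows the video frames.
let item = AVPlayerItem(url: videoURL)  // `videoURL` is a placeholder
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: nil)
videoOutput.suppressesPlayerRendering = true  // honored on Simulator, apparently ignored on device
item.add(videoOutput)
```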
How can I tackle this problem?