Integrate video and other forms of moving visual media into your apps.

Posts under Video tag

82 Posts

Why can't I respond with video data on iOS 15 with WKURLSchemeHandler?
This is my H5 code:

<video id="myVideo" src="xxxapp://***.***.xx/***/***.mp4" style="object-fit:cover;opacity:1;width:100%;height:100%;display:block;position:absolute;" type="video/mp4"></video>

I want to load a large local video, so I use WKURLSchemeHandler:

- (void)webView:(WKWebView *)webView startURLSchemeTask:(id<WKURLSchemeTask>)urlSchemeTask {
    NSURLRequest *request = [urlSchemeTask request];
    NSURL *url = request.URL;
    NSString *urlString = url.absoluteString;
    NSString *videoPath = [[NSBundle mainBundle] pathForResource:@"***" ofType:@"mp4"];
    NSData *videoData = [NSData dataWithContentsOfFile:videoPath options:0 error:nil];
    NSURLResponse *response = [[NSURLResponse alloc] initWithURL:url MIMEType:@"video/mp4" expectedContentLength:videoData.length textEncodingName:nil];
    [urlSchemeTask didReceiveResponse:response];
    [urlSchemeTask didReceiveData:videoData];
    [urlSchemeTask didFinish];
}

But it doesn't work: the data is not nil, but the video does not play. I would greatly appreciate it if someone could help me find a solution!! P.S.: One approach can make it work, but we cannot use it for certain reasons.
0 replies · 0 boosts · 421 views · Jan ’24
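For the WKURLSchemeHandler question above, one direction worth checking (an assumption about this setup, not a confirmed fix): WebKit's media loader requests video in byte ranges, so the handler may need to answer the Range header with an HTTP 206 partial response instead of returning the whole file behind a plain NSURLResponse. A minimal Swift sketch of that idea, with illustrative class and resource names:

import WebKit

// Hypothetical sketch: serving byte ranges of a bundled MP4 through a custom scheme.
// "VideoSchemeHandler" and the resource name "video" are illustrative, not from the post.
final class VideoSchemeHandler: NSObject, WKURLSchemeHandler {
    func webView(_ webView: WKWebView, start urlSchemeTask: WKURLSchemeTask) {
        guard let url = urlSchemeTask.request.url,
              let path = Bundle.main.path(forResource: "video", ofType: "mp4"),
              let data = try? Data(contentsOf: URL(fileURLWithPath: path)) else {
            urlSchemeTask.didFailWithError(URLError(.fileDoesNotExist))
            return
        }

        // WebKit's media loader asks for byte ranges, e.g. "bytes=0-1".
        var range = 0..<data.count
        var status = 200
        if let rangeHeader = urlSchemeTask.request.value(forHTTPHeaderField: "Range"),
           rangeHeader.hasPrefix("bytes=") {
            let parts = rangeHeader.dropFirst("bytes=".count).split(separator: "-", omittingEmptySubsequences: false)
            let start = Int(parts[0]) ?? 0
            let end = parts.count > 1 ? (Int(parts[1]) ?? data.count - 1) : data.count - 1
            range = start..<min(end + 1, data.count)
            status = 206
        }

        var headers = [
            "Content-Type": "video/mp4",
            "Content-Length": "\(range.count)",
            "Accept-Ranges": "bytes",
        ]
        if status == 206 {
            headers["Content-Range"] = "bytes \(range.lowerBound)-\(range.upperBound - 1)/\(data.count)"
        }

        // Force-unwrap is acceptable in a sketch; a real handler would guard this.
        let response = HTTPURLResponse(url: url, statusCode: status, httpVersion: "HTTP/1.1", headerFields: headers)!
        urlSchemeTask.didReceive(response)
        urlSchemeTask.didReceive(data.subdata(in: range))
        urlSchemeTask.didFinish()
    }

    func webView(_ webView: WKWebView, stop urlSchemeTask: WKURLSchemeTask) {}
}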
Capabilities of Sensitive Content Analysis and iOS 17?
Hello. I have three questions about the Sensitive Content Analysis (SCA) framework:
1. SCA seems to be asynchronous. Is there a limit to how much a single app can send through it at a time?
2. For video analysis, can the video be broken into smaller chunks, and then all chunks be hit concurrently?
3. Can a video stream be sampled as it's being streamed? e.g. Maybe it samples one frame every 3 seconds and scans those?
Thanks.
0 replies · 0 boosts · 416 views · Jan ’24
CoreMediaErrorDomain error -12865
Hello, can anybody help me with this? I am downloading a video to the file system, and when I give that URL to the player it gives me this error. It only comes up for m3u8; other formats like mp4 work fine locally. Please help!

{"error": {"code": -12865, "domain": "CoreMediaErrorDomain", "localizedDescription": "The operation couldn’t be completed. (CoreMediaErrorDomain error -12865.)", "localizedFailureReason": "", "localizedRecoverySuggestion": ""}, "target": 13367}
2 replies · 1 boost · 586 views · Jan ’24
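For the error above, one hedged direction: manually copying an .m3u8 and its segments into the file system is not the supported offline path for HLS; AVFoundation's AVAssetDownloadURLSession produces a locally playable package instead. A minimal sketch, with illustrative identifiers and persistence:

import AVFoundation

// Hedged sketch: offline HLS via AVAssetDownloadURLSession rather than hand-copying
// the .m3u8 into the file system. Identifiers and the UserDefaults key are illustrative.
final class HLSDownloader: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!

    override init() {
        super.init()
        let config = URLSessionConfiguration.background(withIdentifier: "hls-downloads")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
    }

    func download(_ url: URL) {
        let asset = AVURLAsset(url: url)
        let task = session.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "My stream",
                                                 assetArtworkData: nil,
                                                 options: nil)
        task?.resume()
    }

    // The local .movpkg location arrives here; persist its relative path for later playback.
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        UserDefaults.standard.set(location.relativePath, forKey: "downloadedStreamPath")
    }
}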
AVFoundation - Access to each eye of a spatial video (MV-HEVC)
Hi, I am looking at displaying some spatial video content captured on iPhone 15 Pros in a side-by-side format. I've read the HEVC Stereo Video Profile provided by Apple, but I am confused about how to access the left and right eye video. Looking at the AVAsset track information, there is one video track, one audio track, and three metadata tracks. Apple's document references the eyes as layers, but I am unsure how to access them. Could anyone provide some guidance on accessing them? Thanks, Will
0 replies · 0 boosts · 489 views · Jan ’24
AVPlayer not Playing on tvOS 17.2
A video that played on tvOS 17 won't play on tvOS 17.2. It isn't true for all videos, or even for all videos of a certain type. This code works fine on tvOS 17, but not on 17.2:

import SwiftUI
import AVKit

struct ContentView: View {
    var body: some View {
        let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!)
        VideoPlayer(player: player)
            .onAppear {
                player.play()
            }
    }
}

I have tried reloading the metadata. I tried making the player from an AVAsset rather than a URL. I can't see what makes it work with some videos and not others, or what is different between tvOS 17 and 17.2.
5 replies · 1 boost · 902 views · Jan ’24
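A hedged debugging sketch for the question above (an assumption about where to look, not a confirmed cause): observing the AVPlayerItem's status and printing its error often surfaces the underlying reason, for example App Transport Security rejecting the plain-http URL or an unsupported format:

import AVKit
import Combine
import SwiftUI

struct DiagnosticPlayerView: View {
    @State private var statusObserver: AnyCancellable?
    private let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!)

    var body: some View {
        VideoPlayer(player: player)
            .onAppear {
                // Watch the item's status instead of a silent black screen.
                statusObserver = player.currentItem?.publisher(for: \.status)
                    .sink { status in
                        if status == .failed {
                            // The error often points at ATS (http vs. https) or an unsupported format.
                            print("Item failed:", player.currentItem?.error ?? "unknown error")
                        } else if status == .readyToPlay {
                            player.play()
                        }
                    }
            }
    }
}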
Who can I contact to remove the gray frame from around the full-screen video player in my iOS mobile application?
Who can I contact to remove the gray frame from around the full-screen video player in my iOS mobile application? This is an Apple iOS feature that I have no control over. The screenshot attached below shows the full-screen view of a video when the iOS phone is held sideways. The issue is that the big gray frame around the video takes up too much space, and it needs to be removed so the video can fill the screen.
2 replies · 0 boosts · 392 views · Jan ’24
IDR keyframes now required by Safari (as of iOS 16.3.1?)
Can we confirm that as of iOS 16.3.1, key frames for MPEG-TS via HLS are mandatory? I've been trying to figure out why https://chaney-field3.click2stream.com/ shows "Playback Error" across Safari, Chrome, Firefox, etc. I ran the diagnostics against one of the m3u8 files generated via Developer Tools (e.g. mediastreamvalidator "https://e1-na7.angelcam.com/cameras/102610/streams/hls/playlist.m3u8?token=" and then hlsreport validation_data.json) and see this particular error:

Video segments MUST start with an IDR frame
Variant #1, IDR missing on 3 of 3

Do Safari and iOS devices explicitly block playback when they don't find one? From what I understand, AngelCam simply acts as a passthrough for the video/audio packets and does no transcoding, but converts the RTSP packets into HLS for web browsers. But IP cameras are constantly streaming their data, and a user connecting to the site may be receiving the video between key frames, so it would likely violate this expectation. From my investigation it also seems like this problem started happening in iOS 16.3? I'm seeing similar reports for other IP cameras here:
https://ipcamtalk.com/threads/blue-iris-ui3.23528/page-194#post-754082
https://www.reddit.com/r/BlueIris/comments/1255d78/ios_164_breaks_ui3_video_decode/
For what it's worth, when I re-encoded the MPEG-TS files (e.g. ffmpeg -i /tmp/streaming-master-m4-na3.bad/segment-375.ts -c:v h264 /tmp/segment-375.ts), it strips the non-key frames at the beginning, and then playback works properly if I host the same files on a static site and have the iOS device connect to it. It seems like Chrome, Firefox, VLC, and ffmpeg are much more forgiving about missing key frames. I'm wondering what the reason is for enforcing this requirement, and can I confirm it's been a recent change?
1 reply · 0 boosts · 464 views · Dec ’23
Playing a specific rectangular ROI of a video?
Is there a way to play a specific rectangular region of interest of a video in an arbitrarily-sized view? Let's say I have a 1080p video but I'm only interested in a sub-region of the full frame. Is there a way to specify a source rect to be displayed in an arbitrary view (SwiftUI view, ideally), and have it play that in real time, without having to pre-render the cropped region? Update: I may have found a solution here: img DOT ly/blog/trim-and-crop-video-in-swift/ (Apple won't allow that URL for some dumb reason)
0 replies · 0 boosts · 415 views · Dec ’23
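For the region-of-interest question above, one direction worth sketching (hedged; the crop rectangle, frame rate, and function name are illustrative, and this is unrelated to the img.ly link): attach an AVMutableVideoComposition to the AVPlayerItem so the crop is applied at playback time, with no pre-rendering:

import AVFoundation

// Hedged sketch: crop an asset to a region of interest at playback time via
// AVPlayerItem.videoComposition. The 30 fps frame timing is an assumption.
func makeCroppedItem(url: URL, roi: CGRect) async throws -> AVPlayerItem {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else {
        throw CocoaError(.fileReadUnknown)
    }
    let duration = try await asset.load(.duration)

    let composition = AVMutableVideoComposition()
    composition.renderSize = roi.size
    composition.frameDuration = CMTime(value: 1, timescale: 30)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: duration)

    // Shift the frame so the ROI's origin lands at (0, 0) of the render rect.
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layerInstruction.setTransform(CGAffineTransform(translationX: -roi.origin.x, y: -roi.origin.y), at: .zero)

    instruction.layerInstructions = [layerInstruction]
    composition.instructions = [instruction]

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return item
}

The returned item can be handed to an AVPlayer wrapped in a SwiftUI VideoPlayer, so the crop is computed per frame during playback rather than baked into a new file.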
Webrtc fullscreen resize issue on iOS17
Since upgrading to iOS 17, WebRTC playback has had problems going fullscreen: the video element rapidly changes its dimensions while taking the full screen size, and the animation looks very glitchy. I'm observing this issue in every WebRTC player available, so I think the problem is in mobile Safari. Is there any way to prevent the video from resizing when it goes fullscreen?
2 replies · 3 boosts · 568 views · Dec ’23
Synchronize `AVCaptureVideoDataOutput` and `AVCaptureAudioDataOutput` for `AVAssetWriter`
I'm building a Camera app where I have two AVCaptureSessions, one for video and one for audio. (See this for an explanation of why I don't just have one). I receive my CMSampleBuffers in the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. Now, when I enable the video stabilization mode "cinematicExtended", the AVCaptureVideoDataOutput has a 1-2 second delay, meaning I will receive my audio CMSampleBuffers 1-2 seconds earlier than my video CMSampleBuffers! This is the code:

func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
    let type = captureOutput is AVCaptureVideoDataOutput ? "Video" : "Audio"
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    print("Incoming \(type) buffer at \(timestamp.seconds) seconds...")
}

Without video stabilization, this logs:

Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107862.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107862.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107862.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...

With video stabilization, this logs:

Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107861.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107861.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107861.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...

As you can see, the video frames arrive almost a full second later than when they are intended to be presented! There are a few guides on how to use AVAssetWriter online, but all recommend starting the AVAssetWriter session once the first video frame arrives - in my case I cannot do that, since the first second of video frames is from before the user even started the recording. I also can't really wait 1 second here, as then I would lose 1 second of audio samples, since those are realtime and not delayed. I also can't really start the session on the first audio frame and drop all video frames until that point, since then the resulting video would start with one blank frame, as the video frame is never exactly on that first audio frame timestamp. Any advice on how I can synchronize this? Here is my code: RecordingSession.swift
1 reply · 0 boosts · 560 views · Dec ’23
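A minimal sketch of one arrangement for the question above (an assumption, not a definitive answer; it accepts the trade-off of cutting both tracks at the moment recording was requested): take a host-clock timestamp when recording is requested, drop buffers with earlier presentation timestamps, and start the writer session at the first surviving buffer of either track, so both tracks keep their original, mutually consistent timestamps:

import AVFoundation
import CoreMedia

// Hedged sketch, not the definitive fix. Assumes both outputs stamp buffers against
// the host time clock, so a single cut point applies to audio and video alike.
// Stabilization's delayed video frames that were captured before the user tapped
// record fall before the cut and are dropped.
final class SyncedWriter {
    private let assetWriter: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput
    private var recordingStartTime: CMTime?
    private var sessionStarted = false

    init(assetWriter: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.assetWriter = assetWriter
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    func startRecording() {
        recordingStartTime = CMClockGetTime(CMClockGetHostTimeClock())
        assetWriter.startWriting()
    }

    func append(_ sampleBuffer: CMSampleBuffer, isVideo: Bool) {
        guard let start = recordingStartTime else { return }
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        // Drop anything captured before the user actually hit record.
        guard pts >= start else { return }

        // Start the session at the first surviving buffer, whichever track it belongs to.
        if !sessionStarted {
            assetWriter.startSession(atSourceTime: pts)
            sessionStarted = true
        }

        let input = isVideo ? videoInput : audioInput
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }
}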
Adding VTT subtitles to a streaming video from an URL
Hi, I started learning SwiftUI a few months ago, and now I'm trying to build my first app :) I am trying to display VTT subtitles from an external URL on a streaming video using AVPlayer and AVMutableComposition. I have been trying for a few days, checking online and in Apple's documentation, but I can't manage to make it work. So far, I managed to display the subtitles, but there is no video or audio playing... Could someone help? Thanks in advance, I hope the code is not too confusing.

//  EpisodeDetailView.swift
//  OroroPlayer_v1
//
//  Created by Juan Valenzuela on 2023-11-25.
//

import AVKit
import SwiftUI

struct EpisodeDetailView4: View {
    @State private var episodeDetailVM = EpisodeDetailViewModel()
    let episodeID: Int
    @State private var player = AVPlayer()
    @State private var subs = AVPlayer()

    var body: some View {
        VideoPlayer(player: player)
            .ignoresSafeArea()
            .task {
                do {
                    try await episodeDetailVM.fetchEpisode(id: episodeID)
                    let episode = episodeDetailVM.episodeDetail
                    guard let videoURLString = episode.url else {
                        print("Invalid videoURL or missing data")
                        return
                    }
                    guard let subtitleURLString = episode.subtitles?[0].url else {
                        print("Invalid subtitleURLs or missing data")
                        return
                    }
                    let videoURL = URL(string: videoURLString)!
                    let subtitleURL = URL(string: subtitleURLString)!
                    let videoAsset = AVURLAsset(url: videoURL)
                    let subtitleAsset = AVURLAsset(url: subtitleURL)
                    let movieWithSubs = AVMutableComposition()
                    let videoTrack = movieWithSubs.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let audioTrack = movieWithSubs.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let subtitleTrack = movieWithSubs.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)

                    if let videoTrackItem = try await videoAsset.loadTracks(withMediaType: .video).first {
                        try await videoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                              of: videoTrackItem, at: .zero)
                    }
                    if let audioTrackItem = try await videoAsset.loadTracks(withMediaType: .audio).first {
                        try await audioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                              of: audioTrackItem, at: .zero)
                    }
                    if let subtitleTrackItem = try await subtitleAsset.loadTracks(withMediaType: .text).first {
                        try await subtitleTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                                 of: subtitleTrackItem, at: .zero)
                    }

                    let playerItem = AVPlayerItem(asset: movieWithSubs)
                    player = AVPlayer(playerItem: playerItem)
                    let playerController = AVPlayerViewController()
                    playerController.player = player
                    playerController.player?.play()
                    // player.play()
                } catch {
                    print("Error: \(error.localizedDescription)")
                }
            }
    }
}

#Preview {
    EpisodeDetailView4(episodeID: 39288)
}
0 replies · 0 boosts · 533 views · Nov ’23
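One hedged thing to check for the post above (an assumption, not a diagnosis): AVMutableComposition can only insert tracks from file-based assets, so if the episode URL is a streaming source such as HLS, its video and audio tracks cannot be composed, which would match subtitles working while video and audio stay silent. A quick check:

import AVFoundation

// Hedged sketch: `isComposable` is false for assets whose tracks cannot be inserted
// into an AVMutableComposition (streaming sources typically fall in that category).
func canCompose(_ url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    return try await asset.load(.isComposable)
}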
iOS: Recording from two AVCaptureSessions is out of sync
Hey all! I'm trying to record video from one AVCaptureSession and audio from another AVCaptureSession. The reason I'm using two separate capture sessions is that I want to disable and enable the audio one on the fly without interrupting the video session. I believe Snapchat and Instagram also use this approach, as background music keeps playing when you open the Camera and only slightly stutters (caused by the AVAudioSession.setCategory(..) call) once you start recording. However, I couldn't manage to synchronize the two AVCaptureSessions, and whenever I try to record CMSampleBuffers into an AVAssetWriter, the video and audio frames are out of sync. Here's a quick YouTube video showcasing the offset: https://youtube.com/shorts/jF1arThiALc I notice two bugs:
1. The video and audio tracks are out of sync - video frames start almost a second before the first audio sample starts to play back, and towards the end the delay is also noticeable because the video stops/freezes while the audio continues to play.
2. The video contains frames from BEFORE I even pressed startRecording(), as if my iPhone had a time machine!
I am not sure how the second one can even happen, so at this point I'm asking for help if anyone has any experience with that. Roughly my code:

let videoCaptureSession = AVCaptureSession()
let audioCaptureSession = AVCaptureSession()

func setup() {
    // ...adding videoCaptureSession outputs (AVCaptureVideoDataOutput)
    // ...adding audioCaptureSession outputs (AVCaptureAudioDataOutput)
    videoCaptureSession.startRunning()
}

func startRecording() {
    self.assetWriter = AVAssetWriter(outputURL: tempURL, fileType: .mov)
    self.videoWriter = AVAssetWriterInput(...)
    assetWriter.add(videoWriter)
    self.audioWriter = AVAssetWriterInput(...)
    assetWriter.add(audioWriter)
    AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker])
    audioCaptureSession.startRunning() // <-- lazy start that
}

func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
    // Record Video Frame/Audio Sample to File in custom `RecordingSession` (AVAssetWriter)
    if isRecording {
        switch captureOutput {
        case is AVCaptureVideoDataOutput:
            self.videoWriter.append(sampleBuffer)
        case is AVCaptureAudioDataOutput:
            // TODO: Do I need to update the PresentationTimestamp here to synchronize it to the other capture session? or not?
            self.audioWriter.append(sampleBuffer)
        default:
            break
        }
    }
}

Full code here:
- Video Capture Session Configuration
- Audio Capture Session Configuration
- Later on, startRecording() call
- RecordingSession, my AVAssetWriter abstraction
- Audio Session activation
- And finally, writing the CMSampleBuffers
1 reply · 0 boosts · 776 views · Nov ’23
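One hedged thing to rule out for the post above (an assumption, not a confirmed cause): the two sessions may not stamp their buffers against the same clock. On iOS 15.4 and later each session exposes a synchronizationClock, and timestamps can be converted into a single reference clock before appending:

import AVFoundation
import CoreMedia

// Hedged sketch: convert a buffer's PTS from one capture session's clock into
// another session's clock before handing it to the asset writer. If both sessions
// already share a clock, this is a no-op and the cause lies elsewhere.
func retime(_ sampleBuffer: CMSampleBuffer,
            from sourceSession: AVCaptureSession,
            to referenceSession: AVCaptureSession) -> CMTime {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    guard let sourceClock = sourceSession.synchronizationClock,
          let referenceClock = referenceSession.synchronizationClock else {
        return pts // fall back to the original timestamp
    }
    return CMSyncConvertTime(pts, from: sourceClock, to: referenceClock)
}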
Rotate a CMSampleBuffer in swift5
I have a use case where I need to rotate a CMSampleBuffer from landscape to portrait. I have written rough code for it, but I am still facing issues when appending the rotated sample buffer to the writer input. Here's the code:

guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    return nil
}

// Get the dimensions of the image buffer
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)

// Determine if the image needs to be rotated
let shouldRotate = width > height

// Create a CIImage from the buffer
var image = CIImage(cvImageBuffer: imageBuffer)

// Rotate the CIImage if necessary
if shouldRotate {
    image = image.oriented(forExifOrientation: 6) // Rotate 90 degrees clockwise
}

let originalPixelFormatType = CVPixelBufferGetPixelFormatType(imageBuffer)

// Create a new pixel buffer
var newPixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, height, width, originalPixelFormatType, nil, &newPixelBuffer)
guard status == kCVReturnSuccess, let pixelBuffer = newPixelBuffer else {
    return nil
}
CVBufferPropagateAttachments(imageBuffer, newPixelBuffer!)

// Render the rotated image onto the new pixel buffer
let context = CIContext()
context.render(image, to: pixelBuffer)
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

var videoInfo: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault, imageBuffer: newPixelBuffer!, formatDescriptionOut: &videoInfo)

var sampleTimingInfo = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                          presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
                                          decodeTimeStamp: CMSampleBufferGetDecodeTimeStamp(sampleBuffer))

var newSampleBuffer: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault, imageBuffer: newPixelBuffer!, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: videoInfo!, sampleTiming: &sampleTimingInfo, sampleBufferOut: &newSampleBuffer)

let attachments: CFArray! = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true)
let dictionary = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
if let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true) as? [CFDictionary] {
    for attachment in attachmentsArray {
        for (key, value) in attachment as! Dictionary<CFString, Any> {
            if let value = value as? CFTypeRef {
                CMSetAttachment(newSampleBuffer!, key: key, value: value, attachmentMode: kCMAttachmentMode_ShouldPropagate)
            }
        }
    }
}

return newSampleBuffer!

The error that I am getting while appending the frame is:

Error occured, isVideo = false, status = 3, Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x282b87390 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}

I read online that this error might be due to a different PixelFormatType. But how can that be, since I am obtaining the PixelFormatType from the buffer itself? If you want to see the difference between the original and rotated sample buffer: https://www.diffchecker.com/V0a55kCB/ Thanks in advance!
0 replies · 0 boosts · 576 views · Nov ’23
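A hedged sketch of an alternative direction for the rotation question above: rather than re-rendering every buffer, rotation is often handled as metadata, either by asking the capture connection for portrait buffers or by putting a transform on the AVAssetWriterInput so players rotate the track at playback. Identifiers are illustrative:

import AVFoundation

// Hedged sketch: rotate by configuration/metadata instead of rebuilding CMSampleBuffers.
func configurePortraitOutput(videoOutput: AVCaptureVideoDataOutput,
                             writerInput: AVAssetWriterInput) {
    // Option 1: ask the connection to deliver rotated buffers (uses a hardware path when available).
    if let connection = videoOutput.connection(with: .video),
       connection.isVideoOrientationSupported {
        connection.videoOrientation = .portrait
    }

    // Option 2: keep landscape pixels and let players rotate the track at playback time.
    writerInput.transform = CGAffineTransform(rotationAngle: .pi / 2)
}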
[ASC] Preview Videos never get processed after upload
When I upload a preview video to ASC (which conforms to the preview specifications), the video uploads correctly at first. During the upload to ASC, it shows a blurred image of the first video frame. So far so good. But once the upload is finished, the video turns into a "cloud" image and says it is currently being processed. The problem is that it gets stuck in the "currently processing" status forever. I waited a few days, but processing never ended. To make it worse, the landscape "cloud" image turns into a portrait when I come back to the ASC media center. The problem:
- is reproducible
- occurs on different video files
- occurs on different app IDs
- occurs on all iPhone resolutions
This is a serious bug. I can't finalise my app submission. Any ideas?
0 replies · 0 boosts · 320 views · Nov ’23
AVPlayerViewController doesn't stop playing after closing its Window
Hello! I'm trying to display an AVPlayerViewController in a separate WindowGroup - my main window opens a new window where the only element is a struct that implements UIViewControllerRepresentable for AVPlayerViewController:

@MainActor
public struct AVPlayerView: UIViewControllerRepresentable {
    public let assetName: String

    public init(assetName: String) {
        self.assetName = assetName
    }

    public func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = AVPlayer()
        return controller
    }

    public func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        Task {
            if context.coordinator.assetName != assetName {
                let url = Bundle.main.url(forResource: assetName, withExtension: ".mp4")
                guard let url else { return }
                controller.player?.replaceCurrentItem(with: AVPlayerItem(url: url))
                controller.player?.play()
                context.coordinator.assetName = assetName
            }
        }
    }

    public static func dismantleUIViewController(_ controller: AVPlayerViewController, coordinator: Coordinator) {
        controller.player?.pause()
        controller.player = nil
    }

    public func makeCoordinator() -> Coordinator {
        return Coordinator()
    }

    public class Coordinator: NSObject {
        public var assetName: String?
    }
}

WindowGroup(id: Window.videoPlayer.rawValue) {
    AVPlayerView(assetName: "wwdc")
        .onDisappear {
            print("DISAPPEAR")
        }
}

This displays the video player in non-inline mode and plays the video. The problem appears when I try to close the video player's window using the close button: sound from the video continues playing in the background. I've tried to clean up the state myself using the dismantleUIViewController and onDisappear methods, but they are not called by the system (it works correctly if a window doesn't contain an AVPlayerView). This appears on Xcode 15.1 Beta 3 (I haven't tested it on other versions). Is there something I'm doing incorrectly that is causing this issue, or is it a bug and I need to wait until it's fixed?
0 replies · 1 boost · 356 views · Nov ’23
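A hedged workaround sketch for the window question above (untested against this exact setup, and it swaps the post's AVPlayerViewController wrapper for SwiftUI's VideoPlayer): own the player at the window level and tear it down when the window's scene phase leaves the foreground, instead of relying on dismantleUIViewController or onDisappear, which the post reports never fire:

import AVKit
import SwiftUI

struct VideoPlayerWindowContent: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var player = AVPlayer()

    var body: some View {
        VideoPlayer(player: player)
            .onAppear {
                // "wwdc" mirrors the asset name used in the post.
                if let url = Bundle.main.url(forResource: "wwdc", withExtension: "mp4") {
                    player.replaceCurrentItem(with: AVPlayerItem(url: url))
                    player.play()
                }
            }
            .onChange(of: scenePhase) { _, newPhase in
                // When the window closes, its scene should leave .active; stop audio then.
                if newPhase != .active {
                    player.pause()
                    player.replaceCurrentItem(with: nil)
                }
            }
    }
}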
Performance issues with `AVAssetWriter`
Hey all! I'm trying to build a Camera app that records video and audio buffers (AVCaptureVideoDataOutput and AVCaptureAudioDataOutput) to an mp4/mov file using AVAssetWriter. When creating the recording session, I noticed that it blocks for around 5-7 seconds before starting the recording, so I dug deeper to find out why. This is how I create my AVAssetWriter:

let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)
let videoWriter = self.createVideoWriter(...)
assetWriter.add(videoWriter)
let audioWriter = self.createAudioWriter(...)
assetWriter.add(audioWriter)
assetWriter.startWriting()

There are two slow parts in that code:

1. The createAudioWriter(...) function takes ages! This is how I create the audio AVAssetWriterInput:

// audioOutput is my AVCaptureAudioDataOutput, audioInput is the microphone
let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
let format = audioInput.device.activeFormat.formatDescription
let audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: settings, sourceFormatHint: format)
audioWriter.expectsMediaDataInRealTime = true

The above code takes up to 3000ms on an iPhone 11 Pro! When I remove the recommended settings and just pass nil as outputSettings:

audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
audioWriter.expectsMediaDataInRealTime = true

...it initializes almost instantly - something like 30 to 50ms.

2. Starting the AVAssetWriter takes ages! Calling this method:

assetWriter.startWriting()

...takes 3000 to 5000ms on my iPhone 11 Pro!

Does anyone have any ideas why this is so slow? Am I doing something wrong? It feels like passing nil as the outputSettings is not a good idea, and recommendedAudioSettingsForAssetWriter should be the way to go, but 3 seconds of initialization time is not acceptable. Here's the full code: RecordingSession.swift from react-native-vision-camera. This gets called from here. I'd appreciate any help, thanks!
1 reply · 1 boost · 771 views · Nov ’23
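A hedged mitigation sketch for the performance question above (it moves the cost off the record tap rather than explaining it): build the writer and call startWriting() ahead of time on a background queue, so starting a recording only needs startSession(atSourceTime:). Class and queue names are illustrative:

import AVFoundation

// Hedged sketch: pre-warm the expensive AVAssetWriter setup before the user taps record.
final class PrewarmedRecorder {
    private let setupQueue = DispatchQueue(label: "recorder.setup", qos: .userInitiated)
    private(set) var assetWriter: AVAssetWriter?

    // Call this well before recording, with the inputs already configured.
    func prepare(outputURL: URL, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        setupQueue.async {
            guard let writer = try? AVAssetWriter(outputURL: outputURL, fileType: .mov) else { return }
            writer.add(videoInput)
            writer.add(audioInput)
            writer.startWriting() // pay the multi-second cost up front
            self.assetWriter = writer
        }
    }

    // Called when the user taps record; by now startWriting() has already happened.
    func startRecording(at sourceTime: CMTime) {
        setupQueue.async {
            self.assetWriter?.startSession(atSourceTime: sourceTime)
        }
    }
}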
Live video streaming issue: Blank screen on iOS 17 with AVMutablePlayer/AVPlayer
I'm encountering an issue with live video streaming on iOS 17 using AVMutablePlayer. I'm utilizing a wss URL to stream videos by capturing data in chunks (e.g., 5 seconds) and playing it. Upon completion of the 5-second segment, I load another 5 seconds using self.player.replaceCurrentItem(with: nextPlayerItem). Despite listening to events via self.player.currentItem?.observe, the functionality appears to be working well on iOS 16 but consistently displays a blank video on iOS 17.

private func playNext() {
    let nextSet = self.dataCollector.getNextItem(length: self.configuration.frameDelay)
    if nextSet.count == self.configuration.frameDelay {
        var playerTime: Int = self.player.currentItem != nil ? Int(CMTimeGetSeconds(player.currentTime())) : 0
        var allData = Data()
        allData.appendAll(dataSet: dataCollector.getFileType())
        nextSet.forEach { (data) in
            playerTime += 1
            allData.append(data.getFragmentData())
            self.currentFragmentTimes.updateValue(data.getFragmentTime(), forKey: playerTime)
        }
        if (allData.count > 0) {
            self.player.replaceCurrentItem(with: AVPlayerItem(asset: AVMutableMovie(data: allData, options: nil)))
            self.playerInitializedTime = nil
            self.player.play()
        }
    }
}
0 replies · 0 boosts · 496 views · Nov ’23
Action Mode video mode in Swift
I am using the AVCapture API and have tried every stabilization and video setting that I am aware of; however, I am unable to get the same quality of frames that Action mode gets in the native iOS Camera app during motion video. How can I access the same API or settings that Action mode uses within my app?
0 replies · 1 boost · 305 views · Nov ’23
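For the Action mode question above, as far as public API goes the closest documented knob appears to be the stabilization mode on the capture connection; this sketch is an assumption about the nearest equivalent, not a claim that it reproduces Action mode's output:

import AVFoundation

// Hedged sketch: enable the most aggressive stabilization mode the active format supports.
func enableStrongestStabilization(on output: AVCaptureVideoDataOutput, device: AVCaptureDevice) {
    guard let connection = output.connection(with: .video) else { return }
    if device.activeFormat.isVideoStabilizationModeSupported(.cinematicExtended) {
        connection.preferredVideoStabilizationMode = .cinematicExtended
    } else if device.activeFormat.isVideoStabilizationModeSupported(.cinematic) {
        connection.preferredVideoStabilizationMode = .cinematic
    }
}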
Unable to manually parse the HEVC video with alpha format
I wish to parse the bitstream of HEVC video with alpha (the specific video format is described in the WWDC2019 session: https://developer.apple.com/videos/play/wwdc2019/506). Taking the 'puppets_with_alpha_hevc.mov' file from 'Using HEVC Video with Alpha' as an example, I would first extract the HEVC bitstream and then parse its fields. When it comes to the VPS field, as I reach the vps_extension, I find that the bitstream in 'puppets_with_alpha_hevc.mov' does not conform to the HEVC standard document, preventing further parsing. Besides the 'HEVC Video with Alpha Interoperability Profile.pdf', are there any more detailed documents describing the HEVC video with alpha format? Also, is there anyone who can encode or decode HEVC-with-alpha videos on systems other than macOS?
0 replies · 0 boosts · 545 views · Nov ’23
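For the bitstream question above, a hedged sketch of one way to get at the raw VPS/SPS/PPS bytes (including any vps_extension payload) without demuxing the file by hand, by reading them from the track's format description; names are illustrative:

import AVFoundation
import CoreMedia

// Hedged sketch: dump the HEVC parameter sets carried in the video track's format
// description so they can be compared against the spec or another parser.
func dumpHEVCParameterSets(from url: URL) async throws {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first,
          let desc = try await track.load(.formatDescriptions).first else { return }

    // First call only queries how many parameter sets the description carries.
    var count = 0
    CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(desc, parameterSetIndex: 0,
        parameterSetPointerOut: nil, parameterSetSizeOut: nil,
        parameterSetCountOut: &count, nalUnitHeaderLengthOut: nil)

    for index in 0..<count {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(desc, parameterSetIndex: index,
            parameterSetPointerOut: &pointer, parameterSetSizeOut: &size,
            parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
        if let pointer {
            let bytes = Data(bytes: pointer, count: size)
            print("Parameter set \(index): \(bytes.map { String(format: "%02x", $0) }.joined())")
        }
    }
}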