Hi,
I'm trying to wrap my head around Xcode's FXPlug. I already sell Final Cut Pro titles for a company; these titles were built in Motion.
However, the company wants me to move them into an app, and I'm looking for any help on how to accomplish this.
What the app should do:
Allow users with an active subscription to our website to access the titles within FCPX, and deny access to users who are not active subscribers.
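Since titles reach FCPX as Motion template bundles, one way to sketch the gating is below. The endpoint, JSON shape, and bundle layout are all hypothetical placeholders, and a sandboxed Mac app would need user-granted access to the Movies folder; this is an outline, not a working implementation.

import Foundation

// Sketch only: "api.example.com" and the response shape stand in for our
// website's real subscription API.
struct SubscriptionStatus: Decodable { let isActive: Bool }

func syncTitles(userToken: String) async throws {
    var request = URLRequest(url: URL(string: "https://api.example.com/subscription/status")!)
    request.setValue("Bearer \(userToken)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    let status = try JSONDecoder().decode(SubscriptionStatus.self, from: data)

    // FCPX discovers titles in ~/Movies/Motion Templates/Titles/<Category>
    // (the folders may carry a ".localized" suffix on disk).
    let titlesFolder = FileManager.default.homeDirectoryForCurrentUser
        .appendingPathComponent("Movies/Motion Templates/Titles/MyCompany")

    if status.isActive {
        // Install the bundled templates ("Titles" as a bundle resource is an
        // assumption about how the app would package them).
        if let source = Bundle.main.url(forResource: "Titles", withExtension: nil),
           !FileManager.default.fileExists(atPath: titlesFolder.path) {
            try FileManager.default.copyItem(at: source, to: titlesFolder)
        }
    } else {
        // Revoke access by removing the installed templates.
        try? FileManager.default.removeItem(at: titlesFolder)
    }
}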
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:
VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES
Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:
{
    mediaType: 'vide'
    mediaSubType: 'av01'
    mediaSpecific: {
        codecType: 'av01'    dimensions: 394 x 852
    }
    extensions: {{
        CVFieldCount = 1;
        CVImageBufferChromaLocationBottomField = Left;
        CVImageBufferChromaLocationTopField = Left;
        CVPixelAspectRatio = {
            HorizontalSpacing = 1;
            VerticalSpacing = 1;
        };
        FullRangeVideo = 0;
    }}
}
but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume).
So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿.
VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1.
As of today I am using Xcode 15.0 with the iOS 17.0 SDK.
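The direction I'm currently experimenting with is the sketch below: supply the stream's 'av1C' codec configuration record (extracted from the container) as a sample-description extension atom, by analogy with how 'avcC'/'hvcC' work for AVC and HEVC. The extension layout is my assumption, not something I've found documented:

import CoreMedia

// Sketch: build an AV1 format description by passing the 'av1C' configuration
// record from the container as a sample-description extension atom. That this
// atom is what VTDecompressionSession wants is an assumption, by analogy with
// 'avcC' (AVC) and 'hvcC' (HEVC).
func makeAV1FormatDescription(av1C: Data, width: Int32, height: Int32) throws -> CMFormatDescription {
    let extensions: [String: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms as String: ["av1C": av1C]
    ]
    var formatDescription: CMFormatDescription?
    let status = CMVideoFormatDescriptionCreate(
        allocator: kCFAllocatorDefault,
        codecType: kCMVideoCodecType_AV1,
        width: width,
        height: height,
        extensions: extensions as CFDictionary,
        formatDescriptionOut: &formatDescription)
    guard status == noErr, let formatDescription else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
    return formatDescription
}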
Hi,
I started learning SwiftUI a few months ago, and now I'm trying to build my first app :)
I am trying to display VTT subtitles from an external URL on top of a streaming video using AVPlayer and AVMutableComposition.
I have been trying for a few days, checking online and in Apple's documentation, but I can't manage to make it work. So far, I have managed to display the subtitles, but there is no video or audio playing...
Could someone help?
Thanks in advance, I hope the code is not too confusing.
// EpisodeDetailView.swift
// OroroPlayer_v1
//
// Created by Juan Valenzuela on 2023-11-25.
//
import AVKit
import SwiftUI

struct EpisodeDetailView4: View {
    @State private var episodeDetailVM = EpisodeDetailViewModel()
    let episodeID: Int
    @State private var player = AVPlayer()

    var body: some View {
        VideoPlayer(player: player)
            .ignoresSafeArea()
            .task {
                do {
                    try await episodeDetailVM.fetchEpisode(id: episodeID)
                    let episode = episodeDetailVM.episodeDetail
                    guard let videoURLString = episode.url,
                          let videoURL = URL(string: videoURLString) else {
                        print("Invalid videoURL or missing data")
                        return
                    }
                    guard let subtitleURLString = episode.subtitles?.first?.url,
                          let subtitleURL = URL(string: subtitleURLString) else {
                        print("Invalid subtitleURL or missing data")
                        return
                    }
                    let videoAsset = AVURLAsset(url: videoURL)
                    let subtitleAsset = AVURLAsset(url: subtitleURL)

                    // Build a composition merging the video's tracks with the
                    // external VTT subtitle track.
                    let movieWithSubs = AVMutableComposition()
                    let videoTrack = movieWithSubs.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let audioTrack = movieWithSubs.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let subtitleTrack = movieWithSubs.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)

                    // Load the source duration once and reuse it for all three insertions.
                    let duration = try await videoAsset.load(.duration)
                    let timeRange = CMTimeRange(start: .zero, duration: duration)

                    if let videoTrackItem = try await videoAsset.loadTracks(withMediaType: .video).first {
                        try videoTrack?.insertTimeRange(timeRange, of: videoTrackItem, at: .zero)
                    }
                    if let audioTrackItem = try await videoAsset.loadTracks(withMediaType: .audio).first {
                        try audioTrack?.insertTimeRange(timeRange, of: audioTrackItem, at: .zero)
                    }
                    if let subtitleTrackItem = try await subtitleAsset.loadTracks(withMediaType: .text).first {
                        try subtitleTrack?.insertTimeRange(timeRange, of: subtitleTrackItem, at: .zero)
                    }

                    // Feed the composition to the player that VideoPlayer is
                    // already observing, rather than building a separate
                    // AVPlayer or an AVPlayerViewController that is never presented.
                    player.replaceCurrentItem(with: AVPlayerItem(asset: movieWithSubs))
                    player.play()
                } catch {
                    print("Error: \(error.localizedDescription)")
                }
            }
    }
}

#Preview {
    EpisodeDetailView4(episodeID: 39288)
}
Is there a way to play a specific rectangular region of interest of a video in an arbitrarily-sized view?
Let's say I have a 1080p video but I'm only interested in a sub-region of the full frame. Is there a way to specify a source rect to be displayed in an arbitrary view (a SwiftUI view, ideally) and have it play in real time, without having to pre-render the cropped region?
Update: I may have found a solution here: img DOT ly/blog/trim-and-crop-video-in-swift/ (Apple won't allow that URL for some dumb reason)
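One real-time approach that avoids pre-rendering is to attach a Core Image crop as the player item's video composition; a sketch, assuming a file-based (non-HLS) asset and a hypothetical cropRect in pixel coordinates (bottom-left origin, per Core Image):

import AVFoundation
import CoreImage

// Sketch: crop playback to a region of interest in real time by letting the
// player apply a Core Image crop per frame. cropRect is an example value in
// the video's pixel space.
func applyCrop(to item: AVPlayerItem, asset: AVAsset, cropRect: CGRect) {
    let composition = AVMutableVideoComposition(asset: asset) { request in
        let cropped = request.sourceImage
            .cropped(to: cropRect)
            // Shift the cropped region back to the origin of the render frame.
            .transformed(by: CGAffineTransform(translationX: -cropRect.minX,
                                               y: -cropRect.minY))
        request.finish(with: cropped, context: nil)
    }
    composition.renderSize = cropRect.size
    item.videoComposition = composition
}

The resulting item can then be handed to an AVPlayer shown by VideoPlayer, which sizes the cropped output to whatever view it is given.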
Who can I contact to remove the gray frame from around the full-screen video player in my iOS mobile application? This is an Apple iOS feature that I have no control over.
The screenshot attached below shows the full-screen view of a video when the iOS phone is held sideways. The issue is the big gray frame around the video: it takes up too much space and needs to be removed so the video can fill the screen.
After encoding with the VideoToolbox library, I send the stream over NDI to the network. A Mac can receive the NDI source and display the picture, but on Windows only the NDI source name appears, with no picture displayed.
I want to know whether VideoToolbox cannot produce output that Windows receivers decode correctly, and how to solve this problem.
I am decoding using the API from the official documentation, but the output callback is never triggered.
Hi,
I am looking at displaying some spatial video content captured on iPhone 15 Pros in a side-by-side format. I've read the HEVC Stereo Video Profile provided by Apple, but I am confused about how to access the left-eye and right-eye video. Looking at the AVAsset track information, there is one video track, one sound track, and three metadata tracks.
Apple's document references the two eyes as layers, but I am unsure how to access them. Could anyone provide some guidance? The closest I've gotten is the guess sketched below.
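This sketch asks the decoder for both MV-HEVC layers when reading the single video track; the layer IDs [0, 1] are my assumption (0 is typically the hero/left eye), and the authoritative IDs live in the track's format description:

import AVFoundation
import VideoToolbox

// Sketch: request both MV-HEVC layers from the decoder when reading the
// video track. The layer IDs [0, 1] are an assumption; real code should
// read them from the track's format description.
func makeStereoTrackOutput(for videoTrack: AVAssetTrack) -> AVAssetReaderTrackOutput {
    let outputSettings: [String: Any] = [
        AVVideoDecompressionPropertiesKey: [
            kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs as String: [0, 1]
        ]
    ]
    // Each sample buffer copied from this output should then carry a
    // CMTaggedBufferGroup whose tags identify the left and right eyes.
    return AVAssetReaderTrackOutput(track: videoTrack, outputSettings: outputSettings)
}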
Thanks,
Will
Hello, can anybody help me with this? I am downloading a video to the file system, and when I give that URL to the player it gives me the error below. This only happens with m3u8; other formats like mp4 work fine locally. Please help!
{"error": {"code": -12865, "domain": "CoreMediaErrorDomain", "localizedDescription": "The operation couldn’t be completed. (CoreMediaErrorDomain error -12865.)", "localizedFailureReason": "", "localizedRecoverySuggestion": ""}, "target": 13367}
When I try to play video on my Apple Vision Pro simulator using a custom view with an AVPlayerLayer (as seen in my VideoPlayerView below), nothing displays but a black screen, while the audio for the video I'm trying to play plays in the background. I've tried everything I can think of to resolve this issue, but to no avail.
import SwiftUI
import AVFoundation
import AVKit

// UIView subclass whose layout pass keeps the player layer sized to the view.
final class PlayerUIView: UIView {
    let playerLayer = AVPlayerLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        playerLayer.videoGravity = .resizeAspect
        layer.addSublayer(playerLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        // updateUIView is not called on every layout change, so the layer's
        // frame must be kept in sync here; otherwise it can stay at .zero
        // and render nothing while audio keeps playing.
        playerLayer.frame = bounds
    }
}

struct VideoPlayerView: UIViewRepresentable {
    var player: AVPlayer

    func makeUIView(context: Context) -> PlayerUIView {
        let view = PlayerUIView(frame: .zero)
        view.playerLayer.player = player
        return view
    }

    func updateUIView(_ uiView: PlayerUIView, context: Context) {
        uiView.playerLayer.player = player
    }
}
I have noticed, however, that if I use the default VideoPlayer (as demonstrated below) rather than my custom VideoPlayerView, the video displays just fine, but any modifiers I use on that VideoPlayer (like the ones in my custom struct above) cause the video to display black while the audio plays in the background.
import SwiftUI
import AVKit

struct MyView: View {
    var player: AVPlayer

    var body: some View {
        ZStack {
            VideoPlayer(player: player)
        }
    }
}

Does anyone know a solution to this problem, so that the video displays properly instead of appearing as a black screen with the audio playing in the background?
HELP! How can I play a spatial video in my own Vision Pro app the way the official Photos app does? Following the guidance of the official developer documentation, I've used the AVKit API to play a spatial video in the Xcode Vision Pro simulator. The video plays, but it looks different from what Photos shows: in Photos the edges of the video look soft and fuzzy, while in my own app the edges are sharp.
How can I play spatial video in my own app with the same effect as in Photos?
Does Video Toolbox's compression session yield data I can decompress on a different device that doesn't have Apple's decompression, i.e., so I can send the data over the network to devices that aren't necessarily Apple's?
Or is the format proprietary rather than just regular H.264 (for example)?
If I can decompress without Video Toolbox, may I have a reference to some examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
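As far as I can tell, the bitstream itself is standard H.264; VideoToolbox just hands it back in AVCC form (length-prefixed NAL units, with the SPS/PPS stored in the format description rather than inline), while cross-platform decoders such as FFmpeg usually expect Annex B. A sketch of pulling the parameter sets out with start codes:

import CoreMedia

// Sketch: extract SPS/PPS from a VideoToolbox H.264 format description and
// prefix them with Annex B start codes, so a non-Apple decoder can consume
// the stream. Each sample buffer's 4-byte NAL length prefixes must likewise
// be rewritten as 0x00000001 start codes before sending.
func annexBParameterSets(from formatDescription: CMFormatDescription) -> Data {
    let startCode = Data([0x00, 0x00, 0x00, 0x01])
    var result = Data()
    var parameterSetCount = 0
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        formatDescription, parameterSetIndex: 0, parameterSetPointerOut: nil,
        parameterSetSizeOut: nil, parameterSetCountOut: &parameterSetCount,
        nalUnitHeaderLengthOut: nil)
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            formatDescription, parameterSetIndex: index, parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size, parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        if let pointer {
            result.append(startCode)
            result.append(pointer, count: size)
        }
    }
    return result
}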
Is AVQT capable of measuring the encoding quality of PQ- or HLG-based content beyond SDR? If so, how can I leverage it? If not, is there a roadmap for when this type of capability will be enabled in the tool?
Does the new MV-HEVC Vision Pro spatial video format support an alpha channel? I've tried converting a side-by-side video with an alpha channel using this Apple example project, but the alpha channel is removed:
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc
Hi everyone, I need to add a spatial video maker to my app, which was written in Objective-C. I found some reference code in Swift; can you help me convert the code to Objective-C?
let left = CMTaggedBuffer(
    tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)],
    pixelBuffer: leftEyeBuffer)
let right = CMTaggedBuffer(
    tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
    pixelBuffer: rightEyeBuffer)
let result = adaptor.appendTaggedBuffers(
    [left, right], withPresentationTime: leftPresentationTs)
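For what it's worth, here is a sketch of the equivalent in Objective-C using the C-level CMTag/CMTaggedBufferGroup API. It assumes `adaptor` is an AVAssetWriterInputTaggedPixelBufferGroupAdaptor and that the pixel buffers and layer indices exist as in the Swift code, so treat the details as unverified:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Build one tag collection per eye: a stereo-view tag plus a video-layer-ID tag.
CMTag leftTags[2] = {
    kCMTagStereoLeftEye,
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, leftEyeLayerIndex)
};
CMTag rightTags[2] = {
    kCMTagStereoRightEye,
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, rightEyeLayerIndex)
};
CMTagCollectionRef leftCollection = NULL;
CMTagCollectionRef rightCollection = NULL;
CMTagCollectionCreate(kCFAllocatorDefault, leftTags, 2, &leftCollection);
CMTagCollectionCreate(kCFAllocatorDefault, rightTags, 2, &rightCollection);

// Pair the tag collections with their pixel buffers in one tagged buffer group.
NSArray *tagCollections = @[(__bridge id)leftCollection, (__bridge id)rightCollection];
NSArray *buffers = @[(__bridge id)leftEyeBuffer, (__bridge id)rightEyeBuffer];
CMTaggedBufferGroupRef group = NULL;
CMTaggedBufferGroupCreate(kCFAllocatorDefault,
                          (__bridge CFArrayRef)tagCollections,
                          (__bridge CFArrayRef)buffers,
                          &group);

// Append both eyes for this frame, then release the CF objects.
BOOL result = [adaptor appendTaggedPixelBufferGroup:group
                               withPresentationTime:leftPresentationTs];
CFRelease(group);
CFRelease(leftCollection);
CFRelease(rightCollection);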
How can I update the cookies of a previously set m3u8 video in AVPlayer without creating a new AVURLAsset and replacing the AVPlayer's current item with it?
Hi,
Could you please explain how to use SF Symbols animations in Final Cut? I would greatly appreciate your help.
Thank you!
So I've been trying for weeks now to implement a compression mechanism in my app project that compresses MV-HEVC video files in-app without stripping the videos of their 3D properties, but every implementation I have tried has either stripped the encoded MV-HEVC file of its 3D properties (making the video monoscopic) or crashed with a fatal error. I've read the Reading multiview 3D video files and Converting side-by-side 3D video to multiview HEVC documentation, but I was unable to come up with anything useful myself.
My question therefore is: how do you compress/encode an MV-HEVC video file in-app while preserving its stereoscopic/3D properties? Below is the best implementation I was able to come up with, which simply compresses uploaded MV-HEVC videos at an arbitrary bit rate. With this implementation (my compressVideo function), the MV-HEVC files are compressed fine, but the end result is the loss of the file's stereoscopic/3D properties. A sketch of the multiview writer setup I'm now considering follows after the code.
If anyone could point me in the right direction with anything, it would be greatly, greatly appreciated.
My current implementation (which strips MV-HEVC videos of their stereoscopic/3D properties):
static func compressVideo(sourceUrl: URL, bitrate: Int, completion: @escaping (Result<URL, Error>) -> Void) {
    let asset = AVAsset(url: sourceUrl)
    asset.loadTracks(withMediaType: .video) { videoTracks, videoError in
        guard let videoTrack = videoTracks?.first, videoError == nil else {
            completion(.failure(videoError ?? NSError(domain: "VideoUploader", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to load video track"])))
            return
        }
        asset.loadTracks(withMediaType: .audio) { audioTracks, audioError in
            guard let audioTrack = audioTracks?.first, audioError == nil else {
                completion(.failure(audioError ?? NSError(domain: "VideoUploader", code: -2, userInfo: [NSLocalizedDescriptionKey: "Failed to load audio track"])))
                return
            }
            let outputUrl = sourceUrl.deletingLastPathComponent()
                .appendingPathComponent(UUID().uuidString)
                .appendingPathExtension("mov")
            guard let assetReader = try? AVAssetReader(asset: asset),
                  let assetWriter = try? AVAssetWriter(outputURL: outputUrl, fileType: .mov) else {
                completion(.failure(NSError(domain: "VideoUploader", code: -3, userInfo: [NSLocalizedDescriptionKey: "AssetReader/Writer initialization failed"])))
                return
            }
            let videoReaderSettings: [String: Any] = [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB
            ]
            let videoSettings: [String: Any] = [
                AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: bitrate],
                AVVideoCodecKey: AVVideoCodecType.hevc,
                AVVideoHeightKey: videoTrack.naturalSize.height,
                AVVideoWidthKey: videoTrack.naturalSize.width
            ]
            let assetReaderVideoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
            let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
            if assetReader.canAdd(assetReaderVideoOutput) {
                assetReader.add(assetReaderVideoOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -4, userInfo: [NSLocalizedDescriptionKey: "Couldn't add video output reader"])))
                return
            }
            if assetReader.canAdd(assetReaderAudioOutput) {
                assetReader.add(assetReaderAudioOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -5, userInfo: [NSLocalizedDescriptionKey: "Couldn't add audio output reader"])))
                return
            }
            let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
            let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
            videoInput.transform = videoTrack.preferredTransform
            assetWriter.shouldOptimizeForNetworkUse = true
            assetWriter.add(videoInput)
            assetWriter.add(audioInput)
            assetReader.startReading()
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: CMTime.zero)

            let videoInputQueue = DispatchQueue(label: "videoQueue")
            let audioInputQueue = DispatchQueue(label: "audioQueue")
            // Finish writing only after BOTH inputs are done; finishing as soon
            // as the video input completes can cut off in-flight audio.
            let writingGroup = DispatchGroup()

            writingGroup.enter()
            videoInput.requestMediaDataWhenReady(on: videoInputQueue) {
                while videoInput.isReadyForMoreMediaData {
                    if let sample = assetReaderVideoOutput.copyNextSampleBuffer() {
                        videoInput.append(sample)
                    } else {
                        videoInput.markAsFinished()
                        writingGroup.leave()
                        break
                    }
                }
            }

            writingGroup.enter()
            audioInput.requestMediaDataWhenReady(on: audioInputQueue) {
                while audioInput.isReadyForMoreMediaData {
                    if let sample = assetReaderAudioOutput.copyNextSampleBuffer() {
                        audioInput.append(sample)
                    } else {
                        audioInput.markAsFinished()
                        writingGroup.leave()
                        break
                    }
                }
            }

            writingGroup.notify(queue: .main) {
                if assetReader.status == .completed {
                    assetWriter.finishWriting {
                        completion(.success(outputUrl))
                    }
                } else {
                    completion(.failure(assetReader.error ?? NSError(domain: "VideoUploader", code: -6, userInfo: [NSLocalizedDescriptionKey: "Reading failed"])))
                }
            }
        }
    }
}
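The direction suggested by the Converting side-by-side 3D video to multiview HEVC sample seems to be: request MV-HEVC layers in the writer's compression properties and append each frame as a left/right tagged pixel buffer group, instead of flat sample buffers copied from a reader output. A sketch of the writer setup, where the layer/view IDs [0, 1] are assumptions that should really be carried over from the source file:

import AVFoundation
import VideoToolbox

// Sketch: a writer input that requests multiview HEVC output. The layer and
// view IDs are assumed to be [0, 1]; real code should read them from the
// source video's format description.
func makeMultiviewWriterInput(width: Int, height: Int, bitrate: Int)
        -> (AVAssetWriterInput, AVAssetWriterInputTaggedPixelBufferGroupAdaptor) {
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: bitrate,
            kVTCompressionPropertyKey_MVHEVCVideoLayerIDs as String: [0, 1],
            kVTCompressionPropertyKey_MVHEVCViewIDs as String: [0, 1],
            kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs as String: [0, 1]
        ]
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    // Both eyes are appended through this adaptor as one CMTaggedBufferGroup
    // per frame, which is what preserves the stereoscopic relationship.
    let adaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    return (input, adaptor)
}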
Hello, I've noticed that my server-hosted video doesn't play on iOS mobile devices when it is larger than 19 MB. Is there a maximum size limit (in MB) for video served to the HTML video tag on iOS mobile devices?
I am using a VideoToolbox VTCompressionSession to encode frames in H.264 format, which I then send through a WebSocket to a browser. The received frames are decoded there and the output is rendered on the website. With some encoders, the video is always rendered with a four-frame latency.
How frames are sent to the server:
start>------------ f1 ------------ f2 ------------ f3 ------------ f4 ------------- f5 ...
How rendering happens:
start>-------------------------------------------------------------------------- f1 ------------ f2 ------------ f3 ------------ f4 ----------- ...
This is sometimes a two-frame latency and sometimes a sixteen-frame latency, so usability is affected.
I'm using this configuration in VideoToolbox's VTCompressionSession:
kVTCompressionPropertyKey_AverageBitRate = 3MB
kVTCompressionPropertyKey_ExpectedFrameRate = 24
kVTCompressionPropertyKey_RealTime = true
kVTCompressionPropertyKey_ProfileLevel = kVTProfileLevel_H264_High_AutoLevel
kVTCompressionPropertyKey_AllowFrameReordering = false
kVTCompressionPropertyKey_MaxKeyFrameInterval = 1000
With the same configuration, I am able to achieve 1 in / 1 out with com.apple.videotoolbox.videoencoder.h264.gva.
The issue reproduces with the encoder com.apple.videotoolbox.videoencoder.ave.avc.
I'm not sure if it's encoder-specific. I have also seen differences in the VUI parameters between the encoded output of the two encoders.
I want to know if there is something I can do in the encoder configuration, or another API provided by VideoToolbox, to ensure that frames are decoded and rendered at the same rate by the decoder. How I apply these settings (plus one property I'm considering) is sketched below.
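This is the configuration above applied in one call; adding kVTCompressionPropertyKey_MaxFrameDelayCount is my assumption that capping how many frames the encoder may hold back could help:

import VideoToolbox

// Sketch: the configuration from the post, applied in one call. The 3 Mbps
// value is my reading of "3MB"; MaxFrameDelayCount = 0 is an experiment to
// stop the encoder from queueing frames before emitting output.
func configureSession(_ session: VTCompressionSession) {
    let properties: [CFString: Any] = [
        kVTCompressionPropertyKey_AverageBitRate: 3_000_000,
        kVTCompressionPropertyKey_ExpectedFrameRate: 24,
        kVTCompressionPropertyKey_RealTime: kCFBooleanTrue as Any,
        kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_High_AutoLevel,
        kVTCompressionPropertyKey_AllowFrameReordering: kCFBooleanFalse as Any,
        kVTCompressionPropertyKey_MaxKeyFrameInterval: 1000,
        kVTCompressionPropertyKey_MaxFrameDelayCount: 0
    ]
    let status = VTSessionSetProperties(session, propertyDictionary: properties as CFDictionary)
    if status != noErr { print("VTSessionSetProperties failed: \(status)") }
}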
Thanks in Advance....