I film and edit content using my iPhone and Premiere Pro. I've been doing this since 2017, so I'd like to think I know my way around an iPhone and Premiere Pro well enough to get a finished product to my client.
Before I send content to clients, I AirDrop it to my phone to make sure the quality is up to par. On this most recent project, I've had issues AirDropping it to myself. Each time I do so, my iPhone asks which third-party app I would like to open the video in, rather than automatically opening or saving the video to my photo library.
I will list the specs below:
Filmed on an iPhone 15 Plus running iOS 17.5.1 (currently the most up-to-date software version)
Filmed in 4K at 60 FPS
I have ample storage space on my phone, and the video file size is 220 MB
Premiere Pro Export Settings: Video Settings - H.264, Field Order: Progressive, Bit Rate Encoding: CBR
I will say that I purchased a transitions and burns bundle and used it for the first time on this project. All of the materials are in .mp4 format and the blend mode was set to Overlay. Nothing out of the ordinary.
I figured it wouldn't be a problem since my client would just be downloading it via Dropbox, but there was an issue there as well. My client received an error message saying, "Sorry, this type of video cannot be saved to this device".
My roundabout workaround was to take the exported video, drop it into a separate Premiere Pro project, and export it with the Match Source - Adaptive High Bitrate preset. I was then able to AirDrop it to myself, it saved to my photo library, and my client was able to download it without receiving that error message.
If there is an explanation as to why I am having this issue and how I can avoid it, I would really appreciate it as I have never had this problem in the past.
Video
Dive into the world of video on Apple platforms, exploring ways to integrate video functionalities within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another.
I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.
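Roughly, the experiment was along these lines (a simplified sketch rather than the exact code; the images array, 30 fps timing, and pixel-buffer drawing are placeholder details):
import AVFoundation
import CoreGraphics

func writeImagesAsHEVCWithAlpha(_ images: [CGImage], to url: URL) throws {
    let width = images[0].width, height = images[0].height
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevcWithAlpha,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, image) in images.enumerated() {
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
        guard let buffer = pixelBuffer else { continue }
        CVPixelBufferLockBaseAddress(buffer, [])
        // Draw the CGImage (including its alpha channel) into the pixel buffer.
        let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                width: width, height: height, bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                    | CGBitmapInfo.byteOrder32Little.rawValue)
        context?.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        CVPixelBufferUnlockBaseAddress(buffer, [])
        // One image per 1/30 s of video.
        adaptor.append(buffer, withPresentationTime: CMTime(value: CMTimeValue(index), timescale: 30))
    }
    input.markAsFinished()
    writer.finishWriting { }
}
Since no quality or bit-rate keys are set here, the encoder is free to quantize aggressively, which is presumably where the artifacts come from.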
Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage.
Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
I see that Quick Look's PreviewApplication.open has the ability to show videos in an immersive view, similar to the Photos app. So I assume there should be a control or configuration for VideoPlayer/AVPlayerViewController that would allow this.
How do you add this immersive presentation to VideoPlayer?
If it is not possible: FB13886809
Hi,
I’m developing an app that uses SharePlay. Specifically, I’m using ShareLink in my SwiftUI-based app so that when two devices come close, it starts SharePlay via AirDrop, just like NameDrop works (the animation is super cool, btw).
However, I’ve noticed that SharePlay doesn’t start reliably under the following conditions:
Do both devices need to be signed in with different Apple IDs? I wish it worked with the same Apple ID.
When both devices are running my app, the sharing does not seem to start; maybe both of them are trying to be the host app?
When I try to demo this NameDrop-like transaction via Zoom, it usually doesn’t work; maybe because the cable is connected to the Lightning port? Does a Mac app (in my case, Zoom or even QuickTime) capturing the device's screen make a successful SharePlay transaction less likely?
Thanks!
Hello,
I have converted UIImage to CVPixelBuffer. I am creating a video writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more.
However, I need to add 30 CVPixelBuffers per second because the video, to work on social media, must be 30 frames per second.
The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error.
The error is something like "Operation cannot be completed".
Could you give me an example of a loop that adds 30 CVPixelBuffers per second to a video that is currently being written?
Example:
var frameIndex: Int64 = 0
// The provider may return the same CVPixelBuffer repeatedly to hold a frame on screen.
while let buffer = videoProvider.getNextFrame() {
    // Wait until the writer input can accept more data.
    while !videoInput.isReadyForMoreMediaData {
        Thread.sleep(forTimeInterval: 0.01)
    }
    // Advance the presentation time by 1/30 s per appended frame (30 fps).
    adaptor.append(buffer, withPresentationTime: CMTime(value: frameIndex, timescale: 30))
    frameIndex += 1
}
videoInput.markAsFinished()
I await your response.
When an audio file's magic number is 49443302 instead of 49443303 (that is, an ID3v2.2 header rather than ID3v2.3), AVAudioPlayer's duration property returns the wrong value.
It is actually caused by the engiTunSMPB comment, but I want to know why it only happens with ID3 version 2.2 (49443302).
Example: the only difference between the two MP3 files is the magic number,
and checking the duration returns this result.
Source code is below:
ContentView.swift
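(The attached ContentView.swift isn't reproduced above. A minimal sketch of the kind of check involved, with the file URL as a placeholder:)
import AVFoundation

func reportDuration(of url: URL) throws {
    // Bytes 0-2 of the file are "ID3" (0x49 0x44 0x33); byte 3 is the major version
    // (0x02 for ID3v2.2, 0x03 for ID3v2.3).
    let header = try Data(contentsOf: url)
    let id3MajorVersion = header.count > 3 ? header[3] : 0
    let player = try AVAudioPlayer(contentsOf: url)
    print("ID3 v2.\(id3MajorVersion): AVAudioPlayer duration = \(player.duration)")
}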
I have an app that displays overlays on top of an AVCaptureVideoPreviewLayer (basically AR without ARKit), and my users have repeatedly requested a button that will allow them to capture a screenshot of both the video and the overlays and surrounding UI with a single tap. However, I cannot find a way to actually take such a screenshot.
I have tried the usual methods of rendering views to images, such as calling drawViewHierarchyInRect on my top level view or calling renderInContext on the same view's layer. These all work perfectly to capture the overlays and the surrounding UI elements, but there is nothing but black where the video preview's contents should be.
snapshotViewAfterScreenUpdates: does capture exactly what I want, but snapshot views cannot be written to an image. From what I understand that's an intentional security decision by Apple.
I've considered using ReplayKit to take a very short screen recording and then using an AVAssetImageGenerator to grab a frame from that video, but I don't think that's how those APIs were intended to be used and it's an additional permission to request from the user. I would really rather not do this if there is any alternative (and I'm not even sure it would work).
Is there any reasonable method to render a view hierarchy to an image in such a way as to capture the contents of any video preview layers found within that hierarchy?
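One possible fallback, sketched here under assumptions (a latestFrame UIImage kept from an AVCaptureVideoDataOutput, and an overlayContainerView that sits above the preview layer and contains only the overlays and UI), is to composite the most recent camera frame with a render of the overlay hierarchy rather than snapshotting the preview layer itself:
import UIKit

func compositeScreenshot(latestFrame: UIImage, overlayContainerView: UIView) -> UIImage {
    let bounds = overlayContainerView.bounds
    let renderer = UIGraphicsImageRenderer(bounds: bounds)
    return renderer.image { _ in
        // Draw the most recent camera frame first (aspect-fill handling omitted).
        latestFrame.draw(in: bounds)
        // Then render the overlay/UI hierarchy on top of it.
        overlayContainerView.drawHierarchy(in: bounds, afterScreenUpdates: false)
    }
}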
I made a CameraExtension and installed it via OSSystemExtensionRequest.
I got the success callback. I uninstalled the old version of my CameraExtension and installed the new version.
The "systemextensionsctl list" command shows "[activated enabled]" for my new version.
But no daemon process for my CameraExtension is running. I need to reboot the OS to start the daemon process. This issue is new in macOS Sonoma 14.5; I did not see it on 14.4.x.
Hi, I am working on an app that is very similar to TikTok in terms of video experience. There is an infinite scroll feed of videos, and I am using HLS URLs as the video source.
My requirement is to cache the initial few seconds of each video on the disk while the video is playing. The next time a user views the video, it should play the initial few seconds from the cache, with the subsequent chunks coming from the network. Additionally, when there is no network connection, the video should still play the initial few seconds from the cache.
I was able to achieve this with MP4 using AVAssetResourceLoaderDelegate, but the same approach is not possible with HLS.
What are some other ways through which I can implement this feature?
Thanks.
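One direction that may be worth evaluating (a rough sketch under assumptions, not a verified solution): AVAssetDownloadURLSession can persist HLS media to disk, and cancelling the download task after a few seconds could keep only the leading segments; the session identifier and title strings below are placeholders.
import AVFoundation

final class HLSPrefetcher: NSObject, AVAssetDownloadDelegate {
    private lazy var session: AVAssetDownloadURLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "hls-prefetch")
        return AVAssetDownloadURLSession(configuration: config,
                                         assetDownloadDelegate: self,
                                         delegateQueue: .main)
    }()

    func prefetch(_ url: URL) {
        let asset = AVURLAsset(url: url)
        let task = session.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "prefetch",
                                                 assetArtworkData: nil,
                                                 options: nil)
        task?.resume()
        // A timer or loadedTimeRanges observation could cancel the task after a few
        // seconds so only the beginning of the stream is kept on disk.
    }

    func urlSession(_ session: URLSession, assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist `location` and map it back to the stream URL for later playback.
    }
}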
Hello everyone, I have been receiving this same crash report for the past month whenever I try to export a Final Cut Pro project. The export will get to about 88% completion, then the application crashes and I get the attached report. Any leads on how to fix this would be greatly appreciated! Thank you.
-Lauren
I'm trying to cast the screen from an iOS device to an Android device.
I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression.
While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android.
Data transmission over the TCP socket seems to be functioning correctly.
My question is:
Is there a way to ensure a common container format for H.264 compression and decompression across iOS and Android platforms?
Here's a breakdown of the iOS sender details:
Device: iPhone 13 mini running iOS 17
Development Environment: Xcode 15 with a minimum deployment target of iOS 16
Screen Capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
Video Compression: VideoToolbox for H.264 compression
Compression Properties:
kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
kVTCompressionPropertyKey_RealTime: true (real-time encoding)
kVTCompressionPropertyKey_Quality: 1 (the maximum on this key's 0.0-1.0 quality scale)
NAL Unit Handling: Custom header is added to NAL units
Android Receiver Details:
Device: RedMi 7A running Android 10
Video Decoding: MediaCodec API for receiving and decoding the H.264 stream
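For context on the container question: VideoToolbox emits AVCC-style (length-prefixed) NAL units in the CMSampleBuffer, while an Android MediaCodec decoder typically expects an Annex B byte stream with start codes and in-band SPS/PPS. A rough conversion sketch (illustrative only, not the custom header code from this post; error handling is minimal):
import CoreMedia

func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }

    let startCode: [UInt8] = [0x00, 0x00, 0x00, 0x01]
    var output = Data()

    // Prepend the SPS/PPS parameter sets so the receiving decoder can configure itself.
    var parameterSetCount = 0
    guard CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        formatDesc, parameterSetIndex: 0, parameterSetPointerOut: nil,
        parameterSetSizeOut: nil, parameterSetCountOut: &parameterSetCount,
        nalUnitHeaderLengthOut: nil) == noErr else { return nil }
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        _ = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            formatDesc, parameterSetIndex: index, parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size, parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        if let pointer {
            output.append(contentsOf: startCode)
            output.append(pointer, count: size)
        }
    }

    // Walk the length-prefixed NAL units and replace each 4-byte length with a start code.
    var totalLength = 0
    var rawPointer: UnsafeMutablePointer<CChar>?
    guard CMBlockBufferGetDataPointer(dataBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                      totalLengthOut: &totalLength,
                                      dataPointerOut: &rawPointer) == noErr,
          let base = rawPointer else { return nil }
    var offset = 0
    while offset + 4 <= totalLength {
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = CFSwapInt32BigToHost(nalLength)
        output.append(contentsOf: startCode)
        base.withMemoryRebound(to: UInt8.self, capacity: totalLength) { bytes in
            output.append(bytes + offset + 4, count: Int(nalLength))
        }
        offset += 4 + Int(nalLength)
    }
    return output
}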
I have generated FCPXML, but I can't figure out the issue:
<?xml version="1.0"?>
<fcpxml version="1.11">
<resources>
<format id="r1" name="FFVideoFormat3840x2160p2997" frameDuration="1001/30000s" width="3840" height="2160" colorSpace="1-1-1 (Rec. 709)"/>
<asset id="video0" name="11a(1-5).mp4" start="0s" hasVideo="1" videoSources="1" duration="6.81s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/11a(1-5).mp4"/>
</asset>
<asset id="video1" name="12(4)r8 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="9.94s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/12(4)r8 mute.mp4"/>
</asset>
<asset id="video2" name="13 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="6.51s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13 mute.mp4"/>
</asset>
<asset id="video3" name="13x (8,14,24,29,38).mp4" start="0s" hasVideo="1" videoSources="1" duration="45.55s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13x (8,14,24,29,38).mp4"/>
</asset>
</resources>
<library>
<event name="Untitled">
<project name="Untitled Project" uid="28B2D4F3-05C4-44E7-8D0B-70A326135EDD" modDate="2024-04-17 15:44:26 -0400">
<sequence format="r1" duration="4802798/30000s" tcStart="0s" tcFormat="NDF" audioLayout="stereo" audioRate="48k">
<spine>
<asset-clip ref="video0" offset="0/10000s" name="11a(1-5).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
<asset-clip ref="video1" offset="12119/10000s" name="12(4)r8 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
<asset-clip ref="video2" offset="22784/10000s" name="13 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
<asset-clip ref="video3" offset="34544/10000s" name="13x (8,14,24,29,38).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
</spine>
</sequence>
</project>
</event>
</library>
</fcpxml>
Any ideas?
I am a bit confused on whether certain Video Toolbox (VT) encoders support hardware acceleration or not.
When I query the list of VT encoders (VTCopyVideoEncoderList(nil, &encoderList)) on an iPhone 14 Pro device, for the avc1 (AVC / H.264) and hevc1 (HEVC / H.265) encoders the kVTVideoEncoderList_IsHardwareAccelerated flag is not there, which, based on the documentation in VTVideoEncoderList.h, means that the encoders do not support hardware acceleration:
optional. CFBoolean. If present and set to kCFBooleanTrue, indicates that the encoder is hardware accelerated.
In fact, no encoders from this list return this flag as true, and most of them do not include the flag in their dictionaries at all.
On the other hand, when I create a compression session using VTCompressionSessionCreate() and pass kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as true in the encoder specification, then query kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder using the following code, I get a CFBoolean value of true for both the H.264 and H.265 encoders.
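(The original snippet isn't included in the post; the query is roughly of this shape, where compressionSession stands for the previously created VTCompressionSession:)
import VideoToolbox

var value: CFTypeRef?
let status = VTSessionCopyProperty(
    compressionSession,  // assumed, previously created VTCompressionSession
    key: kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
    allocator: kCFAllocatorDefault,
    valueOut: &value)
if status == noErr, let isAccelerated = value as? NSNumber {
    print("UsingHardwareAcceleratedVideoEncoder: \(isAccelerated.boolValue)")
}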
In fact, I get a true value (for both of the aforementioned encoders) even if I don't specify the kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder during the creation of the compression session (note here that this flag was introduced in iOS 17.4 ^1).
So the question is: Are those encoders actually hardware accelerated on my device, and if so, why isn't that reflected on the VTCopyVideoEncoderList() call?
I have been seeing some crash reports for my app on some devices (not all of them). The crash occurs while converting a CVPixelBuffer captured from Video to a JPG using VTCreateCGImageFromCVPixelBuffer from VideoToolBox. I have not been able to reproduce the crash on local devices, even under adverse memory conditions (many apps running in the background).
The field crash reports show that VTCreateCGImageFromCVPixelBuffer does the conversion in another thread and that thread crashed at call to vConvert_420Yp8_CbCr8ToARGB8888_vec.
Any suggestions on how to debug this further would be helpful.
extension Entity {
func addPanoramicImage(for media: WRMedia) {
let subscription = TextureResource.loadAsync(named:"image_20240425_201630").sink(
receiveCompletion: {
switch $0 {
case .finished: break
case .failure(let error): assertionFailure("\(error)")
}
},
receiveValue: { [weak self] texture in
guard let self = self else { return }
var material = UnlitMaterial()
material.color = .init(texture: .init(texture))
self.components.set(ModelComponent(
mesh: .generateSphere(radius: 1E3),
materials: [material]
))
self.scale *= .init(x: -1, y: 1, z: 1)
self.transform.translation += SIMD3(0.0, -1, 0.0)
}
)
components.set(Entity.WRSubscribeComponent(subscription: subscription))
}
func updateRotation(for media: WRMedia) {
let angle = Angle.degrees( 0.0)
let rotation = simd_quatf(angle: Float(angle.radians), axis: SIMD3<Float>(0, 0.0, 0))
self.transform.rotation = rotation
}
struct WRSubscribeComponent: Component {
var subscription: AnyCancellable
}
}
case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
I have an AVPlayer that loads the video and places it on a screen ModelEntity in the immersive view using a VideoMaterial. This also makes the video untappable, as it is a VideoMaterial.
Here's the code for the same:
let screenModelEntity = model.garageScreenEntity as! ModelEntity
let modelEntityMesh = screenModelEntity.model!.mesh
let url = Bundle.main.url(forResource: "<URL>",
withExtension: "mp4")!
let asset = AVURLAsset(url: url)
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer()
let material = VideoMaterial(avPlayer: player)
screenModelEntity.components[ModelComponent.self] = .init(mesh: modelEntityMesh, materials: [material])
player.replaceCurrentItem(with: playerItem)
return player
I was able to load and play the video. However, I cannot figure out how to show the player controls (AVPlayerViewController) to the user, similar to the DestinationVideo sample app.
How can I add the video player controls in this case?
I am using a VideoToolbox VTCompressionSession to encode frames in H.264 format, which I then send through a WebSocket to a browser. The received frames are decoded and the output is rendered on the website. With some encoders, the video is always rendered with a four-frame latency.
How frames are sent to the server:
start>------------ f1 ------------ f2 ------------ f3 ------------ f4 ------------- f5 ...
How rendering happens:
start>-------------------------------------------------------------------------- f1 ------------ f2 ------------ f3 ------------ f4 ----------- ...
This sometimes becomes a two-frame latency and sometimes a sixteen-frame latency, so usability is affected.
I'm using this configuration in VideoToolbox's VTCompressionSession:
kVTCompressionPropertyKey_AverageBitRate=3MB
kVTCompressionPropertyKey_ExpectedFrameRate=24
kVTCompressionPropertyKey_RealTime=true
kVTCompressionPropertyKey_ProfileLevel=kVTProfileLevel_H264_High_AutoLevel
kVTCompressionPropertyKey_AllowFrameReordering = false
kVTCompressionPropertyKey_MaxKeyFrameInterval=1000
With the same configuration, I am able to achieve 1 in / 1 out with com.apple.videotoolbox.videoencoder.h264.gva.
This issue is reproducible with the encoder com.apple.videotoolbox.videoencoder.ave.avc.
I am not sure if it is encoder-specific. I have also seen that there are differences in the VUI parameters between the encoded outputs of the two encoders.
I want to know if there is something I could do about this through the encoder configuration, or another API provided by VideoToolbox, to ensure that frames are decoded and rendered by the decoder without this extra delay.
Thanks in advance.
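Two VideoToolbox knobs that may be worth experimenting with for this (a sketch, not a guaranteed fix, and whether the ave.avc encoder honors them is unclear): requesting low-latency rate control in the encoder specification, and capping kVTCompressionPropertyKey_MaxFrameDelayCount. The 1920x1080 size below is an assumption.
import VideoToolbox

var session: VTCompressionSession?
let encoderSpec = [
    // Requests the low-latency rate-control path where available.
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl as String: true
] as CFDictionary
VTCompressionSessionCreate(allocator: nil, width: 1920, height: 1080,
                           codecType: kCMVideoCodecType_H264,
                           encoderSpecification: encoderSpec,
                           imageBufferAttributes: nil, compressedDataAllocator: nil,
                           outputCallback: nil, refcon: nil,
                           compressionSessionOut: &session)
if let session {
    // Caps how many frames the encoder may hold before it must emit output
    // (0 is the most aggressive setting).
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxFrameDelayCount,
                         value: NSNumber(value: 0))
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering,
                         value: kCFBooleanFalse)
}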
Hello, I've noticed that a video hosted on my server doesn't work on iOS mobile devices when it is larger than 19 MB. What is the maximum size limit (in MB) for the HTML video tag on iOS mobile devices?
So I've been trying for weeks now to implement a compression mechanism into my app project that compresses MV-HEVC video files in-app without stripping videos of their 3D properties, but every single implementation I have tried has either stripped the encoded MV-HEVC video file of its 3D properties (making the video monoscopic) or crashed with a fatal error. I've read the Reading multiview 3D video files and Converting side-by-side 3D video to multiview HEVC documentation, but I wasn't able to come up with anything useful myself.
My question therefore is: How do you go about compressing/encoding an MV-HEVC video file in-app whilst preserving the stereoscopic/3D properties of that MV-HEVC video file? Below is the best implementation I was able to come up with (which simply compresses uploaded MV-HEVC videos with an arbitrary bit rate). With this implementation (my compressVideo function), the MV-HEVC files that go through it are compressed fine, but the final result is the loss of that MV-HEVC video file's stereoscopic/3D properties.
If anyone could point me in the right direction with anything it would be greatly, greatly appreciated.
My current implementation (that strips MV-HEVC videos of their stereoscopic/3D properties):
static func compressVideo(sourceUrl: URL, bitrate: Int, completion: @escaping (Result<URL, Error>) -> Void) {
let asset = AVAsset(url: sourceUrl)
asset.loadTracks(withMediaType: .video) { videoTracks, videoError in
guard let videoTrack = videoTracks?.first, videoError == nil else {
completion(.failure(videoError ?? NSError(domain: "VideoUploader", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to load video track"])))
return
}
asset.loadTracks(withMediaType: .audio) { audioTracks, audioError in
guard let audioTrack = audioTracks?.first, audioError == nil else {
completion(.failure(audioError ?? NSError(domain: "VideoUploader", code: -2, userInfo: [NSLocalizedDescriptionKey: "Failed to load audio track"])))
return
}
let outputUrl = sourceUrl.deletingLastPathComponent().appendingPathComponent(UUID().uuidString).appendingPathExtension("mov")
guard let assetReader = try? AVAssetReader(asset: asset),
let assetWriter = try? AVAssetWriter(outputURL: outputUrl, fileType: .mov) else {
completion(.failure(NSError(domain: "VideoUploader", code: -3, userInfo: [NSLocalizedDescriptionKey: "AssetReader/Writer initialization failed"])))
return
}
let videoReaderSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB]
let videoSettings: [String: Any] = [
AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: bitrate],
AVVideoCodecKey: AVVideoCodecType.hevc,
AVVideoHeightKey: videoTrack.naturalSize.height,
AVVideoWidthKey: videoTrack.naturalSize.width
]
let assetReaderVideoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
if assetReader.canAdd(assetReaderVideoOutput) {
assetReader.add(assetReaderVideoOutput)
} else {
completion(.failure(NSError(domain: "VideoUploader", code: -4, userInfo: [NSLocalizedDescriptionKey: "Couldn't add video output reader"])))
return
}
if assetReader.canAdd(assetReaderAudioOutput) {
assetReader.add(assetReaderAudioOutput)
} else {
completion(.failure(NSError(domain: "VideoUploader", code: -5, userInfo: [NSLocalizedDescriptionKey: "Couldn't add audio output reader"])))
return
}
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
videoInput.transform = videoTrack.preferredTransform
assetWriter.shouldOptimizeForNetworkUse = true
assetWriter.add(videoInput)
assetWriter.add(audioInput)
assetReader.startReading()
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: CMTime.zero)
let videoInputQueue = DispatchQueue(label: "videoQueue")
let audioInputQueue = DispatchQueue(label: "audioQueue")
videoInput.requestMediaDataWhenReady(on: videoInputQueue) {
while videoInput.isReadyForMoreMediaData {
if let sample = assetReaderVideoOutput.copyNextSampleBuffer() {
videoInput.append(sample)
} else {
videoInput.markAsFinished()
if assetReader.status == .completed {
assetWriter.finishWriting {
completion(.success(outputUrl))
}
}
break
}
}
}
audioInput.requestMediaDataWhenReady(on: audioInputQueue) {
while audioInput.isReadyForMoreMediaData {
if let sample = assetReaderAudioOutput.copyNextSampleBuffer() {
audioInput.append(sample)
} else {
audioInput.markAsFinished()
break
}
}
}
}
}
}
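From the Converting side-by-side 3D video to multiview HEVC article mentioned above, the piece this implementation appears to be missing is declaring the two video layers/views to the encoder, which means dropping down to a VTCompressionSession rather than plain AVAssetWriter output settings. A sketch of just that configuration step (assumed per-eye size and bitrate; the frame-delivery side, where each eye is appended as part of a tagged buffer group, is omitted):
import VideoToolbox

var session: VTCompressionSession?
VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                           width: 1920, height: 1080,   // assumed per-eye size
                           codecType: kCMVideoCodecType_HEVC,
                           encoderSpecification: nil,
                           imageBufferAttributes: nil, compressedDataAllocator: nil,
                           outputCallback: nil, refcon: nil,
                           compressionSessionOut: &session)
if let session {
    // Two layers/views (left and right eye) mark the output as multiview HEVC.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs,
                         value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCViewIDs,
                         value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs,
                         value: [0, 1] as CFArray)
    // Arbitrary target bitrate for the compression step.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate,
                         value: NSNumber(value: 10_000_000))
}
On the AVAssetWriter side, AVAssetWriterInputTaggedPixelBufferGroupAdaptor seems to play the corresponding role for appending per-eye tagged pixel buffers, but I haven't gotten a full pipeline working yet.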
Hi,
Could you please explain how to use SF Symbols animations in Final Cut? I would greatly appreciate your help.
Thank you!