Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

FCPXML Creation issue...
I have generated FCPXML, but I can't figure out the issue:

<?xml version="1.0"?>
<fcpxml version="1.11">
  <resources>
    <format id="r1" name="FFVideoFormat3840x2160p2997" frameDuration="1001/30000s" width="3840" height="2160" colorSpace="1-1-1 (Rec. 709)"/>
    <asset id="video0" name="11a(1-5).mp4" start="0s" hasVideo="1" videoSources="1" duration="6.81s">
      <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/11a(1-5).mp4"/>
    </asset>
    <asset id="video1" name="12(4)r8 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="9.94s">
      <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/12(4)r8 mute.mp4"/>
    </asset>
    <asset id="video2" name="13 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="6.51s">
      <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13 mute.mp4"/>
    </asset>
    <asset id="video3" name="13x (8,14,24,29,38).mp4" start="0s" hasVideo="1" videoSources="1" duration="45.55s">
      <media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13x (8,14,24,29,38).mp4"/>
    </asset>
  </resources>
  <library>
    <event name="Untitled">
      <project name="Untitled Project" uid="28B2D4F3-05C4-44E7-8D0B-70A326135EDD" modDate="2024-04-17 15:44:26 -0400">
        <sequence format="r1" duration="4802798/30000s" tcStart="0s" tcFormat="NDF" audioLayout="stereo" audioRate="48k">
          <spine>
            <asset-clip ref="video0" offset="0/10000s" name="11a(1-5).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
            <asset-clip ref="video1" offset="12119/10000s" name="12(4)r8 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
            <asset-clip ref="video2" offset="22784/10000s" name="13 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
            <asset-clip ref="video3" offset="34544/10000s" name="13x (8,14,24,29,38).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
          </spine>
        </sequence>
      </project>
    </event>
  </library>
</fcpxml>

Any ideas?
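One detail that stands out in the XML above (an observation, not a confirmed diagnosis): every asset-clip has duration="0/10000s", and zero-length clips may be what FCP is objecting to. If the durations are generated in code, a small helper along these lines (hypothetical, not from the original post) converts a length in seconds into the frame-aligned rational string FCPXML expects:

import CoreMedia

// Hypothetical helper: express a clip length in whole frames as the rational
// duration string FCPXML expects, e.g. 6.81 s at 29.97 fps -> "204204/30000s".
func fcpxmlDuration(seconds: Double,
                    frameDuration: CMTime = CMTime(value: 1001, timescale: 30000)) -> String {
    let frames = (seconds / frameDuration.seconds).rounded()
    let value = Int64(frames) * frameDuration.value
    return "\(value)/\(frameDuration.timescale)s"
}

For example, fcpxmlDuration(seconds: 6.81) yields "204204/30000s" for the 29.97 fps format declared in the resources.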
2
0
819
Oct ’24
FxPlug4.3_FxRemoteWindowAPI_window.frame.origin?
1. In FxPlug 4.3's FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When I create a custom NSWindow in FxPlug, [Window setLevel:NSFloatingWindowLevel] is not executed either.
3. How can I keep my window in front of Final Cut Pro without interfering with the operation of Final Cut Pro?
1
0
491
Oct ’24
FxPlug4.3_NSPanel_setLevel
NSPanel *panel = [[myPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                            styleMask:NSWindowStyleMaskTitled | NSWindowStyleMaskClosable
                                              backing:NSBackingStoreBuffered
                                                defer:NO];
[panel setLevel:NSFloatingWindowLevel]; // has no effect????
[panel makeKeyAndOrderFront:self];

Question: in FxPlug 4.3, setLevel cannot put the panel in front of Final Cut Pro and Motion?
Help ~~~ I haven't found an answer anywhere in the world!
1
0
502
Oct ’24
FxPlug4.3 & Window
1. In the FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When using NSWindow, you cannot set [Window setLevel:NSFloatingWindowLevel].
3. How can I keep the window in front of Final Cut Pro without affecting the normal use of Final Cut Pro?
1
0
416
Oct ’24
fcpxml asset-clip "tcFormat" attribute question
I'm trying to create code to generate an fcpxml file so I can automate Final Cut Pro timeline (project) creation. Here's an xml element that FCP successfully imports (and successfully creates a project/timeline):

<project name="2013-08-09 19_23_07 (id).mov">
  <sequence format="r1">
    <spine>
      <asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="146173027/60000s" duration="871871/60000s" tcFormat="DF" audioRole="dialogue"></asset-clip>
    </spine>
  </sequence>
</project>

The xml element example above was generated by exporting a simple timeline with a single clip. The problem I'm having is that the media asset has timecode, which gives a start time relative to that timecode. When I try to remove the timecode attributes and change the start time to "0s":

<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="0s" duration="871871/60000s" audioRole="dialogue"></asset-clip>

FCP complains with the import error:

2013-08-09 19_23_07 (id).fcpxml Invalid edit with no respective media. (/fcpxml[1]/project[1]/sequence[1]/spine[1]/asset-clip[1])

I guess the question is: does AVAsset provide a way to get the timecode information and the timecode-based start offset, or is there a way to tell FCP to use a default start time independent of timecode?
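Not an authoritative answer, but regarding the AVAsset part of the question: QuickTime/MP4 files carry their source timecode in a separate timecode track, and a sketch along these lines (illustrative only, using the synchronous track APIs for brevity) reads the first timecode sample and converts it into a CMTime start offset that could be written into the start attribute:

import AVFoundation
import CoreMedia

// Illustrative sketch: read the first sample of an asset's timecode track
// (if present) and convert it into a start offset.
func startTimecodeOffset(for asset: AVAsset) throws -> CMTime? {
    guard let timecodeTrack = asset.tracks(withMediaType: .timecode).first else {
        return nil // no timecode track on this asset
    }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: timecodeTrack, outputSettings: nil)
    reader.add(output)
    reader.startReading()

    guard let sample = output.copyNextSampleBuffer(),
          let block = CMSampleBufferGetDataBuffer(sample),
          let description = CMSampleBufferGetFormatDescription(sample) else {
        return nil
    }
    // A 'tmcd' sample payload is a big-endian frame counter.
    var rawFrameNumber: Int32 = 0
    let status = CMBlockBufferCopyDataBytes(block,
                                            atOffset: 0,
                                            dataLength: MemoryLayout<Int32>.size,
                                            destination: &rawFrameNumber)
    guard status == noErr else { return nil }
    let frameNumber = Int32(bigEndian: rawFrameNumber)
    let frameDuration = CMTimeCodeFormatDescriptionGetFrameDuration(description)
    return CMTimeMultiply(frameDuration, multiplier: frameNumber)
}

All names above are illustrative; this is a sketch of one way to recover the timecode-based start, not a drop-in fix for the import error.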
1
0
383
Oct ’24
Using image instead of circles in particle system
Hi all, I'm working on a particle system. I got it to work using drawn circles, and now I want to replace the circles with an image. I'm trying to do so in the Draw section, but I'm not sure that's the right place. Any suggestions on how to connect the image from the BIN to Xcode and replace the particles with the image(s)? Kindly, ty.
1
0
335
Oct ’24
How to Load Stereoscopic Video Using AVFoundation?
I’m currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I’m using the AVFoundation framework, specifically AVURLAsset, but I’m having trouble determining how to correctly load and handle stereoscopic videos. I would like to know how to load these videos properly. Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well... Thank you in advance for your help! Best, Lau
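Not a full answer, but as a hedged starting point (assuming an MV-HEVC file at a local or remote url, and the iOS 17 / visionOS async loading APIs), one way to load the asset with AVURLAsset and confirm it actually carries stereo multiview video is:

import AVFoundation

// Minimal sketch: load an asset and check whether its video track is
// stereo multiview (spatial) content.
func loadSpatialVideo(from url: URL) async throws -> AVURLAsset {
    let asset = AVURLAsset(url: url)
    guard let videoTrack = try await asset.loadTracks(withMediaType: .video).first else {
        throw CocoaError(.fileReadUnknown)
    }
    let characteristics = try await videoTrack.load(.mediaCharacteristics)
    if characteristics.contains(.containsStereoMultiviewVideo) {
        print("Stereo multiview (spatial) video detected")
    } else {
        print("Monoscopic video")
    }
    // Hand the asset to AVPlayer as usual; destinations that support
    // stereo playback decode both views from the same asset.
    return asset
}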
1
0
553
Oct ’24
VideoPlayer crashes in Xcode Preview for macOS app
I think I have the simplest possible Mac app, trying to see if I can have VideoPlayer work in an Xcode Preview. It works in an iOS app project. In a Mac app project it builds and runs. But if I preview in Xcode it crashes. The diagnostic says:

[Remote] Unknown Error: The operation couldn’t be completed. XPC error received on message reply handler
BSServiceConnectionErrorDomain (3):
  NSLocalizedFailureReason: XPC error received on message reply handler
  BSErrorCodeDescription: OperationFailed

The code I'm using is the exact code from the VideoPlayer documentation page. See this link. Any ideas about this XPC error, and how to work around it? I'm using Xcode 16.0 on macOS 14.6.1.
2
0
650
Oct ’24
Spatial video export fails with AVAssetExportSession
We captured a spatial video with an iPhone 15 Pro. When we try to export the video with AVAssetExportSession and AVAssetExportPresetMVHEVC960x960, it always goes to the failed state, and exportSession.error?.localizedDescription yields an "Operation Stopped" error. The code implementation is straightforward; other HEVC files work well. This problem occurs only with the MV-HEVC file.

func exportSpatialVideo(videoFilePath: String, outputUrl: URL) {
    let url: URL? = URL(fileURLWithPath: videoFilePath)
    let asset: AVAsset = AVAsset(url: url!)
    print(asset.description)
    print(asset.tracks.first?.mediaType.rawValue)
    let preset = "AVAssetExportPresetMVHEVC960x960"
    let exportSession: AVAssetExportSession = AVAssetExportSession(asset: asset, presetName: preset)!
    exportSession.outputURL = outputUrl
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.outputFileType = AVFileType.mov
    exportSession.exportAsynchronously(completionHandler: {
        switch exportSession.status {
        case .unknown:
            print("Unknown Error")
        case .waiting:
            print("waiting ...")
        case .exporting:
            print("exporting ...")
        case .completed:
            print("completed.")
        case .failed:
            print("failed. \(String(describing: exportSession.error?.localizedDescription))")
        case .cancelled:
            print("cancelled.")
        @unknown default:
            break
        }
    })
}

Is there any solution for this?
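One hedged diagnostic step (not a confirmed fix): before exporting, ask AVFoundation whether the MV-HEVC preset is compatible with this particular asset and output file type, which at least distinguishes an unsupported source from a configuration problem:

import AVFoundation

// Diagnostic sketch: check preset/asset/file-type compatibility up front.
func checkSpatialExportCompatibility(for asset: AVAsset) {
    AVAssetExportSession.determineCompatibility(
        ofExportPreset: AVAssetExportPresetMVHEVC960x960,
        with: asset,
        outputFileType: .mov
    ) { isCompatible in
        print("MVHEVC960x960 preset compatible: \(isCompatible)")
    }
}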
1
0
456
Oct ’24
Video Hardware Acceleration on Mac
I observe significant performance differences when encoding a video in MP4 format (H264). The code I use is standard (using AVAssetWriter, AVAssetWriterInput...). Here is what I notice when I run the same code on different platforms:
On an iPhone, the video is encoded in 3 seconds (iPhone 13, 14, 15, 16, Pro...).
On a Mac equipped with an M2 Pro, the video is encoded in 50 seconds.
On a Mac equipped with an Intel processor (2.3 GHz 18-core Intel Xeon W), the video is encoded in 2 minutes.
The encoding on an iPhone is very fast thanks to hardware acceleration. However, I don't understand why I don't get similar performance on a Mac with an M2 Pro, which also has a dedicated component for hardware acceleration (the H264 media engine). Is hardware acceleration disabled on a Mac?
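A hedged way to test the hardware-acceleration hypothesis (a diagnostic sketch, separate from the AVAssetWriter pipeline described above): VideoToolbox lets you require a hardware H.264 encoder when creating a compression session, so a quick check on the Mac shows whether one is available at the target resolution:

import VideoToolbox

// Diagnostic sketch (macOS): returns true if a hardware H.264 encoder
// can be created at the given dimensions.
func hardwareH264EncoderAvailable(width: Int32 = 3840, height: Int32 = 2160) -> Bool {
    let specification: [CFString: Any] = [
        kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true
    ]
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: specification as CFDictionary,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session
    )
    if let session { VTCompressionSessionInvalidate(session) }
    return status == noErr
}

If this returns true but AVAssetWriter is still slow, the bottleneck is more likely elsewhere in the pipeline (pixel-buffer format conversions, synchronous appends, and so on).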
0
0
445
Oct ’24
Overlapping Video Frames in RPBroadcastSampleHandler with ReplayKit
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping. I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

            if (oYSize <= 672) {
                // unlock before the early return
                CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                return;
            }

            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);

            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}

Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
0
0
501
Oct ’24
Max HLS segment file size?
The media services used for HLS streaming in an AVPlayer seem to crash if your segments are too large. Anything over 20Mbps seems to cause a crash. I have tried adjusting the segment length to 1 second also and it didn't help. I am remuxing Dolby Vision and HDR video and want to avoid transcoding and losing any metadata. However the segments are too large. Is there a workaround for this? Otherwise it seems AVFoundation is not suited to high bitrate HLS and I should be using MPV or similar.
2
0
644
Nov ’24
AirPlay fails when app is also installed on the receiver
We have a universal iOS/tvOS app that also supports iOS App on Mac. In our AVPlayer-based video player we support AirPlay with AVRouteDetector and AVRoutePickerView. We play HLS streams. When we try to AirPlay from an iOS device to an Apple TV or a Mac that has our app installed, it doesn't work. The receiver is marked as active in the route picker UI but the video doesn't show up on the receiver and playback stops. When our app isn't installed on the receiver device, everything works as expected. Has anyone encountered the same issue? Any solutions available for this?
0
0
570
Nov ’24
AVKit Video Player not working
Hey. I am trying to create a present view with a bunch of media (images/videos). Right now I am using a ZStack to render each media item and change its opacity based on the index selected using a scrollView. The issue is that sometimes videos don't load in the main slide: the slide is created since the video exists, and the player even shows controls, but it doesn't play anything.

Present View ZStack

ZStack {
    ForEach(presentation.slides.indices, id: \.self) { index in
        if let media = mediaCacheManager.mediaCache[index] {
            if let player = media as? AVPlayer {
                PlayerView(player: player)
                    .aspectRatio(16/10, contentMode: .fit)
                    .frame(width: UIScreen.main.bounds.width * 0.8)
                    .background(Color.gray.opacity(0.2))
                    .clipShape(RoundedRectangle(cornerRadius: 40))
                    .overlay(
                        RoundedRectangle(cornerRadius: 40)
                            .stroke(Color.gray.opacity(0.5), lineWidth: 1)
                    )
                    .onDisappear {
                        player.pause()
                    }
                    .opacity(appModel.currentSlide == index ? 1 : 0)
            } else if let image = media as? Image {
                image
                    .resizable()
                    .scaledToFit()
                    .frame(width: UIScreen.main.bounds.width * 0.8)
                    .background(Color.gray.opacity(0.2))
                    .clipShape(RoundedRectangle(cornerRadius: 40))
                    .overlay(
                        RoundedRectangle(cornerRadius: 40)
                            .stroke(Color.gray.opacity(0.5), lineWidth: 1)
                    )
                    .padding(.vertical, 10)
                    .opacity(appModel.currentSlide == index ? 1 : 0)
            }
        }
    }
}

The PlayerView

public class PlayerUIView: UIView {
    let playerVC = AVPlayerViewController()
    let gravity: AVLayerVideoGravity
    let manageAudio: Bool

    override init(frame: CGRect) {
        self.gravity = .resizeAspectFill
        self.manageAudio = true
        super.init(frame: frame)
    }

    deinit {
        if manageAudio {
            try? AVAudioSession.sharedInstance().setActive(false)
        }
    }

    init(player: AVPlayer?, gravity: AVLayerVideoGravity, manageAudio: Bool = true) {
        self.gravity = gravity
        self.manageAudio = manageAudio
        super.init(frame: .zero)
        guard let player = player else { return }
        self.playerSetup(player: player)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    public override func layoutSubviews() {
        super.layoutSubviews()
        playerVC.view.frame = bounds
        playerVC.view.backgroundColor = .clear
        playerVC.allowsVideoFrameAnalysis = false
    }

    private func playerSetup(player: AVPlayer) {
        playerVC.updatesNowPlayingInfoCenter = true
        playerVC.player = player
        playerVC.showsPlaybackControls = true
        playerVC.view.backgroundColor = .clear
        playerVC.exitsFullScreenWhenPlaybackEnds = true
        playerVC.videoGravity = gravity
        self.addSubview(playerVC.view)
    }
}
0
0
503
Nov ’24
Firewire video OUTPUT to hardware 2024
Looking to output dv video to my JVC SR-VS30 video deck. I used to be able to do this, but with most firewire stuff being deprecated, I'm not sure how to go about this. I found this old developer sample code that seems to do exactly what I'd like. Surely this could be rolled or updated for current macOS? https://developer.apple.com/library/archive/samplecode/SimpleVideoOut/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000809-Intro-DontLinkElementID_2
0
0
340
Nov ’24
[AVPlayer][5G] The buffer duration (preferredForwardBufferDuration) configuration property of AVPlayerItem does not work on a 5G network
I tried configuring the preferredForwardBufferDuration on devices using 4G and Wi-Fi, and in these cases, AVPlayer works correctly according to the configured buffer duration. However, when the device is connected to a 5G network, the configuration value no longer works. For example, if I set preferredForwardBufferDuration to 30 seconds, AVPlayer preloads with a buffer of over 100 seconds. I’m not sure how to resolve this, as it’s causing issues with my system.
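For reference, a minimal sketch of the configuration being described (URL and value are illustrative); the observed behavior suggests the system may be overriding this hint on 5G rather than the property being set incorrectly:

import AVFoundation

// Illustrative configuration matching the description above.
let item = AVPlayerItem(url: URL(string: "https://example.com/stream.m3u8")!)
item.preferredForwardBufferDuration = 30 // request roughly 30 s of forward buffer
let player = AVPlayer(playerItem: item)
player.automaticallyWaitsToMinimizeStalling = true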
0
0
614
Nov ’24
Managing Excessive Memory Usage with AVAssetReader and AVAssetWriter
Hello, I'm a deaf-blind programmer. I'm experiencing memory issues in my app. Essentially, I'm writing a video. In this output video, I get content from two sources. The first source is an already recorded video of 18 seconds (just for testing). It will be shown at the beginning of the output video. The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after adding the 18-second video. So, in the end, the output video should be: 18-second video + array of photos as video images and, for audio, the buffers from AVSpeechSynthesizer.write(). However, my app crashes as soon as I start the first process. I'm using AVAssetWriter to write the video and AVAssetReader to read the video. Below, I'll show the code where I get the CMSampleBuffers. I'd like an example of how to add the 18-second video to the beginning of the output video. It doesn't need to be a big piece of code. Here it is:

// Variables
var audioReaderBuffers = [CMSampleBuffer]()
var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()

// Get CMSampleBuffers of a video asset
if let videoURL = videoURL {
    let videoAsset = AVAsset(url: videoURL)
    Task {
        let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
        let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!

        let reader = try AVAssetReader(asset: videoAsset)

        let videoSettings = [
            kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
            kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
        ] as [String: Any]
        let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)

        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2
        ] as [String: Any]
        let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)

        reader.add(readerVideoOutput)
        reader.add(readerAudioOutput)
        reader.startReading()

        // Video CMSampleBuffers
        while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
            autoreleasepool {
                if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                    let pixBuf = imgBuffer as CVPixelBuffer
                    let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                    videoReaderBuffers.append((frame: pixBuf, time: pTime))
                }
            }
        }
    }
}
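Not the poster's exact pipeline, but a hedged sketch of the general shape that avoids collecting every frame into arrays (which is usually what exhausts memory here): pull samples from the AVAssetReaderTrackOutput and hand them to the AVAssetWriterInput on demand, then switch to the photo frames once the intro video is exhausted. All names are illustrative, and the photo presentation times are assumed to already be offset past the intro's duration:

import AVFoundation
import CoreVideo

// Sketch: write the 18-second intro video first, then the photo frames,
// feeding the writer only when it asks for more data.
func writeIntroThenPhotos(readerOutput: AVAssetReaderTrackOutput,
                          writerInput: AVAssetWriterInput,
                          adaptor: AVAssetWriterInputPixelBufferAdaptor,
                          photoFrames: [(buffer: CVPixelBuffer, time: CMTime)],
                          queue: DispatchQueue) {
    var photoIndex = 0
    var introFinished = false

    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if !introFinished {
                if let sample = readerOutput.copyNextSampleBuffer() {
                    writerInput.append(sample)      // intro video, sample by sample
                } else {
                    introFinished = true            // reader exhausted, switch source
                }
            } else if photoIndex < photoFrames.count {
                let frame = photoFrames[photoIndex]
                adaptor.append(frame.buffer, withPresentationTime: frame.time)
                photoIndex += 1
            } else {
                writerInput.markAsFinished()        // everything written
                return
            }
        }
    }
}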
1
0
523
Nov ’24