Hello, I am trying to get the new iPhone 16 Pro to achieve 4K 120 fps encoding from the default wide-angle camera on the back. We are using Apple's AVFoundation capture API to receive the individual frames from the camera as they are produced, and we get them in this callback:
// this is the main callback function to handle video frames captured
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
We then take these frames as they come in and encode them using VideoToolbox. After they are encoded, they are added to a ring buffer so we can access them later.
The problem is that when we encode these frames on an iPhone 16 Pro, we only reach 80-90 fps instead of 120 fps. We have removed as much processing as we can: we read a few small attributes from each frame as it arrives, encode the frame, and add it to our ring buffer.
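For reference, the encode path is essentially a real-time VTCompressionSession, roughly like the sketch below (simplified and illustrative only; the property values here are assumptions, not necessarily what the real project uses):

import VideoToolbox
import CoreMedia

// Simplified sketch of the encode path (property values are illustrative assumptions).
final class FrameEncoder {
    private var session: VTCompressionSession?

    func makeSession() {
        let status = VTCompressionSessionCreate(allocator: nil,
                                                width: 3840,
                                                height: 2160,
                                                codecType: kCMVideoCodecType_HEVC,
                                                encoderSpecification: nil,
                                                imageBufferAttributes: nil,
                                                compressedDataAllocator: nil,
                                                outputCallback: nil, // using the per-frame output handler below
                                                refcon: nil,
                                                compressionSessionOut: &session)
        guard status == noErr, let session else { return }

        // Hint the encoder that this is a real-time 120 fps stream.
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate, value: NSNumber(value: 120))
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering, value: kCFBooleanFalse)
        VTCompressionSessionPrepareToEncodeFrames(session)
    }

    // Called from captureOutput(_:didOutput:from:) for every camera frame.
    func encode(_ sampleBuffer: CMSampleBuffer, onEncoded: @escaping (CMSampleBuffer) -> Void) {
        guard let session,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        VTCompressionSessionEncodeFrame(session,
                                        imageBuffer: pixelBuffer,
                                        presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
                                        duration: CMSampleBufferGetDuration(sampleBuffer),
                                        frameProperties: nil,
                                        infoFlagsOut: nil) { _, _, encodedBuffer in
            if let encodedBuffer {
                onEncoded(encodedBuffer) // hand off to the ring buffer
            }
        }
    }
}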
I have attached a sample project that is stripped down to the basic task of encoding 4K 120 fps footage. Inside the sample app there is an FPS and PPS display: FPS shows how many frames per second are coming in from the camera, and PPS shows how many frames per second we are processing (encoding).
Link to sample project: https://github.com/jake-fishtech/EncoderPerformance
Thank you for any help or suggestions.
I am creating an app that decodes H.265 elementary streams on iOS.
I use VideoToolbox to decode from H.265 to NV12.
The decoded frames are enqueued on an AVSampleBufferDisplayLayer as CMSampleBuffers.
However, nothing is displayed in the VideoPlayerView. It remains black.
The decoding in VideoToolbox is successful. I confirmed this by saving the NV12 data from the CMSampleBuffer to a file and displaying it with an external tool.
Why is nothing displayed in the VideoPlayerView?
I can provide other source code as well.
//
// ContentView.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//
import SwiftUI
struct ContentView: View {
    var body: some View {
        VStack {
            Text("H.265 Player (temp.h265)")
                .font(.headline)
            VideoPlayerView()
                .frame(width: 360, height: 640) // Adjust or make it responsive for iOS
        }
        .padding()
    }
}

#Preview {
    ContentView()
}
//
// VideoPlayerView.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//
import SwiftUI
import AVFoundation
struct VideoPlayerView: UIViewRepresentable {
    // Return an H265Player as the coordinator, and start playback there.
    func makeCoordinator() -> H265Player {
        H265Player()
    }

    func makeUIView(context: Context) -> UIView {
        let uiView = UIView(frame: .zero)

        // Base layer for attaching sublayers
        uiView.backgroundColor = .black // Screen background color (for iOS)

        // Create the display layer and add it to uiView.layer
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.bounds
        displayLayer.backgroundColor = UIColor.clear.cgColor
        uiView.layer.addSublayer(displayLayer)

        // Start playback
        context.coordinator.startPlayback()

        return uiView
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // Reset the frame of the AVSampleBufferDisplayLayer when the view's size changes.
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.layer.bounds

        // Optionally update the layer's background color, etc.
        uiView.backgroundColor = .black
        displayLayer.backgroundColor = UIColor.clear.cgColor

        // Flush transactions if necessary
        CATransaction.flush()
    }
}
//
// H265Player.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//
import Foundation
import AVFoundation
import CoreMedia
class H265Player: NSObject, VideoDecoderDelegate {
    let displayLayer = AVSampleBufferDisplayLayer()
    private var decoder: H265Decoder?

    override init() {
        super.init()
        // Initial configuration for the display layer
        displayLayer.videoGravity = .resizeAspect
        // Initialize the decoder (delegate = self)
        decoder = H265Decoder(delegate: self)
        // For simple playback, set isBaseline to true
        decoder?.isBaseline = true
    }

    func startPlayback() {
        // Load the file "temp2.h265"
        guard let url = Bundle.main.url(forResource: "temp2", withExtension: "h265") else {
            print("File not found")
            return
        }
        do {
            let data = try Data(contentsOf: url)
            // Set FPS and video size as needed
            let packet = VideoPacket(data: data,
                                     type: .h265,
                                     fps: 30,
                                     videoSize: CGSize(width: 1080, height: 1920))
            // Decode as a single packet
            decoder?.decodeOnePacket(packet)
        } catch {
            print("Failed to load file: \(error)")
        }
    }

    // MARK: - VideoDecoderDelegate
    func decodeOutput(video: CMSampleBuffer) {
        // When decoding is complete, send the output to AVSampleBufferDisplayLayer
        displayLayer.enqueue(video)
    }

    func decodeOutput(error: DecodeError) {
        print("Decoding error: \(error)")
    }
}
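For reference, before a decoded NV12 frame can show up on an AVSampleBufferDisplayLayer it has to arrive as a CMSampleBuffer with a format description and either valid timing or the display-immediately attachment. Below is a minimal sketch of that wrapping step for a decoded CVPixelBuffer; it is illustrative only and is not the actual H265Decoder code (which is not shown above).

import CoreMedia
import CoreVideo

// Illustrative only: wrap a decoded NV12 CVPixelBuffer in a CMSampleBuffer
// that AVSampleBufferDisplayLayer can display. Not the actual H265Decoder code.
func makeDisplayableSampleBuffer(from pixelBuffer: CVPixelBuffer) -> CMSampleBuffer? {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return nil }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .invalid,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    guard let sampleBuffer else { return nil }

    // With no usable timestamps, tell the layer to display the frame immediately.
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true),
       CFArrayGetCount(attachments) > 0 {
        let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
        CFDictionarySetValue(dict,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }
    return sampleBuffer
}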
I have a crash related to playing video in AVPlayerViewController and AVQueuePlayer. I download the video locally from the network and then initialize it using AVAsset and AVPlayerItem. I can't reproduce it locally, but Firebase Crashlytics shows crashes, only for users on iOS 18.4.0 and later, with this trace:
Crashed: com.apple.avkit.playerControllerBackgroundQueue
0 libobjc.A.dylib 0x1458 objc_retain + 16
1 libobjc.A.dylib 0x1458 objc_retain_x0 + 16
2 AVKit 0x12afdc __77-[AVPlayerController currentEnabledAssetTrackForMediaType:completionHandler:]_block_invoke + 108
3 libdispatch.dylib 0x1aac _dispatch_call_block_and_release + 32
4 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
5 libdispatch.dylib 0x6560 _dispatch_continuation_pop + 596
6 libdispatch.dylib 0x5bd4 _dispatch_async_redirect_invoke + 580
7 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
8 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
9 libsystem_pthread.dylib 0x4624 _pthread_wqthread + 232
10 libsystem_pthread.dylib 0x19f8 start_wqthread + 8
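For context, the playback setup is essentially the standard AVAsset / AVPlayerItem / AVQueuePlayer / AVPlayerViewController path, roughly like the sketch below (simplified; the local file URL and presentation details are placeholders, not the app's actual code):

import UIKit
import AVKit
import AVFoundation

// Simplified sketch of the setup described above; names and presentation
// details are placeholders, not the app's actual code.
func presentDownloadedVideo(at localURL: URL, from presenter: UIViewController) {
    let asset = AVURLAsset(url: localURL)
    let item = AVPlayerItem(asset: asset)
    let player = AVQueuePlayer(items: [item])

    let controller = AVPlayerViewController()
    controller.player = player
    presenter.present(controller, animated: true) {
        player.play()
    }
}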
FxPlug is one of Apple’s official SDKs, recently updated to version 4.3.2. In theory, the SDK should guarantee that third parties can build plug-ins that are backward compatible with older versions of Final Cut Pro, Motion, and Compressor.
The FxPlug SDK includes two frameworks that third-party developers like me end up bundling inside our plug-ins: FxPlug.framework and PlugInManager.framework.
Behind the scenes, the SDK relies on PlugInKit, but the FxPlug.framework provides abstractions so that third-parties don't have to handle the intricacies of XPC directly.
The most recent version of FxPlug.framework included with the SDK appears to have been built with an error: its Info.plist shows an LSMinimumSystemVersion entry of 14.6, suggesting the binary may have been compiled and linked with MACOSX_DEPLOYMENT_TARGET accidentally set to 14.6.
The problem: when older versions of Final Cut Pro or Motion load a third-party plug-in (itself built with an appropriate deployment target, macOS 11 or 12 for example) on a system earlier than macOS 14.6, the dynamic linker loads Apple's own FxPlug.framework, and the process crashes immediately:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libobjc.A.dylib 0x7ff81e065955 map_images_nolock + 5399
1 libobjc.A.dylib 0x7ff81e0643d6 map_images + 67
2 dyld 0x10bd551fb invocation function for block in dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 275
3 dyld 0x10bd506c9 dyld4::RuntimeState::withLoadersReadLock(void () block_pointer) + 41
4 dyld 0x10bd550e2 dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 82
5 dyld 0x10bd68d45 dyld4::APIs::_dyld_objc_notify_register(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 79
6 libobjc.A.dylib 0x7ff81e064244 _objc_init + 1279
7 libdispatch.dylib 0x7ff81e01d993 _os_object_init + 13
8 libdispatch.dylib 0x7ff81e02b1b8 libdispatch_init + 311
9 libSystem.B.dylib 0x7ff828fd585f libSystem_initializer + 238
10 dyld 0x10bd5ae4f invocation function for block in dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 182
11 dyld 0x10bd81aad invocation function for block in dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 242
12 dyld 0x10bd78e26 invocation function for block in dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 557
13 dyld 0x10bd47db3 dyld3::MachOFile::forEachLoadCommand(Diagnostics&, void (load_command const*, bool&) block_pointer) const + 129
14 dyld 0x10bd78bb7 dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 179
15 dyld 0x10bd81604 dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 466
16 dyld 0x10bd5ad82 dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 144
17 dyld 0x10bd6165a dyld4::PrebuiltLoader::runInitializers(dyld4::RuntimeState&) const + 30
18 dyld 0x10bd6e76e dyld4::APIs::runAllInitializersForMain() + 38
19 dyld 0x10bd4c38d dyld4::prepare(dyld4::APIs&, dyld3::MachOAnalyzer const*) + 3443
20 dyld 0x10bd4b4e4 start + 388
Can someone at Apple with the right domain expertise confirm that this is the type of crash you would see when a framework built assuming macOS 14.6 or later is loaded into an older environment, for example the Objective-C runtime on macOS 12.x, where it lacks the extra code that would ensure backward compatibility with that earlier runtime?
Movies taken with Android phones store their location metadata (and probably other metadata) in ways that are ignored by Apple's ecosystem (QuickTime Player, Photos.app).
I am considering creating a Spotlight importer so that this metadata becomes available to the system. But I have a couple of questions:
Can a Spotlight importer add new data (like location) to the data that the standard importer already captured? Or would the new importer need to take over the whole data gathering? If so, would macOS allow that?
Would that Spotlight importer be somehow used by e.g. Photos.app and QT Player to capture the location? Or would this end up in Spotlight "knowing" the location but Photos.app ignoring it?
If it ends up being the latter, maybe there is something more broadly useful to build than a Spotlight importer?
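As a side note, a quick way to check what AVFoundation itself reports for one of these files is the sketch below (illustrative only; the file URL is a placeholder):

import AVFoundation

// Illustrative: print whatever location metadata AVFoundation exposes for a movie file.
func printLocationMetadata(for url: URL) async throws {
    let asset = AVURLAsset(url: url)
    let metadata = try await asset.load(.metadata)

    // Common-space location (what QuickTime-savvy apps generally look for).
    let common = AVMetadataItem.metadataItems(from: metadata,
                                              filteredByIdentifier: .commonIdentifierLocation)
    // QuickTime-specific ISO 6709 location string.
    let quickTime = AVMetadataItem.metadataItems(from: metadata,
                                                 filteredByIdentifier: .quickTimeMetadataLocationISO6709)

    for item in common + quickTime {
        let value = try await item.load(.stringValue)
        print(item.identifier?.rawValue ?? "unknown identifier", value ?? "nil")
    }
}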
Starting in iOS 18.4 (and still in the iOS 18.5 beta), AVPlayer seems to freeze when we:
Replace the current AVPlayerItem (replaceCurrentItem(with:) / replaceCurrentItemWithPlayerItem:), and then
Call seek very shortly afterwards (seek(to:) / seekToTime:toleranceBefore:toleranceAfter:).
Subsequent calls to play then have no effect. However, scrubbing to seek afterwards seems to work, and changing the playback rate (i.e. fast-forward) tends to clear the frozen state.
Our primary workflow involves video playback, replacing video to show new clips, and in some cases seeking to specific frames. This appears to occur only while streaming video; all reports indicate that playback of locally downloaded video remains fine.
This same code path has worked without issue on 17.x and 18.3.2 and for years before that.
What is particularly strange is that time observers report that video is still playing or feeding frames. The reported status is .readyToPlay, isPlaybackLikelyToKeepUp is true, and there are no indications of stalling or buffering.
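To make the sequence concrete, the native code path that now freezes is essentially the following (a simplified sketch of the steps above, not our actual player code):

import AVFoundation

// Simplified sketch of the sequence described above: replace the current item,
// seek almost immediately, then play.
func showClip(_ url: URL, at startTime: CMTime, using player: AVPlayer) {
    let item = AVPlayerItem(url: url)
    player.replaceCurrentItem(with: item)

    // Seeking very shortly after the replacement is where the freeze appears on iOS 18.4+.
    player.seek(to: startTime, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
        player.play() // On iOS 18.4 / 18.5 beta this sometimes has no visible effect.
    }
}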
A similar issue affects our web application in Safari. On Sonoma with Safari 17.x there is no issue, but after updating to macOS Sequoia 15.4.1 and Safari 18.4 we observe similar freezing. The same does not occur in Chrome or the other browsers we tested.
The Safari 18.4 release notes contain several "fixed" entries that sound similar to what we are now experiencing:
https://developer.apple.com/documentation/safari-release-notes/safari-18_4-release-notes
"Fixed an issue where playback doesn’t always resume after a seek. (140097993)"
"Fixed playing video generating non-monotonic ‘timeupdate’ events. (142275184) (FB16222910)"
"Fixed websites calling play() during a seek() is allowed by the specification so that the play event is fired even if the seek hasn’t completed. (142517488)"
"Fixed seek not completing for WebM under some circumstances. (143372794)"
"Fixed MediaRecorderPrivateEncoder writing frames out of order. (143956063)"
I’m building a SwiftUI app whose primary job is to play audio. I manage all of the Now Playing metadata and remote commands manually via the shared instances:
MPRemoteCommandCenter.shared()
MPNowPlayingInfoCenter.default().nowPlayingInfo
In certain parts of the app I also need to display videos, but as soon as I attach another AVPlayer, it automatically pushes its own metadata into the Control Center and overwrites my audio info.
What I need: a way to show video inline without ever having that video player update the system’s Now-Playing info (or Control Center).
In my app, I start by configuring the shared audio session
do {
    try AVAudioSession.sharedInstance().setCategory(.playback,
                                                    mode: .default,
                                                    options: [
                                                        .allowAirPlay,
                                                        .allowBluetoothA2DP
                                                    ])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    NSLog("%@", "**** Failed to set up AVAudioSession \(error.localizedDescription)")
}
and then set the MPRemoteCommandCenter commands and MPNowPlayingInfoCenter nowPlayingInfo as mentioned above.
All this works without any issues as long as I only have one AVPlayer in my app. But when I add other AVPlayers to display some videos (and keep the main AVPlayer for the sound) they push undesired updates to MPNowPlayingInfoCenter:
struct VideoCardView: View {
    @State private var player: AVPlayer
    let videoName: String

    init(player: AVPlayer = AVPlayer(), videoName: String) {
        self.player = player
        self.videoName = videoName
        guard let path = Bundle.main.path(forResource: videoName, ofType: nil) else { return }
        let url = URL(fileURLWithPath: path)
        let item = AVPlayerItem(url: url)
        self.player.replaceCurrentItem(with: item)
    }

    var body: some View {
        VideoPlayer(player: player)
            .aspectRatio(contentMode: .fill)
            .onAppear {
                player.isMuted = true
                player.allowsExternalPlayback = false
                player.actionAtItemEnd = .none
                player.play()

                MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
                MPNowPlayingInfoCenter.default().playbackState = .stopped

                NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                                       object: player.currentItem,
                                                       queue: .main) { notification in
                    guard let finishedItem = notification.object as? AVPlayerItem,
                          finishedItem === player.currentItem else { return }
                    player.seek(to: .zero)
                    player.play()
                }
            }
            .onDisappear {
                player.pause()
            }
    }
}
Which is why I tried adding:
MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
MPNowPlayingInfoCenter.default().playbackState = .stopped // or .interrupted, .unknown
But that didn't work.
I also tried making a wrapper around the AVPlayerViewController in order to set updatesNowPlayingInfoCenter to false, but that didn’t work either:
struct CustomAVPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let vc = AVPlayerViewController()
        vc.player = player
        vc.updatesNowPlayingInfoCenter = false
        vc.showsPlaybackControls = false
        return vc
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        controller.player = player
    }
}
Hence any help on how to embed video in SwiftUI without its AVPlayer touching MPNowPlayingInfoCenter would be greatly appreciated.
All this was tested on an actual device with iOS 18.4.1, and built with Xcode 16.2 on macOS 15.5
Hi everyone,
I'm developing a visionOS app for Apple Vision Pro, and I've encountered an issue related to window resizing at runtime when using AVPlayer to play a live HLS stream.
✅ What I'm Trying to Do
Play a live HLS stream (from Wowza) inside my app using AVPlayer.
Support resizing the immersive window using Vision Pro’s built-in runtime scaling gesture.
Stream works fine at default window size when the app launches.
❌ Problem
If I resize the app’s window at runtime (using the Vision Pro pinch-drag gesture), then try to start the stream, it does not play.
Instead, it just shows the "Loading live stream..." state and never proceeds to playback.
This issue only occurs after resizing the window — if I don’t resize, the stream works perfectly every time.
🧪 What I’ve Tried
Verified the HLS URL — it’s working and plays fine in Safari and in the app before resizing.
Set .automaticallyWaitsToMinimizeStalling = false on AVPlayer.
Observed that .status on AVPlayerItem never reaches .readyToPlay after resizing.
Tried to force window size back using UIWindowScene.requestGeometryUpdate(...), but behavior persists.
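For reference, the player setup itself is the standard AVPlayer-with-HLS-URL flow, roughly like the sketch below (illustrative only; the URL, the status logging, and the SwiftUI wiring are placeholders rather than the app's exact code):

import AVFoundation

// Rough sketch of the playback path described above (URL is a placeholder,
// and the real app drives this from its own SwiftUI view model).
final class LiveStreamController {
    let player = AVPlayer()
    private var statusObservation: NSKeyValueObservation?

    func start(urlString: String) {
        guard let url = URL(string: urlString) else { return }
        let item = AVPlayerItem(url: url)
        statusObservation = item.observe(\.status, options: [.new]) { item, _ in
            // After a runtime window resize this never transitions to .readyToPlay.
            print("AVPlayerItem status:", item.status.rawValue)
        }
        player.automaticallyWaitsToMinimizeStalling = false
        player.replaceCurrentItem(with: item)
        player.play()
    }
}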
When using AVSampleBufferDisplayLayer to play H.264 and H.265 video with more than 7 B-frames, frame drops occur. The more B-frames there are, the more noticeable the drops become, for example with 15 B-frames.
Use FFmpeg to transcode a video file with visible timestamps and frame numbers (x264 or x265):
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx264 -x264-params "bframes=15:b-adapt=0" -crf 30 -y x264_bf15.mp4
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx265 -x265-params "bframes=15:b-adapt=0" -crf 30 -y x265_bf15.mp4
Use the demo player from this repository to reproduce the issue: https://github.com/msfrms/CustomPlayer
Frame drops can be observed, and the following log can be found in the device's console:
mediaserverd <<<< IQ-CA >>>> piqca_gmstats_dump: FIQCA(0x1266f4000) recent frames: enqueued: 184, displayed: 138, dropped: 42, flushed: 0, evicted: 3, >16ms late: 2
P.S. I was using an iPhone 11 on iOS 14.6 to reproduce this issue.
May I ask why frame drops occur in this case?
Is there any configuration or API usage change that could help fix the frame drop issue?
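For reference, the enqueue path in question is essentially the standard requestMediaDataWhenReady pattern shown below (a simplified sketch; the linked demo player differs in its parsing, timestamps, and buffering):

import AVFoundation
import CoreMedia

// Simplified sketch of the enqueue pattern in question; `nextSampleBuffer` is a
// placeholder for however the app obtains CMSampleBuffers in decode order.
final class SampleBufferFeeder {
    let displayLayer = AVSampleBufferDisplayLayer()
    private let queue = DispatchQueue(label: "feeder.queue")

    func start(nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
        displayLayer.requestMediaDataWhenReady(on: queue) { [weak self] in
            guard let layer = self?.displayLayer else { return }
            while layer.isReadyForMoreMediaData {
                guard let buffer = nextSampleBuffer() else {
                    layer.stopRequestingMediaData()
                    return
                }
                layer.enqueue(buffer)
            }
        }
    }
}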
Many thanks!
Hi everyone,
We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners.
The core idea is:
– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions with the ability to extend them;
– users can skip a match and move on to the next one.
We have explored WebRTC and Twilio, but unfortunately, they don’t fit our use case.
Question:
What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario?
Has anyone encountered a similar challenge and can recommend which technologies or tools to use?
Thanks in advance!
I'm writing some camera functionality that uses AVCaptureVideoDataOutput.
I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput.
My question is then, if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? For example, does [AVCaptureSession stopRunning] imply a blocking call until all pending frame-callbacks are done?
I have a more practical example below, showing how I am accessing something from the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource.
I have a setup similar to the following:
// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...
// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;
MySampleBufferDelegate *sampleBufferDelegate = [[MySampleBufferDelegate alloc] init];
// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];
AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate
                                          queue:queue];
[captureSession addOutput:captureVideoDataOutput];
[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need manual barrier?
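// (Illustrative addition, not part of the original snippet.)
// -stopRunning blocks until the session stops, but it is not documented to wait
// for frame callbacks that were already dispatched to `queue`. Draining the
// serial delegate queue with a synchronous no-op acts as a manual barrier:
dispatch_sync(queue, ^{ /* all previously enqueued frame callbacks have finished */ });
// After this point it should be safe to destroy frameMetaData.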
And then in MySampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
Hello,
Environment
macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 Beta 9
In a simple AVFoundation video-playback sample, I’m seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification.
I’ve attached a minimal sample below. Please replace videoURL with a valid short video URL.
Repro steps
Tap “Play” to start playback and let the video finish.
The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see Play finished. in the console.
Without relaunching, tap “Play” again. This is where the issue arises.
Observed behavior
On iOS 18 and earlier: The video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and Play finished. appears in the console. The same happens every time you press “Play”.
On iOS 26: Pressing “Play” does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints Play finished. is never called (the callback enclosing that line is not invoked again).
Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests there has been a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn’t find any mention of this in the release notes or API Reference.
Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we’re forced to adjust our logic. If there is a way to achieve the iOS 18–style behavior on iOS 26, I would appreciate guidance.
Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple’s perspective and iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.
import UIKit
import AVFoundation
final class ViewController: UIViewController {
    private let videoURL = URL(string: "https://......mp4")!
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }

    // MARK: - Setup
    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)
        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Action
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }

    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem → AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }

    // MARK: - Actions
    @objc private func didTapPlay() {
        player?.play()
    }

    @objc private func didTapStop() {
        player?.pause()
    }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}
I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
Hi,
Could you please explain how to use SF Symbols animations in Final Cut? I would greatly appreciate your help.
Thank you!
Hello everyone, I have been receiving this same crash report for the past month whenever I try and export a Final Cut Pro project. The FCP video will get to about 88% completion of export, then the application crashes and I get the attached report. Any leads on how to fix this would be greatly appreciated! Thank you.
-Lauren
1. In the FxRemoteWindowAPI protocol in FxPlug 4.3, there is no way to set window.frame.origin.
2. When I use a custom NSWindow, [window setLevel:NSFloatingWindowLevel]; in FxPlug has no effect either.
3. How can I keep my window in front of Final Cut Pro without affecting the use of Final Cut Pro?
NSPanel *panel = [[myPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                            styleMask:NSWindowStyleMaskTitled | NSWindowStyleMaskClosable
                                              backing:NSBackingStoreBuffered
                                                defer:NO];
[panel setLevel:NSFloatingWindowLevel]; // has no effect????
[panel makeKeyAndOrderFront:self];
Question: in FxPlug 4.3, why does setLevel not put the panel in front of Final Cut Pro and Motion?
Help! I haven't been able to find an answer anywhere.
1. In the FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When using a custom NSWindow, [window setLevel:NSFloatingWindowLevel] has no effect.
3. How can I keep the window in front of Final Cut Pro without affecting the normal use of Final Cut Pro?
I'm trying to create code to generate an fcpxml file so I can automate Final Cut Pro timeline (project) creation. Here's an xml element that FCP successfully imports (and successfully creates a project/timeline).
<project name="2013-08-09 19_23_07 (id).mov">
    <sequence format="r1">
        <spine>
            <asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="146173027/60000s" duration="871871/60000s" tcFormat="DF" audioRole="dialogue"></asset-clip>
        </spine>
    </sequence>
</project>
The XML element example above was generated by exporting a simple timeline with a single clip. The problem I'm having is that the media asset has timecode, which gives the clip a start time relative to that timecode. When I try to remove the timecode attributes and change the start time to "0s":
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="0s" duration="871871/60000s" audioRole="dialogue"></asset-clip>
FCP complains with the import error:
2013-08-09 19_23_07 (id).fcpxml Invalid edit with no respective media. (/fcpxml[1]/project[1]/sequence[1]/spine[1]/asset-clip[1])
I guess the question is: does AVAsset provide a way to get the timecode information and the timecode-based start offset, or is there a way to tell FCP to use a default start time independent of timecode?
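From what I can tell, AVAsset does expose a timecode track (AVMediaType.timecode), and something like the rough sketch below should read the start timecode frame number with AVAssetReader (untested illustration; error handling and the frame-number-to-HH:MM:SS:FF conversion are omitted). I would still like confirmation that this is the right approach:

import AVFoundation
import CoreMedia

// Illustrative sketch: read the start timecode frame number from a movie's timecode track.
func readStartTimecodeFrameNumber(from url: URL) async throws -> Int64? {
    let asset = AVURLAsset(url: url)
    guard let timecodeTrack = try await asset.loadTracks(withMediaType: .timecode).first else {
        return nil // No timecode track: the media effectively starts at 0.
    }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: timecodeTrack, outputSettings: nil)
    reader.add(output)
    guard reader.startReading() else { return nil }

    guard let sampleBuffer = output.copyNextSampleBuffer(),
          let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer),
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        return nil
    }

    // 'tmcd' samples store a big-endian frame number (32-bit here; 'tc64' uses 64-bit).
    var rawFrameNumber: Int32 = 0
    guard CMBlockBufferGetDataLength(dataBuffer) >= MemoryLayout<Int32>.size else { return nil }
    CMBlockBufferCopyDataBytes(dataBuffer,
                               atOffset: 0,
                               dataLength: MemoryLayout<Int32>.size,
                               destination: &rawFrameNumber)
    let frameNumber = Int64(Int32(bigEndian: rawFrameNumber))

    // The frame duration needed to convert the frame number into a time
    // comes from the timecode format description.
    let frameDuration = CMTimeCodeFormatDescriptionGetFrameDuration(formatDescription)
    print("Start timecode frame:", frameNumber, "frame duration:", frameDuration)
    return frameNumber
}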
I have an intra-oral video device that's supported by Apple's AVCaptureDevice. I can use the AV classes to connect to the device and get video. However, this device has a button that's used to acquire still images from the video stream. I can't use the IOUSBDeviceInterface to do an asynchronous read, because the Apple driver has the device opened exclusively. How do I go about receiving the button event in this scenario? I know which pipe to read, based on a bus analyzer when I run this on Windows, I just need to know how to access that pipe when the device is opened by another process.
Hi all,
I'm working on a particle system. I got it to work using drawn circles, and now I want to replace the circle with an image. I'm trying to do so in the Draw section, but I'm not sure that's the right place.
Any suggestions for code to:
connect the image from BIN to Xcode, and
replace the particles with the image(s)?
Thank you kindly.