Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

The behavior of AVPlayerItem.didPlayToEndTimeNotification is not as expected in iOS 26.
Hello,

Environment: macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 beta 9

In a simple AVFoundation video-playback sample, I'm seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification. I've attached a minimal sample below. Please replace videoURL with a valid short video URL.

Repro steps:
1. Tap "Play" to start playback and let the video finish. The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see "Play finished." in the console.
2. Without relaunching, tap "Play" again. This is where the issue arises.

Observed behavior:
• On iOS 18 and earlier: The video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and "Play finished." appears in the console. The same happens every time you press "Play".
• On iOS 26: Pressing "Play" does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints "Play finished." is never called (the callback enclosing that line is not invoked again).

Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests a behavioral change in AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn't find any mention of this in the release notes or API reference. Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we're forced to adjust our logic. If there is a way to achieve the iOS 18-style behavior on iOS 26, I would appreciate guidance. Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple's perspective, with the iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.

import UIKit
import AVFoundation

final class ViewController: UIViewController {
    private let videoURL = URL(string: "https://......mp4")!
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }

    // MARK: - Setup
    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)
        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Action
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }

    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem -> AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }

    // MARK: - Actions
    @objc private func didTapPlay() { player?.play() }
    @objc private func didTapStop() { player?.pause() }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}

I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
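Until Apple clarifies the intended semantics, one pattern that sidesteps the difference is to rewind explicitly before every replay, so the item leaves its played-to-end state on both OS versions. A minimal sketch replacing didTapPlay in the sample above (the end-of-item check is an assumption, not Apple's documented contract):

// Hedged workaround sketch: rewind before replaying so the next
// play-through can post didPlayToEndTimeNotification again.
@objc private func didTapPlay() {
    guard let player, let item = player.currentItem else { return }
    if item.currentTime() >= item.duration {
        // Item already played to its end: seek back before playing.
        player.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
            player.play()
        }
    } else {
        player.play()
    }
}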
2
3
672
Sep ’25
Enterprise API passthrough in screen capture not working after visionOS 26 update
We have been using passthrough in screen capture with a broadcast upload extension, which was working in visionOS 2.2, but with visionOS 26 it no longer works. It fails with "Invalid Broadcast session started" a few seconds after the broadcast session starts. Is there a bug filed for this, or is it a known issue?
2
0
374
Oct ’25
Video captured on iPhone 17 Pro / 17 Pro Max turns green in the browser when sent over WebRTC
I'm developing an app that sends video captured on an iPhone to a browser application and displays it on screen. When this app is used on an iPhone 17 Pro or 17 Pro Max, the video displayed on the browser side becomes solid green, or a mostly green, colorful mess. Looking into it, the ultra-wide and telephoto cameras changed from 12 MP to 48 MP on the 17 Pro and 17 Pro Max, so I suspect encoding is failing because of that. Any information at all would be appreciated.

Environment:
• WebRTC library: GoogleWebRTC version 1.1 (installed via CocoaPods)
• Signaling server: AWS Kinesis Video Streams

Devices where the problem occurs:
• Model: iPhone18,1, OS: 26.0
• Model: iPhone18,1, OS: 26.1

Devices where the problem does not occur:
• iPhone17,5 and many earlier models
• Model: iPhone18,1, OS: 26.0
• Model: iPhone18,3, OS: 26.0
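If the new 48 MP ultra-wide/telephoto formats are indeed what trips the encoder, one hedged workaround is to pin the capture device to a conservative format before frames reach the WebRTC pipeline. A minimal sketch, assuming your capturer exposes the underlying AVCaptureDevice (the 1080p cap is an assumption to test, not a confirmed fix):

import AVFoundation

// Sketch: select a format no larger than 1920x1080 so the downstream
// encoder never sees the new high-resolution sensor formats.
func pinToConservativeFormat(_ device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width <= 1920 && dims.height <= 1080
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}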
2
0
157
4w
WKWebView Crashes on iOS During YouTube Playlist Playback
I'm encountering a consistent crash in WebKit when using WKWebView to play a YouTube playlist in my iOS app. Playback starts successfully, but the web process terminates during the second video in the playlist. This only occurs on physical devices, not in the simulator. Here's a simplified Swift example of my setup:

import SwiftUI
import WebKit

struct ContentView: View {
    private let playlistID = "PLig2mjpwQBZnghraUKGhCqc9eAy0UbpDN"

    var body: some View {
        YouTubeWebView(playlistID: playlistID)
            .edgesIgnoringSafeArea(.all)
    }
}

struct YouTubeWebView: UIViewRepresentable {
    let playlistID: String

    func makeUIView(context: Context) -> WKWebView {
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true
        let webView = WKWebView(frame: .zero, configuration: config)
        webView.scrollView.isScrollEnabled = true
        let html = """
        <!doctype html>
        <html>
        <head>
          <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0">
          <style>body,html{height:100%;margin:0;background:#000}iframe{width:100%;height:100%;border:0}</style>
        </head>
        <body>
          <iframe
            src="https://www.youtube-nocookie.com/embed/videoseries?list=\(playlistID)&controls=1&rel=0&playsinline=1&iv_load_policy=3"
            frameborder="0"
            allow="encrypted-media; picture-in-picture; fullscreen"
            webkit-playsinline
            allowfullscreen
          ></iframe>
        </body>
        </html>
        """
        webView.loadHTMLString(html, baseURL: nil)
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

#Preview {
    ContentView()
}

Observed behavior:
• First video plays without issue.
• Web process crashes when the second video in the playlist starts.
• Console logs show WebProcessProxy::didClose and repeated memory status messages.
• Using ProcessAssertion or background activity does not prevent the crash.
• Only occurs on physical devices; simulators do not reproduce the issue.

Questions:
• Is there something I should change or add in my WKWebView setup or HTML/iframe to prevent the crash when playing the second video in a playlist on physical iOS devices?
• Is there an officially supported way to limit memory or prevent WebKit from terminating the web process during multi-video playback?
• Are there recommended patterns for playing YouTube playlists in a WKWebView on iOS without risking crashes?
• Any tips for debugging or configuring WKWebView to make it more stable for continuous playlist playback?

Thanks in advance for any guidance!
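A web-process termination can't be fully prevented from the app side, but a navigation delegate can at least detect it and reload instead of leaving a blank view. A minimal sketch using the documented WKNavigationDelegate callback (the reload-on-terminate policy is my assumption; you would assign it via webView.navigationDelegate = context.coordinator in makeUIView):

import WebKit

final class WebViewCoordinator: NSObject, WKNavigationDelegate {
    // WebKit calls this when the content process dies (e.g., under memory pressure).
    func webViewWebContentProcessDidTerminate(_ webView: WKWebView) {
        // Recover by reloading; the playlist position inside the iframe is lost.
        webView.reload()
    }
}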
2
0
351
Oct ’25
Accessing External Timecode from Blackmagic ProDock in Custom App
Hi everyone, I’m exploring using the iPhone 17 Pro with the Blackmagic ProDock in a custom capture app. The genlock functionality seems accessible via AVExternalSyncDevice and related APIs, which is great. I’m specifically curious about external timecode coming in from the ProDock: • Is there a public way to access the timecode feed in a custom app via AVFoundation or another Apple API? • If so, what is the recommended approach to read or apply that timecode during capture? • Are there any current limitations or entitlements required to access timecode from ProDock in a third-party app? I’m excited to start integrating synchronized capture in my app, and any guidance or sample patterns would be greatly appreciated. Thanks in advance! — [Artem]
2
0
252
Oct ’25
VTLowLatencyFrameInterpolationConfiguration supported dimensions
Are there limits on the supported dimensions for VTLowLatencyFrameInterpolationConfiguration? Querying VTLowLatencyFrameInterpolationConfiguration.maximumDimensions and VTLowLatencyFrameInterpolationConfiguration.minimumDimensions returns nil. When I try the WWDC sample project EnhancingYourAppWithMachineLearningBasedVideoEffects with a 4K video, the statement try frameProcessor.startSession(configuration: configuration) executes, but try await frameProcessor.process(parameters: parameters) throws this error:

Error Domain=VTFrameProcessorErrorDomain Code=-19730 "Processor is not initialized" UserInfo={NSLocalizedDescription=Processor is not initialized}

Also, why is VTLowLatencyFrameInterpolationConfiguration able to run while the app is backgrounded, but VTFrameRateConversionParameters can't (due to GPU usage)?
2
0
217
20h
Control system video effects support for CMIO extension
We're distributing a virtual camera with our app that does not profit in the slightest from automatically applied system video effects both to the video going in (physical camera device) or out (virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and determining active video effects via AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX. To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats I was hoping to find something in the extensions, but wasn't successful so far. Can this be done?
2
0
143
Nov ’25
Attaching color properties to CVPixelBufferRef
I believe this should work:

CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionaryAddValue(attrs, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, attrs, &pixelBuffer);

assert(CFDictionaryGetCount(CVBufferGetAttachments(pixelBuffer, kCVAttachmentMode_ShouldPropagate)) > 0);

But that last assert fails, so it appears the color info does not get attached. kCVImageBufferColorPrimariesKey and the others are not among the keys listed under BufferAttributeKeys, but I think they're supposed to be allowed because they're listed by CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers(). I'm hoping that putting the color matrix info in there will control how AVAssetWriter converts the RGB to YCbCr.
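One distinction worth separating here: the dictionary passed to CVPixelBufferCreate is the buffer's *attributes*, while CVBufferGetAttachments reads its *attachments*, and the two are distinct stores. A hedged sketch that sets the color properties as attachments on the created buffer instead (whether AVAssetWriter then honors them for the RGB-to-YCbCr conversion is an assumption to verify):

import CoreVideo

// Sketch: tag an existing CVPixelBuffer with BT.709 color metadata via
// attachments rather than creation attributes.
func tagAsBT709(_ pixelBuffer: CVPixelBuffer) {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
}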
2
0
295
1w
VTFrameRateConversionConfiguration doesn't support 640x480
Hello, I'm using the VideoToolbox VTFrameRateConversionConfiguration to perform frame interpolation: https://developer.apple.com/documentation/videotoolbox/vtframerateconversionconfiguration?language=objc. With 640x480 video input, I get this error:

Error ! Invalid configuration
[VEEspressoModel] build failure : flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
[EpsressoModel] Cannot load Net file flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
Error: failed to create FRCFlowAdaptationFeatureExtractor for usage 8
Failed to switch (0x12c40e140) [usage:8, 1/4 flow:0, adaptation layer:1, twoStage:0, revision:2, flow size (320x240)]. Could not init FlowAdaptation
initFlowAdaptationWithError fail

I tried 2048x1080 and that works fine.
2
0
156
20h
iPhone Video Upload Error - reason unknown
Hi everyone, I'm developing a customization tool in which our customers can upload an MP3 or MP4 file that will be scannable through our AR application. On desktop and Android this works perfectly. For some reason, however, on iPhone we're unable to load most of the video files. I've checked the clips, and they are .mov/H.264 files, which are supported by iPhone. We're currently not sure how to fix this issue so that customers who own an iPhone can upload clips to our website. Any tips in the right direction are more than welcome. Thanks in advance!
1
0
520
Dec ’24
How to implement Picture-in-Picture (PiP) in Flutter for iOS using LiveKit without a video URL?
I am building a video conferencing app using LiveKit in Flutter and want to implement Picture-in-Picture (PiP) mode on iOS. My goal is to display a view showing the speaker's initials or avatar during PiP mode. I successfully implemented this functionality on Android but am struggling to achieve it on iOS. I am using a MethodChannel to communicate with the native iOS code. Here's the Flutter-side code:

import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

class PipController {
  static const _channel = MethodChannel('pip_channel');

  static Future<void> startPiP() async {
    try {
      await _channel.invokeMethod('enterPiP');
    } catch (e) {
      if (kDebugMode) {
        print("Error starting PiP: $e");
      }
    }
  }

  static Future<void> stopPiP() async {
    try {
      await _channel.invokeMethod('exitPiP');
    } catch (e) {
      if (kDebugMode) {
        print("Error stopping PiP: $e");
      }
    }
  }
}

On the iOS side, I am using AVPictureInPictureController. Since it requires an AVPlayerLayer, I had to include a dummy video URL to initialize the AVPlayer. However, this results in the dummy video's audio playing in the background, but no view is displayed in PiP mode. Here's my iOS code:

import Flutter
import UIKit
import AVKit

@main
@objc class AppDelegate: FlutterAppDelegate {
    var pipController: AVPictureInPictureController?
    var playerLayer: AVPlayerLayer?

    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        let controller: FlutterViewController = window?.rootViewController as! FlutterViewController
        let pipChannel = FlutterMethodChannel(name: "pip_channel", binaryMessenger: controller.binaryMessenger)

        pipChannel.setMethodCallHandler { [weak self] (call: FlutterMethodCall, result: @escaping FlutterResult) in
            if call.method == "enterPiP" {
                self?.startPictureInPicture(result: result)
            } else if call.method == "exitPiP" {
                self?.stopPictureInPicture(result: result)
            } else {
                result(FlutterMethodNotImplemented)
            }
        }

        GeneratedPluginRegistrant.register(with: self)
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }

    private func startPictureInPicture(result: @escaping FlutterResult) {
        guard AVPictureInPictureController.isPictureInPictureSupported() else {
            result(FlutterError(code: "UNSUPPORTED", message: "PiP is not supported on this device.", details: nil))
            return
        }

        // Set up the AVPlayer
        let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!)
        let playerLayer = AVPlayerLayer(player: player)
        self.playerLayer = playerLayer

        // Create a dummy view
        let dummyView = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
        dummyView.isHidden = true
        window?.rootViewController?.view.addSubview(dummyView)
        dummyView.layer.addSublayer(playerLayer)
        playerLayer.frame = dummyView.bounds

        // Initialize PiP Controller
        pipController = AVPictureInPictureController(playerLayer: playerLayer)
        pipController?.delegate = self

        // Start playback and PiP
        player.play()
        pipController?.startPictureInPicture()
        print("Picture-in-Picture started")
        result(nil)
    }

    private func stopPictureInPicture(result: @escaping FlutterResult) {
        guard let pipController = pipController, pipController.isPictureInPictureActive else {
            result(FlutterError(code: "NOT_ACTIVE", message: "PiP is not currently active.", details: nil))
            return
        }
        pipController.stopPictureInPicture()
        playerLayer = nil
        self.pipController = nil
        result(nil)
    }
}

extension AppDelegate: AVPictureInPictureControllerDelegate {
    func pictureInPictureControllerDidStartPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
        print("PiP started")
    }

    func pictureInPictureControllerDidStopPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
        print("PiP stopped")
    }
}

Questions:
• How can I implement PiP mode on iOS without using a video URL (or AVPlayerLayer)?
• Is there a way to display a custom UIView (like a speaker's initials or an avatar) in PiP mode instead of requiring a video?
• Why does PiP not display any view, even though the dummy video URL is playing in the background?

I am new to iOS development and would greatly appreciate any guidance or alternative approaches to achieve this functionality. Thank you!
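On iOS 15 and later there is a documented route that avoids the dummy video entirely: AVPictureInPictureController can be initialized with a content source backed by an AVSampleBufferDisplayLayer, into which you enqueue your own rendered frames (for example, an avatar drawn into pixel buffers). A minimal sketch of the wiring, with the frame rendering left out and the playback delegate stubbed for live content:

import AVKit
import CoreMedia

// Sketch: PiP driven by AVSampleBufferDisplayLayer instead of AVPlayerLayer.
final class AvatarPiPSource: NSObject, AVPictureInPictureSampleBufferPlaybackDelegate {
    let displayLayer = AVSampleBufferDisplayLayer()
    private(set) var pipController: AVPictureInPictureController?

    func setUp() {
        // The layer must be in the on-screen view hierarchy before starting PiP,
        // and frames are pushed with displayLayer.enqueue(_:).
        let source = AVPictureInPictureController.ContentSource(
            sampleBufferDisplayLayer: displayLayer,
            playbackDelegate: self)
        pipController = AVPictureInPictureController(contentSource: source)
    }

    // Stubbed delegate for non-seekable, always-live content.
    func pictureInPictureController(_ controller: AVPictureInPictureController, setPlaying playing: Bool) {}
    func pictureInPictureControllerTimeRangeForPlayback(_ controller: AVPictureInPictureController) -> CMTimeRange {
        CMTimeRange(start: .negativeInfinity, duration: .positiveInfinity) // live: no scrubber
    }
    func pictureInPictureControllerIsPlaybackPaused(_ controller: AVPictureInPictureController) -> Bool { false }
    func pictureInPictureController(_ controller: AVPictureInPictureController, didTransitionToRenderSize newRenderSize: CMVideoDimensions) {}
    func pictureInPictureController(_ controller: AVPictureInPictureController, skipByInterval skipInterval: CMTime, completion completionHandler: @escaping () -> Void) {
        completionHandler()
    }
}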
1
0
751
Dec ’24
Does tvOS Support the Multiview Feature?
I've seen the Multiview feature on tvOS that displays a small grid icon when available. However, I only see this functionality in visionOS, using AVMultiviewManager. Is this feature available under a different name on tvOS? Relevant links: https://www.reddit.com/r/appletv/comments/12opy5f/handson_with_the_new_multiview_split_screen/ https://www.pocket-lint.com/how-to-use-multiview-apple-tv/#:~:text=You'll%20see%20a%20grid,running%20at%20the%20same%20time.
1
0
556
Dec ’24
Is Apple Log open to developers for 3rd party apps?
Hello! I am building a video camera app and trying to implement Apple Log for iPhone 15 Pro and 16 Pro. I am not seeing a lot of documentation on it, and I notice the number of apps on the App Store that use it is rather limited (fewer than 5, to be exact). Is Apple Log recording a feature that is accessible to developers? Here is a link to the documentation: https://developer.apple.com/documentation/avfoundation/avcapturecolorspace/applelog
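For reference, the opt-in appears to be done per capture device: find a format whose supportedColorSpaces contains .appleLog, then set it together with activeColorSpace. A hedged sketch (availability is limited to recent Pro hardware, and this assumes the session isn't auto-configuring wide color on your behalf):

import AVFoundation

// Sketch: enable Apple Log on a device that advertises it.
func enableAppleLog(on device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: {
        $0.supportedColorSpaces.contains(.appleLog)
    }) else { return } // no Apple Log support on this device
    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeColorSpace = .appleLog
    device.unlockForConfiguration()
}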
1
0
485
Jan ’25
AVAssetWriter append audio/video streams concurrently in Real time recording setup
I see in most of the old sample code from Apple that when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronized using an NSLock, or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation stating that calls to assetWriterInput.append(sampleBuffer) for different media types, such as audio and video, must not be made concurrently. Is it not valid for these methods to execute in parallel? For instance:

videoSamplesAssetWriterInput.append(videoSampleBuffer) // from DispatchQueue 1
audioSamplesAssetWriterInput.append(audioSampleBuffer) // from DispatchQueue 2
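For what it's worth, the conservative reading of those samples is that the writer's thread-safety is simply unspecified, so they serialize. A minimal sketch of that pattern (the class and method names are mine):

import AVFoundation
import CoreMedia

// Sketch: funnel all appends through one serial queue so the asset writer
// never sees concurrent calls from separate capture callbacks.
final class SerializedWriter {
    private let queue = DispatchQueue(label: "assetwriter.append.serial")
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    func appendVideo(_ sampleBuffer: CMSampleBuffer) {
        queue.async { [videoInput] in
            if videoInput.isReadyForMoreMediaData { videoInput.append(sampleBuffer) }
        }
    }

    func appendAudio(_ sampleBuffer: CMSampleBuffer) {
        queue.async { [audioInput] in
            if audioInput.isReadyForMoreMediaData { audioInput.append(sampleBuffer) }
        }
    }
}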
1
0
630
Jan ’25
Exporting Higher Resolution Spatial Videos in Final Cut
How do we export a spatial video at a higher resolution than 2200 x 2200? For example, I want to export a video that I edited in a spatial timeline as 4096 x 4096 resolution with spatial metadata to output as an MV-HEVC file. I looked under both Final Cut and Compressor export settings, but couldn't find a way to do this. Thanks for your help!
1
0
473
Jan ’25
Help diagnosing crash with only system code in its stack trace
Hello, I work on a video streaming app, and I have been working on this crash that we are seeing quite frequently (it is our #2 crasher at the moment). The stack trace indicates that an AVPictureInPictureController is being deallocated on a background thread. This leads to dangling AutoLayout constraints getting cleaned up, further resulting in an exception being thrown from the framework about the layout engine being accessed from a background thread. We have internal analytics which indicate that the crash occurs after the user comes back to the app after putting it in the background, and switches playback from Picture in Picture mode back to the app's regular playback UI. What has me puzzled here is that, as I'm sure you know, we have no control over the PIP UI. It is entirely system-provided, and there is none of our app's code in the stack trace. The whole process is even initiated by a KVO on an AVPlayerController property, and our app doesn't use that class directly anywhere. So how did we manage to cause this process to happen on a background thread? Add to this the fact that the bug only appeared once we switched to compiling with Xcode 16, and is overwhelmingly present only on devices running iOS 18. These factors lead me to believe that this is probably an OS issue. But before I go to file a feedback, I thought I would post here in case anyone has any ideas. I have attached an instance of the crash log to this post. 2025-01-08_17-32-45.7003_-0800-ea8d5c3323e0f1fc059cf83f6ec86377bdae1788.crash
1
0
429
Jan ’25
How to start AVPictureInPicture when the video is paused
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app and Picture in Picture both work, except in one situation: when I pause the video in the application and then switch to the background, PiP does not activate. What am I doing wrong?

import UIKit
import AVKit
import AVFoundation

class ViewControllerSec: UIViewController, AVPictureInPictureControllerDelegate {
    var pipPlayer: AVPlayer!
    var avCanvas: UIView!
    var pipCanvas: AVPlayerLayer?
    var pipController: AVPictureInPictureController!
    var mainViewControler: UIViewController!
    var playerItem: AVPlayerItem!
    var videoAvasset: AVAsset!

    public func link(to parentViewController: UIViewController) {
        mainViewControler = parentViewController
        setup()
    }

    @objc func appWillResignActiveNotification(application: UIApplication) {
        guard let pipController = pipController else {
            print("PiP not supported")
            return
        }
        print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
        print("PIP isPossible: \(pipController.isPictureInPicturePossible)")
        if playerItem.status == .readyToPlay {
            if pipPlayer.rate == 0 {
                pipPlayer.play()
            }
            pipController.startPictureInPicture() // ---> Error in log: Failed to start picture in picture.
        } else {
            print("Player not ready for PiP.")
        }
    }

    private func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .moviePlayback)
            try session.setActive(true)
        } catch {
            print("Audio session setup failed: \(error.localizedDescription)")
        }
    }

    @objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
        if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
            print("Failed to play to end: \(error.localizedDescription)")
        }
    }

    func setup() {
        setupAudio()
        guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
        videoAvasset = AVAsset(url: videoURL)
        playerItem = AVPlayerItem(asset: videoAvasset)
        addPlayerObservers()
        pipPlayer = AVPlayer(playerItem: playerItem)
        avCanvas = UIView(frame: view.bounds)
        pipCanvas = AVPlayerLayer(player: pipPlayer)
        guard let pipCanvas else { return }
        pipCanvas.frame = avCanvas.bounds
        //pipCanvas.videoGravity = .resizeAspectFill
        mainViewControler.view.addSubview(avCanvas)
        avCanvas.layer.addSublayer(pipCanvas)

        if AVPictureInPictureController.isPictureInPictureSupported() {
            pipController = AVPictureInPictureController(playerLayer: pipCanvas)
            pipController?.delegate = self
            pipController?.canStartPictureInPictureAutomaticallyFromInline = true
        }

        let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
        playButton.setTitle("Play", for: .normal)
        playButton.backgroundColor = .blue
        playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
        mainViewControler.view.addSubview(playButton)

        let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
        pauseButton.setTitle("Pause", for: .normal)
        pauseButton.backgroundColor = .red
        pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
        mainViewControler.view.addSubview(pauseButton)

        let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
        pipButton.setTitle("Start PiP", for: .normal)
        pipButton.backgroundColor = .green
        pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
        mainViewControler.view.addSubview(pipButton)

        print("Error:\(String(describing: pipPlayer.error?.localizedDescription))")

        NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                               object: nil, queue: nil) { [weak self] _ in
            guard let self = self else { return }
            if self.pipPlayer.rate == 0 {
                self.pipPlayer.play()
                self.pipController?.startPictureInPicture()
            }
        }
    }

    func addPlayerObservers() {
        playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
        NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)),
                                               name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                               change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        if keyPath == "status" {
            if let statusNumber = change?[.newKey] as? NSNumber {
                let status = AVPlayer.Status(rawValue: statusNumber.intValue)!
                switch status {
                case .readyToPlay:
                    print("Player is ready to play")
                case .failed:
                    print("Player failed: \(String(describing: playerItem?.error))")
                case .unknown:
                    print("Player status is unknown")
                @unknown default:
                    fatalError()
                }
            }
        }
    }

    @objc func playerDidFinishPlaying(_ notification: Notification) {
        print("Video finished playing.")
    }

    deinit {
        playerItem?.removeObserver(self, forKeyPath: "status")
        NotificationCenter.default.removeObserver(self)
    }

    @objc func playTapped() { pipPlayer.play() }
    @objc func pauseTapped() { pipPlayer.pause() }

    @objc func startPictureInPicture() {
        if let pipController = pipController, !pipController.isPictureInPictureActive {
            pipController.startPictureInPicture()
        }
    }

    @objc func stopPictureInPicture() {
        if let pipController = pipController, pipController.isPictureInPictureActive {
            pipController.stopPictureInPicture()
        }
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController,
                                    failedToStartPictureInPictureWithError error: Error) {
        print("Failed to start PiP: \(error.localizedDescription)")
        if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
            print("Underlying error: \(underlyingError)")
        }
    }
}
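One possible reading, offered as an assumption rather than a confirmed answer: by the time didEnterBackgroundNotification fires it is too late to start PiP, and automatic PiP via canStartPictureInPictureAutomaticallyFromInline only engages if playback is actually running when the app resigns active. A sketch of moving the resume earlier, to be added in setup() instead of the didEnterBackground observer:

// Sketch: resume playback while the app is still active so the system's
// automatic PiP (canStartPictureInPictureAutomaticallyFromInline) has a
// playing player to pick up when the app moves to the background.
NotificationCenter.default.addObserver(
    forName: UIApplication.willResignActiveNotification,
    object: nil, queue: .main
) { [weak self] _ in
    guard let self else { return }
    if self.pipPlayer.rate == 0 {
        self.pipPlayer.play()
    }
}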
1
0
536
Jan ’25