I can't play video content that uses both HEVC and DRM. HEVC without DRM plays fine, and DRM with AVC plays fine.
Tested with two players (Clappr/Stevie and BitMovin).
The master playlist, the variant playlists, and the EXT-X-MAP segments are downloaded OK, and the DRM keys are delivered OK. Then, for instance with the BitMovin player:
[BMP] [Player] [Error] Event: SourceError, Data: {"code":2001,"data":{"message":"The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","code":-12927},"message":"Source Error. The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","timestamp":1740320663.4505711,"type":"onSourceError"} code: 2001 [Data code: -12927, message: The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.), underlying error: Error Domain=CoreMediaErrorDomain Code=-12927 "(null)"]
4k-master.m3u8.txt
4k.m3u8.txt
4k-audio.m3u8.txt
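For anyone trying to reproduce this: a small diagnostic sketch of our own (assuming item is the failing AVPlayerItem) that prints the player item's error log when testing with a bare AVPlayer outside the third-party players:
import AVFoundation
// Diagnostic sketch; `item` is assumed to be the AVPlayerItem that fails with -12927.
func dumpErrorLog(for item: AVPlayerItem) {
    guard let log = item.errorLog() else {
        print("No error log available")
        return
    }
    for event in log.events {
        print("status: \(event.errorStatusCode), domain: \(event.errorDomain), comment: \(event.errorComment ?? "-")")
    }
}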
Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?
I’ve asked several questions, and that’s all for now. Thank you in advance!
I am encountering an issue while using the multiview video demo provided at https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/. Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
No external cameras show up in the app on visionOS. We use this sample code as a basis for our tests: https://developer.apple.com/documentation/visionos/displaying-video-from-connected-devices
We also received the needed entitlement from Apple, but none of the cameras we have tried so far show up on visionOS.
We tried the following devices and hubs:
Insta360 X4
Somikon Endoscope Camera: USB HD Endoscope Camera
EMEET Full HD Webcam - C960
BENFEI Video/Audio Capture Card, 4K HDMI to USB C/A
Logitech C920 HD PRO Webcam
Anker PowerConf C200
Insta360 GO 3S
Anker 341 USB-C Hub
UGREEN Revodok Pro 10Gbps USB-C Hub
All Vision Pro devices we tried run visionOS 2.3. When trying the same code on an iPad, we can actually use external cameras.
Steps to reproduce:
Start the app on a Vision Pro device and connect an external camera. The connected camera does not show up in the dropdown.
Development environment:
Xcode 16.2, macOS 15.3
Run-time configuration:
iOS 18.3, visionOS 2.3
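For reference, a reduced sketch of how we enumerate cameras in our test build (we assume the .external device type is the right one here, as in the sample):
import AVFoundation
// Reduced discovery sketch; on our Vision Pro units the returned list stays empty,
// while the same code on iPadOS returns the connected camera.
func externalCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    return discovery.devices
}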
I am developing a custom Picture in Picture (PiP) app that plays videos.
The video continues to play even when the app goes to the background, but I would like to be able to get a Bool value that tells me when the PiP window is tucked away at the edge of the screen, as shown in the attached image.
We need this because we don't want the user to consume extra network bandwidth, so we want to pause the video while the PiP window is hidden at the edge of the screen.
Please let me know if this is possible.
Detecting this in either the foreground or the background would work for us.
Hello Apple Community,
We are working on a real-time streaming feature where we receive chunks of raw MP4 data through a custom protocol and store them in a buffer (array). Our goal is to use these data chunks to play a continuous video stream in AVPlayer.
What We've Tried:
Custom URL Scheme with AVAssetResourceLoaderDelegate:
We implemented a custom URL scheme (customscheme://) to serve the buffered data using AVAssetResourceLoaderDelegate.
The delegate method shouldWaitForLoadingOfRequestedResource is called only for the initial request; it is not triggered again when new chunks are appended to the buffer.
Despite appending new data to the buffer, AVPlayer doesn’t request further chunks from the delegate.
What We Need:
We are looking for a solution where:
The player continuously fetches data from the buffer as new chunks are added.
The playback remains smooth and uninterrupted, even with real-time data being appended.
Ideally, this solution works with AVPlayer while adhering to HLS-like behavior without implementing an HLS server.
Questions:
Is AVAssetResourceLoaderDelegate the right approach for this use case?
If so, how can we ensure shouldWaitForLoadingOfRequestedResource is called whenever new data is available in the buffer?
Are there alternative APIs or recommended patterns for playing real-time MP4 data chunks in AVPlayer?
Would implementing a custom FFmpeg-based player be necessary, or can this be achieved using AVPlayer and its APIs?
We appreciate any guidance, suggestions, or examples that can help us achieve this. Thank you!
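For context, here is a stripped-down sketch of the pending-request pattern we have been experimenting with (class and method names are our own; content-information handling and threading are omitted):
import AVFoundation
// Sketch: buffer incoming MP4 chunks and feed pending resource-loading requests as data arrives.
final class ChunkLoader: NSObject, AVAssetResourceLoaderDelegate {
    private var buffer = Data()
    private var pendingRequests: [AVAssetResourceLoadingRequest] = []

    // Called by our network layer whenever a new chunk arrives.
    func append(_ chunk: Data) {
        buffer.append(chunk)
        servePendingRequests()
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        pendingRequests.append(loadingRequest)
        servePendingRequests()
        return true // keep the request open until we can satisfy it
    }

    private func servePendingRequests() {
        for request in pendingRequests where !request.isFinished {
            guard let dataRequest = request.dataRequest else { continue }
            let offset = Int(dataRequest.currentOffset)
            guard offset < buffer.count else { continue }
            let length = min(buffer.count - offset, dataRequest.requestedLength)
            dataRequest.respond(with: buffer.subdata(in: offset..<(offset + length)))
            // finishLoading() would only be called once the requested range is fully served.
        }
        pendingRequests.removeAll { $0.isFinished }
    }
}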
Hello there,
I need to move through a video loaded in an AVPlayer one frame at a time, backward or forward. For that I tried AVPlayerItem's step(byCount:) method, and it works just fine.
However, I need to know when the step has actually happened, and as far as I can observe it is not immediate. If I check currentTime() right after calling the method, it is unchanged; if I check slightly later (how much later depends on the video itself), it shows the correct "jumped" time.
To achieve my goal I tried subclassing AVPlayerItem and implementing my own async method using NotificationCenter and timeJumpedNotification, assuming the notification would be delivered as the time actually jumps, but that is not the case.
Here is my "stripped" and simplified version of the custom Player Item:
import AVFoundation
final class PlayerItem: AVPlayerItem {
private var jumpCompletion: ( (CMTime) -> () )?
override init(asset: AVAsset, automaticallyLoadedAssetKeys: [String]?) {
super.init(asset: asset, automaticallyLoadedAssetKeys: automaticallyLoadedAssetKeys)
NotificationCenter.default.addObserver(self, selector: #selector(timeDidChange(_:)), name: AVPlayerItem.timeJumpedNotification, object: self)
}
deinit {
NotificationCenter.default.removeObserver(self, name: AVPlayerItem.timeJumpedNotification, object: self)
jumpCompletion = nil
}
@discardableResult func step(by count: Int) async -> CMTime {
await withCheckedContinuation { continuation in
step(by: count) { time in
continuation.resume(returning: time)
}
}
}
func step(by count: Int, completion: @escaping ( (CMTime) -> () )) {
guard jumpCompletion == nil else {
completion(currentTime())
return
}
jumpCompletion = completion
step(byCount: count)
}
@objc private func timeDidChange(_ notification: Notification) {
switch notification.name {
case AVPlayerItem.timeJumpedNotification where notification.object as? AVPlayerItem == self:
jumpCompletion?(currentTime())
jumpCompletion = nil
default: return
}
}
}
In short, the notification is never delivered, so the above does not work.
I guess the key point is that the documentation for timeJumpedNotification says:
"A notification the system posts when a player item’s time changes discontinuously."
so step(byCount:) is apparently not considered a discontinuous change and doesn't trigger it.
It would be really helpful if somebody could advise, as I don't want to use seek(to:toleranceBefore:toleranceAfter:), mainly because it isn't accurate in terms of landing on the exact next/previous frame: the video might have a variable frame rate (VFR), which sometimes causes frames to repeat or even be skipped.
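For now I'm considering a crude, bounded polling workaround along these lines (my own sketch, not something I'd like to ship):
import AVFoundation
// Workaround sketch: step, then poll currentTime() until it changes or we give up.
@discardableResult
func stepAndWait(_ item: AVPlayerItem, by count: Int) async -> CMTime {
    let before = item.currentTime()
    item.step(byCount: count)
    for _ in 0..<200 { // cap the wait at roughly 200 ms
        if item.currentTime() != before { break }
        try? await Task.sleep(nanoseconds: 1_000_000) // 1 ms
    }
    return item.currentTime()
}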
Thanks a lot
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app works, and Picture in Picture works, except in one situation: if I pause the video in the app and then switch to the background, PiP does not activate. What am I doing wrong?
import UIKit
import AVKit
import AVFoundation
class ViewControllerSec: UIViewController,AVPictureInPictureControllerDelegate {
var pipPlayer: AVPlayer!
var avCanvas : UIView!
var pipCanvas: AVPlayerLayer?
var pipController: AVPictureInPictureController!
var mainViewControler : UIViewController!
var playerItem : AVPlayerItem!
var videoAvasset : AVAsset!
public func link(to parentViewController : UIViewController) {
mainViewControler = parentViewController
setup()
}
@objc func appWillResignActiveNotification(application: UIApplication) {
guard let pipController = pipController else {
print("PiP not supported")
return
}
print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
print("PIP isPossible: \(pipController.isPictureInPicturePossible)"
if playerItem.status == .readyToPlay {
if pipPlayer.rate == 0 {
pipPlayer.play()
}
pipController.startPictureInPicture() // error in log: Failed to start picture in picture.
} else {
print("Player not ready for PiP.")
}
}
private func setupAudio() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .moviePlayback)
try session.setActive(true)
} catch {
print("Audio session setup failed: \(error.localizedDescription)")
}
}
@objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
print("Failed to play to end: \(error.localizedDescription)")
}
}
func setup() {
setupAudio()
guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
videoAvasset = AVAsset(url: videoURL)
playerItem = AVPlayerItem(asset: videoAvasset)
addPlayerObservers()
pipPlayer = AVPlayer(playerItem: playerItem)
avCanvas = UIView(frame: view.bounds)
pipCanvas = AVPlayerLayer(player: pipPlayer)
guard let pipCanvas else { return }
pipCanvas.frame = avCanvas.bounds
//pipCanvas.videoGravity = .resizeAspectFill
mainViewControler.view.addSubview(avCanvas)
avCanvas.layer.addSublayer(pipCanvas)
if AVPictureInPictureController.isPictureInPictureSupported() {
pipController = AVPictureInPictureController(playerLayer: pipCanvas)
pipController?.delegate = self
pipController?.canStartPictureInPictureAutomaticallyFromInline = true
}
let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
playButton.setTitle("Play", for: .normal)
playButton.backgroundColor = .blue
playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
mainViewControler.view.addSubview(playButton)
let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
pauseButton.setTitle("Pause", for: .normal)
pauseButton.backgroundColor = .red
pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
mainViewControler.view.addSubview(pauseButton)
let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
pipButton.setTitle("Start PiP", for: .normal)
pipButton.backgroundColor = .green
pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
mainViewControler.view.addSubview(pipButton)
print("Error:\(String(describing: pipPlayer.error?.localizedDescription))")
NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: nil) { [weak self] _ in
guard let self = self else { return }
if self.pipPlayer.rate == 0 {
self.pipPlayer.play()
pipController?.startPictureInPicture()
}
}
}
func addPlayerObservers() {
playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)), name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "status" {
if let statusNumber = change?[.newKey] as? NSNumber {
let status = AVPlayerItem.Status(rawValue: statusNumber.intValue)!
switch status {
case .readyToPlay:
print("Player is ready to play")
case .failed:
print("Player failed: \(String(describing: playerItem?.error))")
case .unknown:
print("Player status is unknown")
@unknown default:
fatalError()
}
}
}
}
@objc func playerDidFinishPlaying(_ notification: Notification) {
print("Video finished playing.")
}
deinit {
playerItem?.removeObserver(self, forKeyPath: "status")
NotificationCenter.default.removeObserver(self)
}
@objc func playTapped() {
pipPlayer.play()
}
@objc func pauseTapped() {
pipPlayer.pause()
}
@objc func startPictureInPicture() {
if let pipController = pipController, !pipController.isPictureInPictureActive {
pipController.startPictureInPicture()
}
}
@objc func stopPictureInPicture() {
if let pipController = pipController, pipController.isPictureInPictureActive {
pipController.stopPictureInPicture()
}
}
func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: Error) {
print("Failed to start PiP: \(error.localizedDescription)")
if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
print("Underlying error: \(underlyingError)")
}
}
}
Hello,
I'm experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but it stops unexpectedly after a few iterations in the native iOS app.
<video src={videoPath} autoplay muted loop playsinline
class="h-auto w-full max-w-full object-cover"></video>
Has anyone encountered a similar issue or have insights into what might be causing this behavior on iOS?
Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power saving policy?
Thank you in advance for your help!
I have a SwiftUI app that needs to be able to receive photos and videos from the Photos app. When the user shares an item from the Photos app they can choose to share to a destination app. When doing so from the Files app, my app appears as a share destination and the .onOpenURL successfully handles the incoming content.
The CFBundleTypeName and CFBundleTypeRole have been configured accordingly. What I don't understand is why I can share from the Files app and select my app as the destination, while not being able to select my app when sharing from the Photos app.
What does the Photos app require for a given app to be available as a share destination?
I'd also like to be able to do this from other apps such as the YouTube app.
Hi, I'm working on a project that requires video frame PTS values to be consistent between the original video and a transcoded one. This works fairly well for regular MP4 output; however, if I set preferredOutputSegmentInterval to generate fMP4 output, then even though I specify initialSegmentStartTime as 0, a one-frame PTS offset is always added to all frames.
For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8 to display the PTS values of the output, they don't start from 0 but are offset by one frame. I also tried opening the output with MP4Box, which likewise shows that the first frame's DTS and CTS do not start from 0.
However, if I use AVAssetReader to read the same output and get the PTS of the first frame, it returns 0, so I can't use that to calculate the PTS difference between the two videos either.
Can I get some help understanding why the PTS reported by AVAssetWriter/AVAssetReader for fMP4 differs from what tools like ffprobe report?
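For reference, our writer configuration is essentially the one from that session; a reduced sketch (the 6-second interval and the delegate parameter name are just illustrative):
import AVFoundation
import UniformTypeIdentifiers
// Reduced sketch of the segmented fMP4 writer configuration.
func makeSegmentedWriter(delegate: AVAssetWriterDelegate) -> AVAssetWriter {
    let writer = AVAssetWriter(contentType: .mpeg4Movie)
    writer.outputFileTypeProfile = .mpeg4AppleHLS
    writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
    writer.initialSegmentStartTime = .zero   // we expect the first fragment's PTS to start here
    writer.delegate = delegate               // receives the initialization and media segments
    return writer
}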
Hi, I'm trying to decode an HLS livestream with VideoToolbox. The CMSampleBuffer is created successfully (OSStatus == noErr). When I enqueue the CMSampleBuffer on an AVSampleBufferDisplayLayer, the view doesn't display anything, even though the status of the AVSampleBufferDisplayLayer is 1 (rendering).
When I instead use a VTDecompressionSession to convert the CMSampleBuffer to a CVPixelBuffer, the VTDecompressionOutputCallback returns -8969 (bad-data error).
What do I need to fix in my code? Am I parsing the data from the segment for the CMSampleBuffer incorrectly?
let segmentData = try await downloadSegment(from: segment.url)
let (sps, pps, idr) = try parseH264FromTSSegment(tsData: segmentData)
if self.formatDescription == nil {
self.formatDescription = try CMFormatDescription(h264ParameterSets: [sps, pps])
}
if let sampleBuffer = try createSampleBuffer(from: idr, segment: segment) {
try self.decodeSampleBuffer(sampleBuffer)
}
func parseH264FromTSSegment(tsData: Data) throws -> (sps: Data, pps: Data, idr: Data) {
let tsSize = 188
var pesData = Data()
for i in stride(from: 0, to: tsData.count, by: tsSize) {
let tsPacket = tsData.subdata(in: i..<min(i + tsSize, tsData.count))
guard let payload = extractPayloadFromTSPacket(tsPacket) else { continue }
pesData.append(payload)
}
let nalUnits = parseNalUnits(from: pesData)
var sps: Data?
var pps: Data?
var idr: Data?
for nalUnit in nalUnits {
guard let firstByte = nalUnit.first else { continue }
let nalType = firstByte & 0x1F
switch nalType {
case 7: // SPS
sps = nalUnit
case 8: // PPS
pps = nalUnit
case 5: // IDR
idr = nalUnit
default:
break
}
if sps != nil, pps != nil, idr != nil {
break
}
}
guard let validSPS = sps, let validPPS = pps, let validIDR = idr else {
throw NSError()
}
return (validSPS, validPPS, validIDR)
}
func extractPayloadFromTSPacket(_ tsPacket: Data) -> Data? {
let syncByte: UInt8 = 0x47
guard tsPacket.count == 188, tsPacket[0] == syncByte else {
return nil
}
let payloadStart = (tsPacket[1] & 0x40) != 0
let adaptationFieldControl = (tsPacket[3] & 0x30) >> 4
var payloadOffset = 4
if adaptationFieldControl == 2 || adaptationFieldControl == 3 {
let adaptationFieldLength = Int(tsPacket[4])
payloadOffset += 1 + adaptationFieldLength
}
guard adaptationFieldControl == 1 || adaptationFieldControl == 3 else {
return nil
}
let payload = tsPacket.subdata(in: payloadOffset..<tsPacket.count)
return payloadStart ? payload : nil
}
func parseNalUnits(from h264Data: Data) -> [Data] {
let startCode = Data([0x00, 0x00, 0x00, 0x01])
var nalUnits: [Data] = []
var searchRange = h264Data.startIndex..<h264Data.endIndex
while let range = h264Data.range(of: startCode, options: [], in: searchRange) {
let nextStart = h264Data.range(of: startCode, options: [], in: range.upperBound..<h264Data.endIndex)?.lowerBound ?? h264Data.endIndex
let nalUnit = h264Data.subdata(in: range.upperBound..<nextStart)
nalUnits.append(nalUnit)
searchRange = nextStart..<h264Data.endIndex
}
return nalUnits
}
private func createSampleBuffer(from data: Data, segment: HLSSegment) throws -> CMSampleBuffer? {
var blockBuffer: CMBlockBuffer?
let alignedData = UnsafeMutableRawPointer.allocate(byteCount: data.count, alignment: MemoryLayout<UInt8>.alignment)
data.copyBytes(to: alignedData.assumingMemoryBound(to: UInt8.self), count: data.count)
let blockStatus = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: alignedData,
blockLength: data.count,
blockAllocator: nil,
customBlockSource: nil,
offsetToData: 0,
dataLength: data.count,
flags: 0,
blockBufferOut: &blockBuffer
)
guard blockStatus == kCMBlockBufferNoErr, let validBlockBuffer = blockBuffer else {
alignedData.deallocate()
throw NSError()
}
var sampleBuffer: CMSampleBuffer?
var timing = [calculateTiming(for: segment)]
var sampleSizes = [data.count]
let sampleStatus = CMSampleBufferCreate(
allocator: kCFAllocatorDefault,
dataBuffer: validBlockBuffer,
dataReady: true,
makeDataReadyCallback: nil,
refcon: nil,
formatDescription: formatDescription,
sampleCount: 1,
sampleTimingEntryCount: 1,
sampleTimingArray: &timing,
sampleSizeEntryCount: sampleSizes.count,
sampleSizeArray: &sampleSizes,
sampleBufferOut: &sampleBuffer
)
guard sampleStatus == noErr else {
alignedData.deallocate()
throw NSError()
}
return sampleBuffer
}
private func decodeSampleBuffer(_ sampleBuffer: CMSampleBuffer) throws {
guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
throw NSError()
}
if decompressionSession == nil {
try setupDecompressionSession(formatDescription: formatDescription)
}
guard let session = decompressionSession else {
throw NSError()
}
let flags: VTDecodeFrameFlags = [._EnableAsynchronousDecompression, ._EnableTemporalProcessing]
var flagOut = VTDecodeInfoFlags()
let status = VTDecompressionSessionDecodeFrame(
session,
sampleBuffer: sampleBuffer,
flags: flags,
frameRefcon: nil,
infoFlagsOut: &flagOut)
if status != noErr {
throw NSError()
}
}
private func setupDecompressionSession(formatDescription: CMFormatDescription) throws {
self.formatDescription = formatDescription
if let session = decompressionSession {
VTDecompressionSessionInvalidate(session)
self.decompressionSession = nil
}
var decompressionSession: VTDecompressionSession?
var callback = VTDecompressionOutputCallbackRecord(
decompressionOutputCallback: decompressionOutputCallback,
decompressionOutputRefCon: Unmanaged.passUnretained(self).toOpaque())
let status = VTDecompressionSessionCreate(
allocator: kCFAllocatorDefault,
formatDescription: formatDescription,
decoderSpecification: nil,
imageBufferAttributes: nil,
outputCallback: &callback,
decompressionSessionOut: &decompressionSession
)
if status != noErr {
throw NSError()
}
self.decompressionSession = decompressionSession
}
let decompressionOutputCallback: VTDecompressionOutputCallback = { (
decompressionOutputRefCon,
sourceFrameRefCon,
status,
infoFlags,
imageBuffer,
presentationTimeStamp,
presentationDuration
) in
guard status == noErr else {
print("Callback: \(status)")
return
}
if let imageBuffer = imageBuffer {
}
}
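One thing I'm unsure about, since the format description is created from the SPS/PPS parameter sets: my understanding is that the sample buffer data then needs AVCC framing (each NAL unit prefixed with its 4-byte big-endian length) rather than the raw Annex-B payload I'm currently passing. A small conversion sketch, in case that is the missing piece:
import Foundation
// Sketch: wrap a raw NAL unit in AVCC framing (4-byte big-endian length prefix).
func avccData(fromNALUnit nalUnit: Data) -> Data {
    var length = UInt32(nalUnit.count).bigEndian
    var framed = Data(bytes: &length, count: 4)
    framed.append(nalUnit)
    return framed
}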
Hi everyone,
I'm developing a customization tool in which our customers can upload an MP3 or MP4 file that will be scannable through our AR application. On desktop and Android this works perfectly. For some reason, however, on iPhone we're unable to load most of the video files. I've checked the clips, and they are .mov/H.264 files, which are supported by iPhone.
We're currently not sure how we can fix this issue to allow customers that own an iPhone to upload clips to our website.
Any tips in the right direction are more than welcome.
thanks in advance!
We are using Angular and HTML5 to develop our application, in which we play videos hosted on S3. When played in a desktop browser the videos are adequately audible, but when played on an iPad their volume is too low to hear. I have tried
video.volume = 1, but it does not work on iPad because this property is read-only on iOS devices.
I have also tried using the JavaScript AudioContext API. It worked on my local machine, but when the code is deployed to some hosted environments it just does not work.
Did anyone face the same issue? Any help regarding it will be appreciated.
We have developed a simple video player Swift application for macOS that uses the AVFoundation framework. A special feature of this app is the ability to play video backward at speeds like -0.25x, -0.5x, and -1.0x. The MP4 video file is played directly from the local file system; the video codec is H.264 and the audio is AAC. The video files are huge, around 10 GB and 3 hours long.
Playing video in the reverse direction works well on a MacBook Air with an M1 or M2 chip. When we run the same app with the same video on a MacBook Air with an M3 chip, reverse playback is much worse. Playback might stutter badly, especially in the latter part of the video. The same behavior also happens in Apple's QuickTime Player when playing in the reverse direction at -1x speed. What's even stranger is that at one point in time the playback is totally smooth, but after a while it is stuttering again. For example, this morning reverse playback worked 100% smoothly; then I rebooted the Mac and tried again, and the result was stuttering. After the Mac stayed idle for several hours, I tried to reverse-play the video again: smooth performance! My conclusion: M3 playback works fine if the stars in the sky are aligned correctly. :-)
So it's not only our app; QuickTime Player shows exactly the same behavior, and only with the M3 chip. The same symptom appears on another similar M3 Mac, so it can't be a single faulty unit. At the same time, the open-source video player IINA can reverse-play the video just fine on the same Mac.
All Macs have otherwise identical configuration: 16 GB RAM and macOS 15.1.1.
Have you experienced the same problem? Any chance to solve this problem?
I really hope that the M4 chip Mac is behaving better here.
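For reference, a reduced sketch of our reverse-playback logic (essentially just a negative rate after checking the item's capabilities; the real code is more involved):
import AVFoundation
// Reduced sketch of how we start reverse playback.
func playReverse(_ player: AVPlayer, rate: Float) {
    guard let item = player.currentItem, rate < 0 else { return }
    let supported = (rate == -1.0 && item.canPlayReverse) || (rate > -1.0 && item.canPlaySlowReverse)
    if supported {
        player.rate = rate   // e.g. -0.25, -0.5, -1.0
    }
}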
I use navigator.mediaDevices.getUserMedia to wake up the camera. When I use the constraint {video: {facingMode: {exact: 'user'}}}, an "invalid constraint" error occurs.
My app is not a VoIP application. I use devices that support Center Stage ("character centering"), such as the iPad 10 or iPad13,18, running iOS 18.0, 18.1, or 18.1.1. When entering a live class, the Center Stage button does not appear in Control Center, as shown in the following picture; this reproduces every time. However, if Voice over IP is selected under Background Modes in the project and the app is run again, the issue no longer reproduces, even after uninstalling and reinstalling. Could you please help me investigate the reason? Thank you!
I don't know what you felt was wrong with video scrubbing that you needed to **** it up this badly. I can't scrub within the video, only within several seconds of where it was paused, or worse, it restarts the whole damn video.
It used to be an easy and enjoyable process to harvest photos from a video, and you've turned it into the most frustrating part of operating my phone.
Please release a patch to optionally enable the old scrubbing behavior.
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export).
For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to deliver video frames with a defined maximum size to our composition. We thought setting the renderSize property of the composition to the desired size would do that.
However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
let input = request.sourceImage // <- this still has the video's original size
// ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and, especially, a lot of memory. It would be much better if AVFoundation decoded the video frames at the desired size before passing them into the composition handler.
Is there a way to tell AVFoundation to load smaller video frames?
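For reference, the in-pipeline downscaling fallback mentioned above looks roughly like this (same asset as above; the 1280-pixel cap is just an example), though we would much rather avoid decoding at full size in the first place:
import AVFoundation
import CoreImage
// Fallback sketch: scale the full-size source frame down before running the filter chain.
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let source = request.sourceImage
    let maxDimension: CGFloat = 1280
    let scale = min(1.0, maxDimension / max(source.extent.width, source.extent.height))
    let scaled = source.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // ... run the actual filter chain on `scaled` instead of `source` ...
    request.finish(with: scaled, context: nil)
})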