I have been using the Destination Video sample project as a template to recreate a similar project.
I have been able to update all text and images, but when I go to change out the videos, it still plays the old videos. If I rename the new video to the old video's name, the new video appears. The problem is, there are only two videos that repeat across different links.
But changing the URLs in SampleData.swift to the new file path does not work.
Any suggestions on how to replace a video with a newly imported asset? Here is an example of the code.
Video(
    id: 1,
    name: "Landing",
    synopsis: """
    After a long journey through the stars, the robot botanist and its trusty spaceship finally arrive at Wolf 1069 B,
    ready to explore the mysteries that lie on the planet’s surface.
    New plants to catalog, new animals to discover, and cool rocks to unearth. Follow along as the botanist’s mission begins!
    """,
    categoryIDs: [
        1004,
        1005
    ],
    url: URL(string: "file://BOT-anist_video.mov")!,
    imageName: "landing",
    yearOfRelease: 2024,
    duration: 66,
    contentRating: "NR",
    isFeatured: true
),
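In case it helps, one way to point the code at a newly imported asset is to resolve it from the app bundle by name instead of hard-coding a file: URL. A minimal sketch ("MyNewVideo" is a placeholder; the file must be added to the app target so it gets copied into the bundle):

import Foundation

// Hypothetical resource name; resolves the video that was added to the target.
let newVideoURL = Bundle.main.url(forResource: "MyNewVideo", withExtension: "mov")!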
I am trying to play HLS videos in my iPhone app and I see frequent glitches in the video; this does not happen with MP4.
https://drive.google.com/file/d/1lhdpHTyjPYCYLHjzvb6ZF6P6jehIuwY0/view?usp=sharing
We have observed consistent glitches in video playback when using AVPlayer to stream HLS (HTTP Live Streaming) videos on iOS. The issue manifests as intermittent frame drops, stuttering, and playback instability during HLS streams. However, the same behavior is not present when playing MP4 videos using the same AVPlayer instance. The HLS streams being used follow standard encoding practices, and network conditions have been ruled out as a cause for this problem.
Steps to Reproduce:
1. Load an HLS video into AVPlayer and initiate playback.
2. Observe intermittent glitches and stuttering during video playback.
3. Load and play an MP4 video in the same AVPlayer instance.
4. Notice that MP4 playback is smooth without any glitches.
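If it is useful to anyone investigating the same thing, here is a small diagnostics sketch (item is assumed to be the AVPlayerItem under test) that logs HLS error-log entries and stall events, so glitches can be correlated with segment-level problems:

import AVFoundation

let center = NotificationCenter.default
// Log each new entry in the HLS error log.
center.addObserver(forName: .AVPlayerItemNewErrorLogEntry, object: item, queue: .main) { _ in
    if let event = item.errorLog()?.events.last {
        print("HLS error \(event.errorStatusCode): \(event.errorComment ?? "")")
    }
}
// Log stall events with the current playback time.
center.addObserver(forName: .AVPlayerItemPlaybackStalled, object: item, queue: .main) { _ in
    print("Playback stalled at \(item.currentTime().seconds)s")
}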
The app needs to play remote videos. Sometimes it takes a very long time (~10 seconds) to load the media and play with AVPlayer.
So I use a timer to check, and try to play the next video if loading takes more than 5 seconds:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil]; // line 1
NSArray *keys = @[@"playable"];
mediaLoaded = NO;
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() { // line 2
    mediaLoaded = YES; // line 4
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.player replaceCurrentItemWithPlayerItem:[AVPlayerItem playerItemWithAsset:asset]];
        [self.player playImmediatelyAtRate:playSpeed];
    });
}];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    if (!mediaLoaded) {
        [self playNextVideo]; // line 3
    }
});
So the flow is: line 1 (of video 1) - line 2 (of video 1) - line 3 (if over 5 seconds and video 1 is not playing) - line 1 (of video 2) - ...
Now the problem: line 2 seems to block subsequent calls. Line 2 (for video 2) is only executed after line 4 (for video 1, after ~10 seconds), that is, after video 1's completion handler has run.
Can anybody give any insight?
Thx!
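For comparison, the same flow can be written with Swift concurrency (iOS 15+), where the timeout explicitly cancels the in-flight load before moving on. A sketch, with illustrative names:

import AVFoundation

// Load the asset's "playable" status with a 5-second timeout; on timeout,
// cancel the load (analogous to line 3) so it cannot block the next video.
func loadPlayableAsset(url: URL) async throws -> AVURLAsset {
    let asset = AVURLAsset(url: url)
    return try await withThrowingTaskGroup(of: AVURLAsset.self) { group in
        group.addTask {
            _ = try await asset.load(.isPlayable) // analogous to line 2
            return asset
        }
        group.addTask {
            try await Task.sleep(nanoseconds: 5 * NSEC_PER_SEC)
            asset.cancelLoading()
            throw CancellationError()
        }
        let winner = try await group.next()!
        group.cancelAll()
        return winner
    }
}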
Hi,
I'm recording videos frame by frame, and occasionally a sound plays (from an MP3 asset). I want to composite these sounds into the video at the correct times, but this doesn't work. I'm really pulling my hair out here. I've tried everything, including adding clips one after another and inserting silence in between (which allegedly pushes subsequent clips back), but nothing works.
Here, _currentTime is the current time according to the video frames added, which arrive at 20 Hz.
You can see I am adding silence long enough to cover the time from the end of the last audio clip to now, plus extra padding to contain the audio we are about to add. It doesn't matter if I remove this; it just doesn't work. Sometimes I can get two pieces of audio to play, but never a third, and usually only the first audio plays and then nothing after.
I'm completely stumped.
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == _outputSize else { return }
    let frameTime = CMTimeMake(value: Int64(_frameCount), timescale: _frameRate)
    if _videoInput?.isReadyForMoreMediaData == true {
        _pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        _frameCount += 1
        _currentTime = frameTime
    }
}
func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    try audioData.write(to: tempURL)
    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!
    let currentAudioTime = _currentTime.convertScale(duration.timescale, method: .default)
    // Fill the gap since the last clip with silence.
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: _lastAudioClipEndTime, end: currentAudioTime))
    // Pre-insert silence where the clip will go. Note: insertTimeRange(_:of:at:)
    // below inserts rather than overwrites, so this padding likely shifts the
    // clip later than intended.
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: currentAudioTime, end: CMTimeAdd(currentAudioTime, duration)))
    let timeRange = CMTimeRangeMake(start: .zero, duration: duration)
    try _audioTrack?.insertTimeRange(timeRange, of: audioTrack, at: currentAudioTime)
    _lastAudioClipEndTime = CMTimeAdd(currentAudioTime, duration)
    try FileManager.default.removeItem(at: tempURL)
    _audioClipTimeRanges.append(CMTimeRangeMake(start: _currentTime, duration: duration))
}
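One thing that may help narrow this down: dumping the composition track's segments after each insertion shows exactly where the clips and empty ranges actually landed. A debugging sketch, assuming _audioTrack is an AVMutableCompositionTrack:

import AVFoundation

// Print every segment of a composition track to verify where clips and
// empty (silent) ranges ended up.
func dumpSegments(of track: AVCompositionTrack) {
    for (index, segment) in track.segments.enumerated() {
        let range = segment.timeMapping.target
        print("segment \(index): start \(range.start.seconds)s, duration \(range.duration.seconds)s, empty: \(segment.isEmpty)")
    }
}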
Thank you,
-- B.
After updating to iOS 18, the display automatically dims while the iPhone is on a video call with LINE; this happens consistently.
Hello,
I used the following technical note to develop an app that records .mov files with SMPTE timecode:
https://developer.apple.com/library/archive/technotes/tn2310/_index.html
As a result, a timecode track is present within the .mov file (the other tracks are audio and video).
Unfortunately, QuickTime Player doesn't display timecode information.
Analyzer tools like mediainfo, or an online service such as https://media-analyzer.pro/app, show that the timecode track has a null duration (and so no "time code of last frame").
Example 1, the TC track as reported by mediainfo:
Other
ID : 3
Type : Time code
Format : QuickTime TC
Frame rate : 60.000 FPS
Time code of first frame : 17:39:59:00
Time code, stripped : Yes
Title : Core Media Time Code
Encoded date : 2024-09-10 15:39:46 UTC
Tagged date : 2024-09-10 15:39:59 UTC
Example 2, the timecode track as an atom dump:
0000569562  Quicktime Timecode #0
00007f6b8a  'trak' Track atom #1
00007f6b92  'tkhd' Track header atom #2
    size 92 (0x5C)
    type 'tkhd' (hex 74 6B 68 64)
    version 0
    flags 15 (0xF)
    creation_time 0xE30618C2, '2024-09-10 15:39:46'
    modification_time 0xE30618CF, '2024-09-10 15:39:59'
    track_ID 3
    reserved 0
    duration 0
    reserved [0, 0]
In each case, the duration is reported as null even though the recording lasts more than 20 seconds.
STEPS TO REPRODUCE
Use AVAssetWriter for video and audio.
Create an AVAssetWriterInput for timecode and associate it with the video track.
Just before stopping the recording, a sample buffer containing the SMPTE timecode is generated and added.
All tracks are marked as finished before stopping the recording with finishWritingWithCompletionHandler.
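For reference, a sketch of the association step as I understand it from TN2310 (names are illustrative, and the CMTimeCodeFormatDescription creation is elided). One detail worth double-checking: the duration of the single timecode sample buffer is what gives the track its duration, so appending it with a zero or invalid duration just before stopping would leave the track duration at 0.

import AVFoundation
import CoreMedia

// Wire a timecode input to the video input; tcFormat is assumed to be built
// with CMTimeCodeFormatDescriptionCreate as in the technote.
func addTimecodeInput(to writer: AVAssetWriter,
                      videoInput: AVAssetWriterInput,
                      tcFormat: CMTimeCodeFormatDescription) -> AVAssetWriterInput {
    let timecodeInput = AVAssetWriterInput(mediaType: .timecode,
                                           outputSettings: nil,
                                           sourceFormatHint: tcFormat)
    timecodeInput.expectsMediaDataInRealTime = true
    writer.add(timecodeInput)
    // The video track must reference the timecode track.
    videoInput.addTrackAssociation(withTrackOf: timecodeInput,
                                   type: AVAssetTrack.AssociationType.timecode.rawValue)
    return timecodeInput
}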
I have an app that lets the user draw a circle or line over a live video feed. There's a view for circles and a view for lines. These are displayed in ContentView, and their position in the ZStack is altered based on the toolbar selection by the user. Whichever view is on top of the stack is active and usable. That all works fine, but I want the user to be able to select any shape on screen, whether it is a circle or a line. My question is: what is the best practice for managing tool selection and drawing the shapes? I'm guessing I should draw all shapes onto one view, but I would like to avoid this, as I have each shape's logic in its respective view.
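For what it's worth, one common pattern is to keep every shape in a single model that owns selection and the active tool, so hit-testing works across shape kinds while rendering can stay split across views. A rough sketch, all names hypothetical:

import SwiftUI

// One model owns all shapes, the selection, and the active tool.
enum DrawnShape: Identifiable {
    case circle(id: UUID, center: CGPoint, radius: CGFloat)
    case line(id: UUID, start: CGPoint, end: CGPoint)

    var id: UUID {
        switch self {
        case .circle(let id, _, _), .line(let id, _, _):
            return id
        }
    }
}

final class DrawingModel: ObservableObject {
    enum Tool { case circle, line }
    @Published var shapes: [DrawnShape] = []
    @Published var selectedID: UUID?
    @Published var activeTool: Tool = .circle
}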
I have a Mac Catalyst video conferencing app that streams video using AVCaptureMultiCamSession. Everything has been working well for me across a variety of scenarios and hardware, but recently I got a report that virtual cameras / camera extensions do not seem to work, which I can reproduce 100% of the time using something like OBS's virtual camera. FaceTime and Photo Booth work okay with these virtual cameras. Although my app can see and add the external AVCaptureDevice, an AVCaptureSessionRuntimeError is posted when I start the session with a connection between the virtual camera and an AVCaptureVideoDataOutput (I don't get the error if I don't connect or add an output). The posted error is AVUnknown:
AVCaptureSessionRuntimeErrorNotification with Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x600001dcd680 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
Which doesn't tell me too much. I do see some fig assertions just above in Console though:
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:3964) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1591) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1418) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:3572) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:4518) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:483) - (err=-12780)
I've verified the formats are sane (the usual 420v 1080p 30fps I have everywhere else) and that the data output functions and such, but I'm a bit stuck as to where to go from here.
One thing that did stand out is that in the AVCamBarcode example I can see the virtual camera in that app's preview layer, but if I create an AVCaptureVideoDataOutput and add it to the session in that example, it fails in what looks like exactly the same way that my app does, with the same assertions.
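For reference, the minimal addition that reproduces it in AVCamBarcode looks roughly like this (a sketch; the delegate and queue are whatever the app already uses):

import AVFoundation

// With a virtual camera as the session's input, preview works until a video
// data output is attached; then AVCaptureSessionRuntimeError (-12780) fires.
func attachDataOutput(to session: AVCaptureSession,
                      delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
                      queue: DispatchQueue) {
    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate, queue: queue)
    if session.canAddOutput(output) {
        session.addOutput(output)
    }
}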
Does anyone have any advice? Thanks!
Hi Team,
When we use Screen Time and a daily limit exists for an app, there is an option to tap Ignore Limit → Ignore Limit for Today. After we select this option, if we then watch videos in a third-party app (YouTube), it hangs; after tapping the play button multiple times the video starts to play, but there is no sound, and the sound only comes back if we tap the previous or next button. Kindly look into this issue and address it.
Thanks,
Pemkumar S
I am working on a macOS app that uses AVFoundation to record the screen. During a recording, if I make a window full screen, AVFoundation stops capturing screen frames (or captures them at a very slow rate). In my logs I get the following error:
Error Domain=AVFoundationErrorDomain Code=-11844
Note that I have had instances where I could not reproduce the error, but they were rare.
The screen recording sometimes resumes normally if I switch desktops or minimize the full screen window.
Did anyone ever run across a similar issue, or know how to fix it?
When the command set in Final Cut Pro/Logic/Compressor/Motion is set to German, there is no way to set it to English.
Hello all,
This is my first post on the developer forums.
I am developing an app that records the screen of my app, using AVAssetWriter and RPScreenRecorder's startCapture.
Everything works as it should in most cases. But at seemingly random times, the generated file is only a few KB and is corrupted. There seems to be no pattern in the device or iOS version; it can happen on various phones and iOS versions.
The steps I have followed in order to create the file are:
Configuring the AssetWriter
videoAssetWriter = try? AVAssetWriter(outputURL: url!, fileType: AVFileType.mp4)
let size = UIScreen.main.bounds.size
let width = (Int(size.width / 4)) * 4
let height = (Int(size.height / 4)) * 4
let videoOutputSettings: Dictionary<String, Any> = [
    AVVideoCodecKey : AVVideoCodecType.h264,
    AVVideoWidthKey : width,
    AVVideoHeightKey : height
]
videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
videoInput?.expectsMediaDataInRealTime = true
guard let videoInput = videoInput else { return }
if videoAssetWriter?.canAdd(videoInput) ?? false {
    videoAssetWriter?.add(videoInput)
}
let audioInputSettings = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
audioInput?.expectsMediaDataInRealTime = true
guard let audioInput = audioInput else { return }
if videoAssetWriter?.canAdd(audioInput) ?? false {
    videoAssetWriter?.add(audioInput)
}
The urlForVideo function returns the URL to the documents directory, after appending and creating the folders needed. This part seems to be working as it should, as the directories are created and the video file exists in them.
Start the recording
if RPScreenRecorder.shared().isRecording { return }
RPScreenRecorder.shared().startCapture(handler: { [weak self] sample, bufferType, error in
    if let error = error {
        onError?(error.localizedDescription)
    } else {
        if !RPScreenRecorder.shared().isMicrophoneEnabled {
            RPScreenRecorder.shared().stopCapture { error in
                if let error = error { return }
            }
            onError?("Microphone was not enabled")
        } else {
            succesCompletion?()
            succesCompletion = nil
            self?.processSampleBuffer(sample, with: bufferType)
        }
    }
}) { error in
    if let error = error {
        onError?(error.localizedDescription)
    }
}
Process the sample buffers
guard CMSampleBufferDataIsReady(sampleBuffer) else { return }
DispatchQueue.main.async { [weak self] in
    switch sampleBufferType {
    case .video:
        self?.handleVideoBuffer(sampleBuffer)
    case .audioMic:
        self?.add(sample: sampleBuffer, to: self?.audioInput)
    default:
        break
    }
}

// The add function from above
fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
    if writerInput?.isReadyForMoreMediaData ?? false {
        writerInput?.append(sample)
    }
}

// The handleVideoBuffer function from above
fileprivate func handleVideoBuffer(_ sampleBuffer: CMSampleBuffer) {
    if self.videoAssetWriter?.status == AVAssetWriter.Status.unknown {
        self.videoAssetWriter?.startWriting()
        self.videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    } else {
        if (self.videoInput?.isReadyForMoreMediaData) ?? false {
            if self.videoAssetWriter?.status == AVAssetWriter.Status.writing {
                self.videoInput?.append(sampleBuffer)
            }
        }
    }
}
Finally, stopping the recording
func stopRecording(completion: @escaping (URL?, URL?, Error?) -> Void) {
    RPScreenRecorder.shared().stopCapture { error in
        if let error = error {
            completion(nil, nil, error)
            return
        }
        self.finish { videoURL, _ in
            completion(videoURL, nil, nil)
        }
    }
}

// The finish function mentioned above
fileprivate func finish(completion: @escaping (URL?, URL?) -> Void) {
    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    finishRecordVideo {
        dispatchGroup.leave()
    }
    dispatchGroup.notify(queue: .main) {
        print("Finish with url:\(String(describing: self.urlForVideo()))")
        completion(self.urlForVideo(), nil)
    }
}

// The finishRecordVideo mentioned above
fileprivate func finishRecordVideo(completion: @escaping () -> Void) {
    videoInput?.markAsFinished()
    audioInput?.markAsFinished()
    videoAssetWriter?.finishWriting {
        if let writer = self.videoAssetWriter {
            if writer.status == .completed {
                completion()
            } else if writer.status == .failed {
                // Print the error to find out what went wrong
                if let error = writer.error {
                    print("Video asset writing failed with error: \(error.localizedDescription). Url: \(writer.outputURL.path)")
                } else {
                    print("Video asset writing failed, but no error description available.")
                }
                completion()
            } else {
                completion()
            }
        }
    }
}
What could be the reason for the corrupted files being generated? This issue has never happened on my own devices, so there is no way to debug it using Xcode. Also, there are no errors popping up in the logs.
Can you spot any issues in the code that could cause this kind of problem? Do you have any suggestions on the problem at hand?
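In case it helps with debugging: since the failure only surfaces in finishWriting, one option is to surface the writer's status and error at the first failed append instead. A sketch using the same property names as above:

// Log the writer's status and error as soon as any append fails, rather than
// discovering the failure only when finishing the file.
fileprivate func appendChecked(_ sample: CMSampleBuffer, to input: AVAssetWriterInput?) {
    guard let input = input, input.isReadyForMoreMediaData else { return } // buffer dropped
    if !input.append(sample), let writer = videoAssetWriter {
        print("Append failed: status \(writer.status.rawValue), error: \(String(describing: writer.error))")
    }
}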
Thanks
Hi there,
I have some code that's been working fine for the last few versions of iOS, macOS, and the other platforms, and it now causes a runtime crash on iOS 18/macOS 15, etc.
I have an actor called Player, which is basically a big wrapper around an AVPlayer. It all gets compiled down to a framework, and my clients use it by dropping it into their video player app code. It handles everything needed for them to talk to our media infrastructure, and it handles telemetry.
It has a property called avplayer, which is an AVPlayer, created in init().
It has a function called load(_ avPlayerItem: AVPlayerItem) which clients use to load a new video into the player.
The offending code (which used to work!) looks like this:
Task { @MainActor in
    avplayer.replaceCurrentItem(with: avPlayerItem)
}
No warnings in Xcode. When you run it, it crashes on iOS 18 and macOS 15 with this error in the debugger:
Incorrect actor executor assumption
I thought, "Okay, well maybe replaceCurrentItem has changed and doesn't need to be on the main actor anymore." But even if you call this outside of a Main Actor-scoped task:
avplayer.replaceCurrentItem(with: avPlayerItem)
...it still crashes the exact same way.
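One pattern that sidesteps the unstructured task is to read the actor's state while still on the actor and then hop to the main actor explicitly. A sketch of a possible workaround, not a confirmed fix:

// Inside the Player actor:
func load(_ avPlayerItem: AVPlayerItem) async {
    let player = avplayer // actor-isolated read, performed on the Player actor
    await MainActor.run {
        player.replaceCurrentItem(with: avPlayerItem)
    }
}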
Does anyone have any ideas? I'm under some heavy pressure here to get this working and I don't even know where to start with this.
Big thanks in advance.
Hi, when recording videos with AVAssetWriter, the capture fps (camera output fps) is OK, but the final video's fps is lower. The reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false.
Yes, I have read the documentation many times; it says you need to set expectsMediaDataInRealTime to true, and so on...
I have really been tortured by this problem for a long time. How can I debug this problem? Any advice?
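A small instrumentation sketch that may help quantify the problem (videoInput is assumed to be the AVAssetWriterInput in question):

import AVFoundation

var droppedFrameCount = 0

// Count capture callbacks that arrive while the input is busy, to see how
// many frames are actually dropped and when.
func appendVideoSample(_ sampleBuffer: CMSampleBuffer, to videoInput: AVAssetWriterInput) {
    guard videoInput.isReadyForMoreMediaData else {
        droppedFrameCount += 1
        print("Dropped \(droppedFrameCount) frames so far")
        return
    }
    videoInput.append(sampleBuffer)
}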
I have a FairPlay-encrypted HLS stream and play the video in an AVPlayer. I want to generate scrubbing thumbnails using AVAssetImageGenerator.
I am able to generate thumbnails for clear streams, but I get errors for protected content.
How can I generate thumbnails for protected content?
func getImageThumbnail(forTime: CMTime) {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    generator.cancelAllCGImageGeneration()
    generator.generateCGImagesAsynchronously(forTimes: [NSValue(time: forTime)]) { [weak self] requestedTime, image, actualTime, result, error in
        if let error = error {
            print("Error generate: \(error.localizedDescription)")
            return
        }
        if let image = image {
            DispatchQueue.main.async {
                let image = UIImage(cgImage: image).jpegData(compressionQuality: 1.0)
                self?.playerImg.image = UIImage(data: image!)
            }
        }
    }
}
Hello,
I recently started integrating HLS downloads into my application by using AVAssetDownloadTask and AVAssetDownloadConfiguration. I took an example from the documentation as a basis, with only one small difference: the minimum target for my application is iOS 16, so I replaced urlSession(_:assetDownloadTask:willDownloadTo:) with urlSession(_:assetDownloadTask:didFinishDownloadingTo:).
And I encountered the following issue: after pausing a download and resuming it later, progress reporting no longer works as expected.
Could you, please, help me with this? What are the right approaches to implementing pause and progress tracking?
Some details:
I used devices with iOS 16.0.2 and 17.6.1 for testing.
There was no code in the example that pauses the download and resumes it, so I used the suspend and resume methods.
Also, I have tried to track download progress using two different approaches:
Using task.progress.observe(\.fractionCompleted) { ... }, which was presented in the example. In this scenario, after a pause, the observation callback is only called once, when the download has completed, despite the fact that data is being successfully downloaded over the network.
Using urlSession(_:assetDownloadTask:didLoad:totalTimeRangesLoaded:timeRangeExpectedToLoad:) and calculating progress as totalTimeRangesLoaded.reduce(0.0) { $0 + CMTimeGetSeconds($1.timeRangeValue.duration) / CMTimeGetSeconds(timeRangeExpectedToLoad.duration) }. In this scenario, I have noticed that the result of the calculation does not always increase; sometimes there are outliers. Example logs: 68%, 69%, 70%, 72%, 63%, 65%, 66%, 69%, 70%, 71%, 72%. Such fluctuations are most easily reproduced when I try to resume the download after a pause, but sometimes they occur spontaneously. It's important to mention that this method is marked as deprecated, perhaps for this reason.
In both cases download is successful, the problem is with progress reporting only.
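For context, the first approach is essentially the snippet below (a sketch; task is the AVAssetDownloadTask, and the observation must be retained). As described above, after suspend()/resume() the callback only fired once, at completion:

import Foundation

let observation = task.progress.observe(\.fractionCompleted) { progress, _ in
    print("Progress: \(Int(progress.fractionCompleted * 100))%")
}
task.resume()
// Later: task.suspend(), then task.resume() to reproduce the issue.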
Full version of code can be found here.
- (AVPlayerViewController *)avPlayerVC {
    if (!_avPlayerVC) {
        _avPlayerVC = [[AVPlayerViewController alloc] init];
        _avPlayerVC.videoGravity = AVLayerVideoGravityResizeAspectFill;
        _avPlayerVC.showsPlaybackControls = NO;
        [self addSubview:_avPlayerVC.view];
        [_avPlayerVC.view mas_makeConstraints:^(MASConstraintMaker *make) {
            make.edges.mas_equalTo(0);
        }];
        [self sendSubviewToBack:_avPlayerVC.view];
    }
    return _avPlayerVC;
}
I add this in a cell, and the UI freezes completely. This only happens on iOS 18.
AVKit provides the SwiftUI view VideoPlayer, and allows you to add an interactive overlay. But that overlay is normally placed behind the system-provided playback controls.
Is there any way to suppress those controls, without resorting to wrapping AVPlayerView?
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
(this line repeats continuously)
My project uses AVPlayer (AVPlayerViewController) to play video. There are continuous warning logs while playing, and when the player is deallocated it prints the information below.
<<<< PlayerRemoteXPC >>>> remoteXPCItem_handleSetProperty signalled err=-12860 (kFigPlayerError_ParamErr) (propertyValue should be MTAudioProcessingTap) at FigPlayer_RemoteXPC.m:2760
This only happens on iOS 18, and I have no idea what it means. I cannot find any information about FigPlayerInterstitial or the other symbols.