I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported.
Do I need to pay a license fee to MPEG LA for using the H.264 format, or are these fees already covered by Apple?
I’ve read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware components), but I haven’t found any explicit confirmation of this point in the various documentation or contracts... Did I miss something?
When is playbackBufferEmpty triggered? Why does the observer report this property as YES when the app returns to the foreground from the background, even though the buffer is not empty?
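For reference, the observation is set up roughly like this (a minimal sketch; the item parameter stands in for the AVPlayerItem being played):

import AVFoundation

// A minimal KVO sketch; keep the returned observation alive for as long as
// updates are needed.
func observeBufferState(of item: AVPlayerItem) -> NSKeyValueObservation {
    return item.observe(\.isPlaybackBufferEmpty, options: [.initial, .new]) { item, _ in
        print("isPlaybackBufferEmpty: \(item.isPlaybackBufferEmpty)")
    }
}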
There are significant crash reports coming from iOS 18 users regarding the AVKit framework, starting from this line: [AVPlayerController _observeValueForKeyPath:oldValue:newValue:], which seems to come from Apple's internal SDK. We found two kinds of crashes:
UI modification on background thread
From the stack trace, it seems that when AVPictureInPictureController is being deallocated and its view is being removed from its superview, the code somehow executes on a background thread, because the line _AssertAutoLayoutOnAllowedThreadsOnly is highlighted just before the crash.
But I've checked our code that works with AVPictureInPictureController: everywhere we deallocate the object, it is on the main thread, inside viewDidLoad and deinit of a UIViewController subclass. From the logs, the crash seems to happen when the user tries to open another content while the PiP player is active, so the current PiP instance is replaced with a new one. My suspicion is that the observation logic inside AVPlayerController could be the hint to this issue; something is probably broken there, since this issue happens across our app versions and only for iOS 18 users.
Unfortunately, I have not been able to reproduce this issue yet. One of my colleagues reproduced it once, but hasn't been able to do it again since. The reports keep rising each day, up to 1.3k events in the last 30 days now.
Over release object
This one has fewer reports than the first, but I decided to include it since it might hold information relevant to the first crash: the start of the stack trace is similar, the crash timing is similar (we deallocate the existing AVPictureInPictureController and later replace it with a new one), it is found only on iOS 18, and it also points to [AVPlayerController _observeValueForKeyPath:oldValue:newValue:]. I have been unable to reproduce this one so far either.
Both issues happen on both iPhone and iPad.
We'd appreciate any advice on what we can do to avoid this in the future, and any hint on why it could happen.
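For reference, here is a sketch of how we swap the controller today, always on the main thread (the class and property names are ours, not from the crash report):

import Foundation
import AVKit

final class PlayerCoordinator {
    private var pipController: AVPictureInPictureController?

    // We only ever swap the controller on the main thread, so the old
    // instance should deallocate on the main thread as well.
    func replacePictureInPicture(with newController: AVPictureInPictureController?) {
        assert(Thread.isMainThread)
        pipController?.delegate = nil   // detach before the old instance is released
        pipController = newController
    }
}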
I have reported this issue with bug number: FB15620734
I also attached one sample crash report for each of the crashes here.
non ui thread access.crash
over release.crash
Hello, I will use AVFoundation's AVAssetWriter and AVPlayer for H.264 and H.265 encoding and decoding in my app. It will be used commercially, so I would like to know if I need to pay any licensing fees for H.264 and H.265 encoding and decoding.
I observe significant performance differences when encoding a video in mp4 format (H264). The code I use is standard (using AVAssetWriter, AVAssetWriterInput...).
Here is what I notice when I run the same code on different platforms:
On an iPhone, the video is encoded in 3 seconds (iPhone 13, 14, 15, 16, Pro...).
On a Mac equipped with an M2 Pro, the video is encoded in 50 seconds.
On a Mac with an Intel processor (2.3 GHz 18-core Intel Xeon W), the video is encoded in 2 minutes.
Encoding on an iPhone is very fast thanks to hardware acceleration. However, I don't understand why I don't get similar performance with a Mac M2 Pro, which is also equipped with a dedicated hardware-acceleration component (the H264 media engine).
Is hardware acceleration disabled on a Mac?
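One way to check, I suppose, is to ask VideoToolbox directly whether a hardware encoder was selected (a diagnostic sketch; the 1920x1080 size is arbitrary, and the encoder-specification key is macOS-only):

import VideoToolbox

// Create an H.264 compression session that asks for hardware acceleration,
// then read back whether VideoToolbox actually chose a hardware encoder.
var session: VTCompressionSession?
let spec = [kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as String: true] as CFDictionary
let status = VTCompressionSessionCreate(allocator: nil,
                                        width: 1920,
                                        height: 1080,
                                        codecType: kCMVideoCodecType_H264,
                                        encoderSpecification: spec,
                                        imageBufferAttributes: nil,
                                        compressedDataAllocator: nil,
                                        outputCallback: nil,
                                        refcon: nil,
                                        compressionSessionOut: &session)
if status == noErr, let session = session {
    var value: CFTypeRef?
    VTSessionCopyProperty(session,
                          key: kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
                          allocator: nil,
                          valueOut: &value)
    print("Using hardware encoder: \((value as? Bool) ?? false)")
    VTCompressionSessionInvalidate(session)
}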
We captured a spatial video with an iPhone 15 Pro.
When we try to export the video with AVAssetExportSession and AVAssetExportPresetMVHEVC960x960, it always goes to the failed state, and exportSession.error?.localizedDescription yields an "Operation Stopped" error.
The code implementation is straightforward, and other HEVC files work well. This problem occurs only with MV-HEVC files.
func exportSpatialVideo(videoFilePath: String, outputUrl: URL) {
    let url = URL(fileURLWithPath: videoFilePath)
    let asset = AVAsset(url: url)
    print(asset.description)
    print(asset.tracks.first?.mediaType.rawValue ?? "no tracks")
    let preset = "AVAssetExportPresetMVHEVC960x960"
    let exportSession = AVAssetExportSession(asset: asset, presetName: preset)!
    exportSession.outputURL = outputUrl
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.outputFileType = AVFileType.mov
    exportSession.exportAsynchronously {
        switch exportSession.status {
        case .unknown:
            print("Unknown Error")
        case .waiting:
            print("waiting ...")
        case .exporting:
            print("exporting ...")
        case .completed:
            print("completed.")
        case .failed:
            print("failed. \(String(describing: exportSession.error?.localizedDescription))")
        case .cancelled:
            print("cancelled.")
        @unknown default:
            break
        }
    }
}
Is there any solution for this?
In iOS, when I use AVPlayerViewController to play back a slow motion video, it has a "ramp-up" stage at the start and a "ramp-down" stage at the end, and the video plays at the normal speed (i.e. not slow motion) during these stages.
My question is: are these non-slow-motion stages defined in the video file itself (e.g., some kind of metadata)? Or is it just a playback approach used by AVPlayerViewController?
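For what it's worth, slow-motion videos delivered by the Photos framework often arrive as an AVComposition whose track segments encode the speed ramps, which can be inspected (a sketch, assuming iOS 15+ async property loading):

import AVFoundation

// Inspect the time mapping of each video segment: a rate near 1.0 is the
// ramp-in/ramp-out, and a rate below 1.0 is the slow-motion part.
func inspectRamps(in asset: AVAsset) async throws {
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    for segment in try await track.load(.segments) {
        let mapping = segment.timeMapping
        let rate = mapping.source.duration.seconds / mapping.target.duration.seconds
        print("segment plays at rate \(rate)")
    }
}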
Thanks!
I am making an app that can take two videos and then stitch them together on the screen (one video on the top half and the other on the bottom half).
This is achieved with AVMutableComposition, and then I am using AVAssetExportSession to export a mp4 file out:
guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1920x1080) else {
    return
}
export.exportAsynchronously {
    // ...
}
When the two input videos are around 1 GB each, starting the export session immediately increases memory usage by ~2 GB, as if it loads the input files into memory right away (my guess), and at some point my app is killed for using too much memory.
Is there a way to avoid this upfront memory usage and/or avoid getting killed?
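For context, the composition is built roughly like this (a simplified sketch; topAsset and bottomAsset are placeholders, and the AVMutableVideoComposition that scales and positions the two tracks into the top and bottom halves is omitted):

import AVFoundation

// Build a composition with one video track per input. insertTimeRange
// references the source files; it does not copy media data into memory.
func makeStackedComposition(topAsset: AVAsset, bottomAsset: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    for asset in [topAsset, bottomAsset] {
        guard let source = asset.tracks(withMediaType: .video).first,
              let destination = composition.addMutableTrack(withMediaType: .video,
                                                            preferredTrackID: kCMPersistentTrackID_Invalid)
        else { continue }
        try destination.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                        of: source,
                                        at: .zero)
    }
    return composition
}

As far as I understand, the composition itself only references the files, so the ~2 GB jump seems to come from the export session rather than from building the composition.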
I think I have the simplest possible Mac app trying to see if I can have VideoPlayer work in an Xcode Preview. It works in an iOS app project. In a Mac app project it builds and runs. But if I preview in Xcode it crashes.
The diagnostic says:
[Remote] Unknown Error: The operation couldn’t be completed. XPC error received on message reply handler

BSServiceConnectionErrorDomain (3):
NSLocalizedFailureReason: XPC error received on message reply handler
BSErrorCodeDescription: OperationFailed
The code I'm using is the exact code from the VideoPlayer documentation page. See this link.
Any ideas about this XPC error, and how to work around it?
I'm using Xcode 16.0 on macOS 14.6.1
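For reference, the code is essentially of this shape (the URL here is a placeholder, not the exact one from the documentation page):

import SwiftUI
import AVKit

struct ContentView: View {
    var body: some View {
        // Placeholder URL; any playable remote asset shows the same behavior.
        VideoPlayer(player: AVPlayer(url: URL(string: "https://example.com/movie.mp4")!))
    }
}

#Preview {
    ContentView() // this is the preview that crashes in the Mac app target
}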
Xcode 16:

VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
    CM_NULLABLE CFAllocatorRef allocator,
    CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) API_AVAILABLE(macos(10.8), ios(16.0), tvos(16.0), visionos(1.0)) API_UNAVAILABLE(watchos);

Xcode 15:

VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
    CM_NULLABLE CFAllocatorRef allocator,
    CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) VT_AVAILABLE_STARTING(10_8);
I’m currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I’m using the AVFoundation framework, specifically AVURLAsset, but I’m having trouble determining how to correctly load and handle stereoscopic videos.
I would like to know the correct approach for this.
Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well...
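For reference, a minimal sketch of detecting a stereoscopic (MV-HEVC) asset, assuming iOS 17+ where the containsStereoMultiviewVideo characteristic is available:

import AVFoundation

// Returns true if any track carries stereo multiview video.
func isStereoscopic(_ url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let tracks = try await asset.loadTracks(withMediaCharacteristic: .containsStereoMultiviewVideo)
    return !tracks.isEmpty
}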
Thank you in advance for your help!
Best,
Lau
Hi all,
I'm working on a particle system. I got it to work using drawn circles, and now I want to replace the circle with an image. I'm trying to do so in the draw section, but I'm not sure that's the right place.
Any suggestions for code to:
connect the image from the bin to Xcode
replace the particles with the image(s) (see the sketch after this list)
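For example, assuming the particle system is SpriteKit's SKEmitterNode, the swap might look like this ("spark" is a placeholder name for an image added to the asset catalog):

import SpriteKit

// Draw each particle with a texture made from a catalog image instead of a drawn circle.
func makeImageEmitter() -> SKEmitterNode {
    let emitter = SKEmitterNode()
    emitter.particleTexture = SKTexture(imageNamed: "spark")
    emitter.particleBirthRate = 100
    emitter.particleLifetime = 2
    emitter.particleSize = CGSize(width: 16, height: 16)
    return emitter
}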
Thanks kindly!
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and utilize the detected data (such as coordinates) for each one.
I have a question regarding how background thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.) or the weight of the operations inside captureOutput affect the number of frames that can be successfully processed?
Specifically, I would like to know if all frames will be processed sequentially with a delay due to heavy processing in the background, or if some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated.
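For reference, the two delegate paths involved look like this (a sketch; with alwaysDiscardsLateVideoFrames left at its default of true, frames that arrive while the delegate is still busy are dropped rather than queued):

import AVFoundation

final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Called for every frame the delegate queue keeps up with.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Heavy work here delays the next callback; late frames are discarded.
    }

    // Called when a frame is dropped because processing fell behind.
    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        print("dropped a frame")
    }
}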
Thank you!
Hello,
First, some version and software details:
Software: iOS 18.1
Hardware: iPhone 14 Pro Max and later
Xcode: 16.0
Summary: AVAssetReader is not concatenating a video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button.
Details:
Now, let's talk about the app.
Basically, I’m developing an app that generates a video with the following features:
My app will create an output video that is split into an opening scene followed by a fully blue screen.
The opening scene will be taken from a video I choose from my gallery.
I will read the opening video using AVAssetReader as usual.
After the opening scene, I will use the content of a text read by AVSpeechSynthesizer.write().
After the opening scene, the synthesized audio will start playing while the blue screen is displayed.
All of this is already defined in the attached project.
Each project file has a comment at the beginning introducing its content.
How to test:
Write something in the field above the "Generate Video" button. For example, type "Hello, world!"
Then, press the "Library" button and select a video from the gallery, about 30 seconds long.
That’s it. Press the "Generate Video" button.
The result I’ve experienced is a crash or failure to generate the video.
Practical example of what I want to achieve:
Suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White."
Then, I paste the "Snow White" story into the field above the "Generate Video" button.
The output video should contain me saying, "I’m going to tell you the story of Snow White."
After that, the AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen.
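For context, the synthesis step follows the shape of AVSpeechSynthesizer.write(_:toBufferCallback:), which delivers audio buffers instead of playing them aloud (a sketch; the names are mine):

import AVFoundation

let synthesizer = AVSpeechSynthesizer() // keep a strong reference for the duration of synthesis

// Collect synthesized speech as PCM buffers that can later be appended to the video's audio track.
func synthesize(text: String, handleBuffer: @escaping (AVAudioPCMBuffer) -> Void) {
    let utterance = AVSpeechUtterance(string: text)
    synthesizer.write(utterance) { buffer in
        // The callback is invoked repeatedly; an empty buffer signals the end.
        if let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 {
            handleBuffer(pcm)
        }
    }
}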
I look forward to a solution.
Thank you very much!
convertToCMSampleBuffer.swift
convertToPixelBuffer.swift
createInputs.swift
createVideo.swift
test.swift
saveVideo.swift
TestApp.swift
editingVideo.swift
sampleReaderProvider.swift
misc.swift
sampleProvider.swift
Hi everyone,
I’ve encountered an issue with the showsPlaybackControls property in AVPlayerViewController after updating to iOS 18. Even though it’s set to true, the native playback controls (play, pause, etc.) are no longer appearing as they used to in previous iOS versions. This behavior was consistent and worked perfectly prior to iOS 18.
Additionally, I’m seeing the same problem when using the VideoPlayer in SwiftUI. The native controls that should appear by default seem to have vanished after the update. Has anyone else experienced this? Is there any workaround or additional configuration required to restore the native controls?
Any help or insights would be appreciated. Thanks!
struct CustomPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
        playerController.player = player
        playerController.showsPlaybackControls = true
        player.play()
    }

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        return AVPlayerViewController()
    }
}
I have an intra-oral video device that's supported by Apple's AVCaptureDevice. I can use the AV classes to connect to the device and get video. However, this device has a button that's used to acquire still images from the video stream. I can't use the IOUSBDeviceInterface to do an asynchronous read, because the Apple driver has the device opened exclusively. How do I go about receiving the button event in this scenario? I know which pipe to read, based on a bus analyzer when I run this on Windows, I just need to know how to access that pipe when the device is opened by another process.
My app is live on the App Store. Users on the iPhone 16 Pro Max are getting an "Operation Stopped" error while combining videos and audio, only and specifically on the iPhone 16 Pro Max; on every other device it works fine. And when I add AVAssetExportPresetPassthrough, it is able to combine the videos and audio, but it does not respect the encoding and the output has no audio.
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:composition];
if ([compatiblePresets containsObject:AVAssetExportPresetHighestQuality]) {
    presetName = AVAssetExportPresetHighestQuality;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1920x1080]) {
    presetName = AVAssetExportPreset1920x1080;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1280x720]) {
    presetName = AVAssetExportPreset1280x720;
} else {
    presetName = AVAssetExportPresetPassthrough;
}
I'm trying to create code to generate an fcpxml file so I can automate Final Cut Pro timeline (project) creation. Here's an xml element that FCP successfully imports (and successfully creates a project/timeline).
<project name="2013-08-09 19_23_07 (id).mov">
<sequence format="r1">
<spine>
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="146173027/60000s" duration="871871/60000s" tcFormat="DF" audioRole="dialogue"></asset-clip>
</spine>
</sequence>
</project>
The XML element example above was generated by exporting a simple timeline with a single clip. The problem I'm having is that the media asset has timecode, which gives a start time relative to that timecode. When I try to remove the timecode attributes and change the start time to "0s",
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="0s" duration="871871/60000s" audioRole="dialogue"></asset-clip>
FCP complains with the import error:
2013-08-09 19_23_07 (id).fcpxml Invalid edit with no respective media. (/fcpxml[1]/project[1]/sequence[1]/spine[1]/asset-clip[1])
I guess the question is, does AVAsset provide a way to get the timecode information and the timecode based start offset, or is there a way to tell FCP to use a default start time independent of timecode?
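As far as I can tell, AVAsset does expose timecode as a track with media type .timecode, whose first sample carries a frame number; a reading sketch (error handling trimmed; the big-endian frame-number layout is my understanding of the 'tmcd' sample format):

import AVFoundation
import CoreMedia

// Read the starting frame number from an asset's timecode ('tmcd') track, if any.
func startTimecodeFrameNumber(of asset: AVAsset) async throws -> UInt32? {
    guard let timecodeTrack = try await asset.loadTracks(withMediaType: .timecode).first else {
        return nil // no timecode track in the asset
    }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: timecodeTrack, outputSettings: nil)
    reader.add(output)
    reader.startReading()
    guard let sample = output.copyNextSampleBuffer(),
          let data = CMSampleBufferGetDataBuffer(sample) else { return nil }
    var frame: UInt32 = 0
    CMBlockBufferCopyDataBytes(data, atOffset: 0, dataLength: MemoryLayout<UInt32>.size, destination: &frame)
    return UInt32(bigEndian: frame)
}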
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()
        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
Is there an alternative method to set the now playing artwork?
1. In the FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When using NSWindow, you cannot set [Window setLevel:NSFloatingWindowLevel].
3. How can I keep the window in front of Final Cut Pro without affecting the normal use of Final Cut Pro?