I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported.
Do I need to pay a license fee to MPEG LA for using the H.264 format, or are these fees already covered by Apple?
I’ve read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware components), but I haven’t found any explicit confirmation of this point in the various documentation or contracts... Did I miss something?
Hello, we have an HLS streaming app and we use AVPlayer for the HLS stream. We want to implement a dynamic resolution feature driven by the user's selection. For example, if the user selects 1080p, they should only be served 1080p. We have tried the preferredMaximumResolution and preferredPeakBitRate properties, but AVPlayer does not enforce them: setting preferredMaximumResolution = CGSize(width: 1920, height: 1080) does not force the player to stay on the 1080p profile, and it still drops to 720p, which we do not want when the user has selected 1080p. Is there any method to force a rendition even if the stream stalls? Thank you in advance.
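For reference, this is roughly what we are setting (a minimal sketch; the manifest URL and the bit-rate cap are placeholders, and these properties act as upper bounds rather than forcing a specific rendition):

import AVFoundation

// Sketch: cap the variant selection on the player item.
let item = AVPlayerItem(url: URL(string: "https://example.com/master.m3u8")!) // placeholder URL
item.preferredMaximumResolution = CGSize(width: 1920, height: 1080)
item.preferredPeakBitRate = 6_000_000 // example cap, in bits per second

let player = AVPlayer(playerItem: item)
player.play()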
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
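For the simple horizontal mirror above, for example, such a tile-dependent ROI might look roughly like this (a sketch, written in Swift for brevity and reusing the src and mirrorsKernel names from the Objective-C code, assuming the source extent's origin is at zero):

import CoreImage

let sourceExtent = src.extent

let output = mirrorsKernel.apply(
    extent: sourceExtent,
    roiCallback: { _, destRect in
        // For a horizontal mirror, the source pixels needed to render destRect
        // are destRect's mirror image about the vertical midline of the source.
        CGRect(x: sourceExtent.width - destRect.maxX,
               y: destRect.minY,
               width: destRect.width,
               height: destRect.height)
    },
    arguments: [src]
)

Whether the same idea extends cleanly to the half- and quadrant-mirroring kernels is exactly what we have not yet tried.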
Any thoughts/input appreciated, thanks!
I’m currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I’m using the AVFoundation framework, specifically AVURLAsset, but I’m having trouble determining how to correctly load and handle stereoscopic videos.
I would like to know:
Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well...
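For context, this is roughly what I'm attempting so far (a sketch; the file path is a placeholder, and checking containsStereoMultiviewVideo is my guess at how to detect a stereoscopic track):

import AVFoundation

// Placeholder path; in the real app the URL comes from elsewhere.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/spatialVideo.mov"))

Task {
    let videoTracks = try await asset.loadTracks(withMediaType: .video)
    if let track = videoTracks.first {
        let characteristics = try await track.load(.mediaCharacteristics)
        // My assumption: a stereoscopic/spatial (MV-HEVC) track reports this characteristic.
        print("Stereo multiview:", characteristics.contains(.containsStereoMultiviewVideo))
    }
}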
Thank you in advance for your help!
Best,
Lau
This idea would be cool:
When people finish recording a video and later realize there's something else worth capturing, they can only create a second clip. But what if it were possible to reopen the first video and continue recording from where they left off? This would be a great convenience for many people.
Hello,
My goal is to display the live video stream from a HomeKit camera in my Swift/SwiftUI iOS app.
Using HMHomeManager, I have retrieved the HMHome object and found the camera as an HMAccessory object. Within this, I have located the HMCameraProfile and, through it, the HMCameraStreamControl. So far, I believe I understand the object model.
However, I do not know how to display the stream.
I have come across the CameraView, but I am not sure how to access the required HMCameraSource for the CameraView from the aforementioned object model.
Is my current approach fundamentally incorrect?
I would appreciate it if you could explain the relationships involved and possibly point me to an example project.
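For reference, my current attempt looks roughly like this (a sketch; I assume HMCameraView is the view meant here, that the stream must be started before its cameraStream can be used as the source, and that a SwiftUI app would still need a UIViewRepresentable wrapper around it):

import UIKit
import HomeKit

final class CameraStreamViewController: UIViewController, HMCameraStreamControlDelegate {
    let cameraView = HMCameraView()
    var streamControl: HMCameraStreamControl?   // obtained from the HMCameraProfile

    func startStream(with profile: HMCameraProfile) {
        streamControl = profile.streamControl
        streamControl?.delegate = self
        streamControl?.startStream()
    }

    func cameraStreamControlDidStartStream(_ cameraStreamControl: HMCameraStreamControl) {
        // HMCameraStream conforms to HMCameraSource, so it can drive the view.
        cameraView.cameraSource = cameraStreamControl.cameraStream
    }
}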
Best regards,
Matthias
While trying to control the following two scenes in 1 ImmersiveSpace, we found the following memory leak when we background the app while a stereoscopic video is playing.
ImmersiveView's two scenes:
- Scene 1 has 1 toggle button
- Scene 2 has the same toggle button, plus a 180 degree skysphere playing a stereoscopic video
Attached are the files and images of the memory leak as captured in Xcode.
To replicate this memory leak, follow these steps:
1. Create a new visionOS app using the Xcode template as illustrated below.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
3. Replace all Swift files with those you will find in the attached texts.
4. In ImmersiveView, replace the stereoscopic video to play with a large 3D 180 degree video of your own bundled in your project.
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Display the memory use by pressing command+7 and selecting Memory in order to view the live memory graph.
7. Press the first immersive space's button "Open ImmersiveView".
8. Press the second immersive space's button "Show Immersive Video".
9. Background the app.
10. When the app tray appears, foreground the app by selecting it.
11. The first immersive space should appear.
12. Repeat steps 7, 8, 9, and 10 multiple times.
13. Observe the memory use going up; the graph should look similar to the illustration below.
In ImmersiveView, upon backgrounding the app, I do:
- a reset method to clear the video's memory
- a dismiss of the Immersive Space containing the video (even though upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)
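Concretely, that backgrounding handling looks roughly like this (a sketch; resetVideoPlayer() stands in for the actual cleanup in the attached files, and the dismiss call is the one that triggers the warning mentioned above):

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    let player = AVPlayer()

    var body: some View {
        RealityView { _ in
            // Skysphere + VideoMaterial setup lives here in the attached files.
        }
        .onChange(of: scenePhase) { _, newPhase in
            guard newPhase == .background else { return }
            resetVideoPlayer()
            Task { await dismissImmersiveSpace() }
        }
    }

    func resetVideoPlayer() {
        // Simplified cleanup: stop playback and drop the current item.
        player.pause()
        player.replaceCurrentItem(with: nil)
    }
}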
Am I not releasing the memory correctly?
Or, is there really a memory leak issue in either SwiftUI's ImmersiveSpace or in AVFoundation's AVPlayer upon background of an app?
App file TestVideoLeakOneImmersiveView
First ImmersiveSpace file InitialImmersiveView
Second ImmersiveSpace File ImmersiveView
Skysphere Model File Immersive180VideoViewModel
File AppModel
[[PHImageManager defaultManager]
    requestAVAssetForVideo:asset
                   options:videoOptions
             resultHandler:^(AVAsset *_Nullable avAsset,
                             AVAudioMix *_Nullable audioMix,
                             NSDictionary *_Nullable info) {
        if ([avAsset isKindOfClass:[AVURLAsset class]]) {
            AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
            NSURL *videoURL = urlAsset.URL;
            mediaInfo[@"path"] = videoURL.absoluteString;
        } else {
            // Failed to get video asset
            completion(nil);
        }
    }];
Before iOS 18, I was able to access the AVAsset's video URL using the method above, but starting with iOS 18, the following error appears:
'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
I'm attempting to use AVExternalStorageDevice.requestAccess on iOS 18 using Xcode 16.
When calling requestAccess, a dialog does appear, but the completionHandler closure is never called to indicate whether access was granted. If using the async version, the function just never returns.
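For reference, the call itself is essentially just this (a sketch of the async variant; the completion-handler variant behaves the same way for me):

import AVFoundation

Task {
    // On iOS 18 this await never returns for me, and the dialog reappears on every call.
    let granted = await AVExternalStorageDevice.requestAccess()
    print("External storage access granted:", granted)
}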
Calling requestAccess also results in a mediaServicesWereReset (-11819) error without fail.
Supposedly, "the system only presents the dialog to a person the first time your app calls the method." That also doesn't appear to be the case. The dialog appears every time requestAccess is called, regardless of previous invocations and whether "Allow" or "Don't Allow" was selected.
The dialog itself says "You can change this in Privacy settings." I cannot find this permission anywhere in the Settings app, neither under Privacy & Security nor under the app-specific settings page.
Has anyone else experienced these issues? Am I missing something here? I did suspect permissions issues and tried adding a NSRemovableVolumesUsageDescription entry to the app. This did not appear to change anything.
Hello,
I'm experiencing an issue with WKWebView in my iOS app when running on devices with iOS 18. Specifically, when auto-playing videos, they do not play inline as expected but instead switch to full-screen mode. This problem only occurs on iOS 18; in earlier versions, the videos played inline without any issues.
Here is the configuration I'm using for WKWebView:
let configuration: WKWebViewConfiguration = WKWebViewConfiguration.init()
configuration.allowsInlineMediaPlayback = true
configuration.mediaTypesRequiringUserActionForPlayback = []
Additionally, I have set the playsinline attribute in the video tag on the web page.
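The web view itself is created with that configuration (roughly this line; the frame comes from the surrounding layout):

let webView = WKWebView(frame: .zero, configuration: configuration)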
Despite these settings, when the webpage initially loads, the video sometimes plays in full-screen mode rather than inline. I am looking for a solution to ensure that videos always play inline as intended on iOS 18.
Has anyone encountered a similar issue or know of any workarounds? Any help would be greatly appreciated!
Thank you!
We are trying to build a video recording app using AVFoundation and AVCaptureDevice.
No custom settings such as ISO or exposure duration are used; everything is left on auto.
But when video is captured with the front camera at 1080x1920, the video captured from our app and from the native camera app do not match.
In the settings I have kept the video setting at 30 fps, 1080x1920.
The video captured from our app covers more area (a wider field of view) than the native app's, and some values such as ISO and exposure duration do not match.
So, is there any way to capture video exactly the same as the native camera using AVFoundation and AVCaptureDevice?
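For reference, the capture session is configured roughly like this (a sketch of my setup; everything else is left on auto):

import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .hd1920x1080

if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    session.addInput(input)

    // Lock the frame rate to 30 fps; no manual ISO or exposure duration.
    try? device.lockForConfiguration()
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)
    device.unlockForConfiguration()
}

let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
}
session.startRunning()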
I have attached screenshots from the videos for reference.
Native
AVCapture
AVKit provides the SwiftUI view VideoPlayer, and allows you to add an interactive overlay. But that overlay is normally placed behind the system-provided playback controls.
Is there any way to suppress those controls, without resorting to wrapping AVPlayerView?
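For context, the setup in question is just the overlay-based initializer (a minimal sketch; the URL is a placeholder):

import SwiftUI
import AVKit

struct PlayerView: View {
    let player = AVPlayer(url: URL(string: "https://example.com/video.mp4")!) // placeholder URL

    var body: some View {
        // The overlay renders above the video frame but beneath the system playback controls.
        VideoPlayer(player: player) {
            Text("Interactive overlay")
        }
    }
}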
Hi, I'm recording videos with AVAssetWriter. The capture fps (the camera output fps) is fine, but the final video's fps is lower; the reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false.
Yes, I have read the documentation many times; it says to set expectsMediaDataInRealTime to true, and so on.
This problem has been tormenting me for a long time. How can I debug it? Any advice?
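For reference, the writer input is set up and fed roughly like this (a sketch; dropping the frame when the input isn't ready is what I currently do):

import AVFoundation

// Writer input for real-time capture, passthrough settings.
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
videoInput.expectsMediaDataInRealTime = true

// Called from the capture output callback.
func append(_ sampleBuffer: CMSampleBuffer) {
    if videoInput.isReadyForMoreMediaData {
        videoInput.append(sampleBuffer)
    }
    // else: the frame is dropped, which is where the lower final fps comes from
}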
Hi all,
We want to play downloaded, encrypted HLS fragment MP4 files on iPhone, and we are using the UsingAVFoundationToPlayAndPersistHTTPLiveStreams sample to test.
In the HLSCatalog app, we can play back the encrypted HLS fragment MP4 stream. But when we download the encrypted HLS fragment MP4 to the device/iPhone and then try to play it back on the device, it fails with this error: Error: Optional("The operation couldn’t be completed. (CoreMediaErrorDomain error 1718449215.)")
So we want to know how we can play back downloaded encrypted HLS fMP4 on iPhone.
and you can try below url:
http://69.234.244.220/prod/vod/HDR10_2D_LEFT_48FPS_FMP4_Encrypted/prog_index.m3u8
Steps to reproduce:
1: create a HLS fragment mp4 with mediafilesegmenter:
mediafilesegmenter --iso-fragmented -t 4 --encrypt-key-file=BT709-2D-48FPS.key --encrypt-key-url=http://69.234.244.220:5000/download/BT709-2D-48FPS.key -f prod/vod/HDR10_2D_LEFT_48FPS_FMP4_Encrypted HDR10_2D_LEFT_48FPS.mp4
2: upload to content server
3: download UsingAVFoundationToPlayAndPersistHTTPLiveStreams from https://developer.apple.com/documentation/avfoundation/offline_playback_and_storage/using_avfoundation_to_play_and_persist_http_live_streams
4: in the HLSCatalog app, replace the playlist_url of Item-1 of Streams with http://69.234.244.220/prod/vod/HDR10_2D_LEFT_48FPS_FMP4_Encrypted/prog_index.m3u8
5: in the HLSCatalog app, click the icon of the Advanced Stream, then click download; when the download succeeds, try to play... it can NOT be played back on iPhone.
Hello Apple,
I am yet again concerned about the new iOS Screen Mirroring that is going to be available in the iOS 18 stable release.
I have an app that is only meant to be viewed on iPhones (not Macs or computers, due to various security reasons).
I have raised this in Feedback Assistant and Apple has ignored it.
Other apps that might benefit from an API for disabling or detecting iOS Screen Mirroring include Snapchat, DAZN, Sony, Netflix, Amazon Prime, etc.
Are there still plans for an API that can disable this functionality, now or in the future? I am sure a company like Snapchat doesn't want people capturing photos via iOS Screen Mirroring without the app knowing.
Thanks.
Hello,
I'm trying to load a video on immersive space.
Using VideoMaterial and applying it to a plane surface, I'm currently able to load a video that I have locally.
If I want to load something from an external link, like YouTube or another service, how can I do that?
Bearing in mind, of course, that I'm loading it in an immersive space.
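For reference, this is roughly the local case that already works (a sketch; "localVideo.mp4" is a placeholder, and my assumption is that AVPlayer could equally take a direct remote stream URL, though not a YouTube page URL):

import RealityKit
import AVFoundation

// A plane whose material is driven by an AVPlayer.
let player = AVPlayer(url: Bundle.main.url(forResource: "localVideo", withExtension: "mp4")!)
let material = VideoMaterial(avPlayer: player)
let plane = ModelEntity(mesh: .generatePlane(width: 1.6, depth: 0.9), materials: [material])
player.play()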
Thanks ;)
Hello,
I'm trying to stream stereoscopic SBS (side-by-side) video on the AVP. I see that AVPlayerViewController supports MV-HEVC video playback, but it's not clear how to play SBS video on the Apple Vision Pro.
Are there any docs or examples you can share?
For my use case, SBS is the only format I can support.
Hi, I would like to put a video in an ImmersiveSpace but, when people tap the maximize button in the top left corner, the app crashes. Is there a way to remove that button?
I need it to be in an ImmersiveSpace because I would like to be able to position it wherever I want.
Hi,
We are currently building an app for immersive experiences of our custom content, which is displayed from a video on custom geometry in the immersive space on the Vision Pro.
I have enabled the AVPlayerViewController system controls that detach when entering the immersive space, as in the sample:
https://developer.apple.com/documentation/visionos/building-an-immersive-media-viewing-experience
In our case, we do not need the 2D screen showing after entering the immersive space, only the environment.
So my question is: how do we remove the screen with the video while keeping the controls, like in the Apple TV app's immersive experiences?
Thanks in advance