Our app plays TS files on an iPhone.
The app segments the TS files, generates an M3U8 playlist to convert them to HLS (HTTP Live Streaming), and then uses AVPlayer to play the video content.
On a device running iOS 26, after starting playback and seeking, restarting playback causes the video and audio to be out of sync (by about 2-3 seconds depending on the situation).
This also occurs on iPadOS/macOS 26.
This issue was not observed prior to iOS 18.
We are trying to fix this issue on the app side, but we have the following questions:
AVPlayer behaves differently on iOS 26 than on previous versions. Is there any known change that could explain this, or is it a bug?
We tried pausing before seeking, but it didn’t seem to have any effect. Are there any APIs or workarounds that can improve this?
We would also appreciate pointers to any other helpful documents or URLs.
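For illustration, the pause-before-seek attempt looks roughly like this sketch (the zero seek tolerances and the identifiers are assumptions, not our exact production code):

import AVFoundation

// Pause, seek with zero tolerance, and only resume once the seek completes.
// (Sketch only; tolerances and identifiers are assumptions.)
func seekAndResume(_ player: AVPlayer, to seconds: Double) {
    player.pause()
    let target = CMTime(seconds: seconds, preferredTimescale: 600)
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero) { finished in
        guard finished else { return }
        player.play()
    }
}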
Currently, I am using a Broadcast Upload Extension to obtain sample buffer data from screen recording, and I transmit it to the app via an App Group and IPC based on a local Unix domain socket. However, when the app goes to the background, for example to view videos in the Photos album or to play other audio or video, the data transmission stops and the app can no longer obtain the screen-recording data. How can I solve this? I suspect the system has suspended the screen-recording extension.
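For context, the extension side is roughly the following sketch (the sendOverSocket helper is a hypothetical stand-in for the Unix domain socket writer):

import ReplayKit
import CoreMedia

// Rough sketch of the Broadcast Upload Extension side; sendOverSocket is a
// hypothetical placeholder for the Unix domain socket writer.
class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        guard sampleBufferType == .video else { return }
        // Serialize and forward the frame to the host app over the local socket.
        sendOverSocket(sampleBuffer)
    }

    private func sendOverSocket(_ sampleBuffer: CMSampleBuffer) {
        // Hypothetical: write the (encoded) frame data to the socket here.
    }
}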
I'm at my wit's end with a problem I'm facing while developing an app. The app is designed to send video captured on an iPhone to a browser application for real-time display.
While it works on many older iPhone models, whenever I test it on an iPhone 17 Pro or 17 Pro Max, the video displayed in the browser becomes a solid green screen, or a colorful, garbled image that's mostly green.
I've been digging into this, and my main suspicion is an encoding failure. It seems the resolution of the ultra-wide and telephoto cameras was significantly increased on the 17 Pro and Pro Max (from 12MP to 48MP), and I think this might be overwhelming the encoder.
I'm really hoping someone here has encountered a similar issue or has any suggestions. I'm open to any information or ideas you might have. Please help!
Environment Information:
WebRTC Library: GoogleWebRTC Version 1.1 (via CocoaPods)
Signaling Server: AWS Kinesis Video Streams
Problem Occurs on:
Model: iPhone18,1, OS: 26.0
Model: iPhone18,1, OS: 26.1
Works Fine on:
Many models before iPhone17,5
Model: iPhone18,1, OS: 26.0
Model: iPhone18,3, OS: 26.0
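One experiment I'm considering to test the encoder-overload theory is to pin the capture device to a 1080p format before frames reach WebRTC. A minimal AVFoundation sketch (an assumption, not my current code; it presumes direct access to the underlying AVCaptureDevice):

import AVFoundation

// Sketch: pin the device to a 1920x1080 format so the encoder never sees the
// 48MP-derived dimensions.
func pinTo1080p(_ device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width == 1920 && dims.height == 1080
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}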
SBS ViewPacking adds half a frame to the opposite eye. Meaning that if you look all the way to the right, you can see an extra half frame with the left eye, and vice versa.
OU (over/under) packing doesn't work at all: the preview doesn't show a thumbnail and the video doesn't play.
Any hints on how to fix this? I submitted a bug report but haven't heard anything.
Context: "Explore low-latency video encoding with VideoToolbox" https://developer.apple.com/videos/play/wwdc2021/10158/
The session above says the HEVC VideoToolbox encoder supports SVC temporal layering of streams in low-latency mode with all P-frames.
My question: does HEVC VideoToolbox support temporal layering of streams with B-frames? Thanks.
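For context, the all-P-frame temporal layering described in that session can be set up roughly like this sketch (the 1920x1080 size and the 0.5 base-layer fraction are example values, not from the session; no encoding loop shown):

import VideoToolbox

// Sketch: low-latency HEVC session with temporal layering and no B-frames.
var session: VTCompressionSession?
let encoderSpec: [CFString: Any] = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
]
VTCompressionSessionCreate(allocator: nil,
                           width: 1920,
                           height: 1080,
                           codecType: kCMVideoCodecType_HEVC,
                           encoderSpecification: encoderSpec as CFDictionary,
                           imageBufferAttributes: nil,
                           compressedDataAllocator: nil,
                           outputCallback: nil,
                           refcon: nil,
                           compressionSessionOut: &session)
if let session = session {
    // Disallowing frame reordering means the encoder emits no B-frames.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering,
                         value: kCFBooleanFalse)
    // Half of the frames go to the base temporal layer, half to the enhancement layer.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_BaseLayerFrameRateFraction,
                         value: NSNumber(value: 0.5))
}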
Does the VideoToolbox HEVC encoder on iPhone 16 Pro write 10-bit HDR VUI info into the bitstream?
I want to encode a 4K 10-bit HDR bitstream with VUI info using HEVC VideoToolbox, but I don't know how to get the VUI info into the bitstream. Could you give me some sample code?
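To make the question concrete, here is a rough sketch of what I imagine the setup would look like (I'm not sure these colour properties are the right way to get the VUI info into the bitstream; no encoding loop shown):

import VideoToolbox

// Sketch: HEVC Main10 session with BT.2020 / PQ colour properties set.
// Unclear whether this alone is enough to emit the VUI colour description.
var session: VTCompressionSession?
VTCompressionSessionCreate(allocator: nil,
                           width: 3840,
                           height: 2160,
                           codecType: kCMVideoCodecType_HEVC,
                           encoderSpecification: nil,
                           imageBufferAttributes: nil,
                           compressedDataAllocator: nil,
                           outputCallback: nil,
                           refcon: nil,
                           compressionSessionOut: &session)
if let session = session {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_HEVC_Main10_AutoLevel)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ColorPrimaries,
                         value: kCMFormatDescriptionColorPrimaries_ITU_R_2020)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_TransferFunction,
                         value: kCMFormatDescriptionTransferFunction_SMPTE_ST_2084_PQ)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_YCbCrMatrix,
                         value: kCMFormatDescriptionYCbCrMatrix_ITU_R_2020)
}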
Adding both AVCaptureMovieFileOutput and AVCaptureVideoDataOutput to an AVCaptureSession is supported, as the documentation states (snippet copied below), but when the AVCaptureDevice is configured with the ProRes 422 codec, it fails unless one of the two outputs is removed from the capture session. It is readily reproducible on an iPhone 14 Pro running iOS 26.0.
Prior to iOS 16, you can add an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput to the same session, but only one may have its connection active. If you attempt to enable both connections, the system chooses the movie file output as the active connection and disables the video data output’s connection. For apps that link against iOS 16 or later, this restriction no longer exists.
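For reference, the failing configuration is along these lines (a simplified sketch; device/input setup and error handling omitted):

import AVFoundation

// Simplified sketch of the failing setup: both outputs in one session, with
// ProRes 422 selected on the movie file output's video connection.
// (Camera/microphone inputs would be added before this point.)
let session = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
let dataOutput = AVCaptureVideoDataOutput()

if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
if session.canAddOutput(dataOutput) { session.addOutput(dataOutput) }

if let connection = movieOutput.connection(with: .video),
   movieOutput.availableVideoCodecTypes.contains(.proRes422) {
    movieOutput.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.proRes422],
                                  for: connection)
}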
Playback of HD H.264 MP4 files (720p, 50 fps) can randomly cause a stalled image. Playback does not stop in such a case: the image is frozen/stalled while the audio and the timeline keep going. Tapping play/pause or scrubbing the timeline recovers playback.
It can also happen while scrubbing the timeline, especially into areas not yet loaded (progressive MP4 download). The behaviour is always the same: the image is stalled/frozen while the audio goes on.
To reproduce: use example project https://developer.apple.com/documentation/AVKit/playing-video-content-in-a-standard-user-interface
Example file: https://www.keepinmind.info/test.mp4
I’m building a macOS video editor that uses AVComposition and AVVideoComposition.
Initially, my renderer creates a composition with some default video/audio tracks:
@Published var composition: AVComposition?
@Published var videoComposition: AVVideoComposition?
@Published var playerItem: AVPlayerItem?
Then I call a buildComposition() function that inserts all the default video segments.
Later in the editing workflow, the user may choose to add their own custom video clip. For this I have a function like:
private func handlePickedVideo(_ url: URL) {
    guard url.startAccessingSecurityScopedResource() else {
        print("Failed to access security-scoped resource")
        return
    }
    let asset = AVURLAsset(url: url)
    let videoTracks = asset.tracks(withMediaType: .video)
    guard let firstVideoTrack = videoTracks.first else {
        print("No video track found")
        url.stopAccessingSecurityScopedResource()
        return
    }
    renderer.insertUserVideoTrack(from: asset, track: firstVideoTrack)
    url.stopAccessingSecurityScopedResource()
}
What I want to achieve is the same behavior professional video editors provide: after the composition has already been initialized and built, the user should be able to add a new video track and have the composition update live, meaning the preview player immediately reflects the change without rebuilding everything from scratch.
How can I structure my AVComposition / AVMutableComposition and my rendering pipeline so that adding a new clip later updates the existing composition in real time (similar to Final Cut/Adobe Premiere), instead of rebuilding everything from zero?
You can find a playable version of this entire setup at :- https://github.com/zaidbren/SimpleEditor
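For context, the direction I'm exploring is to mutate an AVMutableComposition and then hand the player a fresh item, roughly like this sketch (identifiers assumed; not the exact code from the repo):

import AVFoundation

// Sketch: append a clip to the existing mutable composition, then refresh the
// player item so the preview picks up the change without a full rebuild.
func appendClip(_ asset: AVURLAsset,
                to composition: AVMutableComposition,
                player: AVPlayer) throws {
    guard let sourceTrack = asset.tracks(withMediaType: .video).first else { return }
    let track = composition.addMutableTrack(withMediaType: .video,
                                            preferredTrackID: kCMPersistentTrackID_Invalid)
    try track?.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                               of: sourceTrack,
                               at: composition.duration)
    // Replacing the current item keeps the player alive while the preview updates.
    player.replaceCurrentItem(with: AVPlayerItem(asset: composition))
}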
We encapsulate AVPlayer in our own player. After connecting an Apple Bluetooth headset, it immediately reports an error; non-Apple Bluetooth headsets play fine. The current issue is that the player works normally in one app but not in another. We are an educational app.
Hello, I requested FairPlay credentials a week ago. Where can I check the status of my request? Thank you very much.
The team ID is: 6WZ8SVRYV9
We’re developing an AVFoundation-based video recording app (4K @ 60 fps required for biomechanical analysis). On most devices this works perfectly (iPhone 12/14/15/16 non-Pro models), but on several iPhone Pro models (12 Pro, 13 Pro, 14 Pro, 15 Pro/Pro Max), we consistently get 4K 30 fps recordings—even when the device should support 4K 60 fps on the wide-angle camera.
What we observe
We configure the session for .hd4K3840x2160.
We iterate through AVCaptureDevice.formats and select formats that (see the sketch below):
have 3840×2160 resolution
support ≥60 fps (videoSupportedFrameRateRanges)
On some Pro devices, this format search returns no results, even though:
The Camera app records 4K60 fine.
External references list the wide camera as 4K60 capable.
The fallback becomes the device's default 4K30 format, so final files are 3840×2160 @ 30 fps.
This happens immediately on app launch (not after heating), so not thermal-related.
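For reference, that format search and lock-in is roughly like this sketch (simplified; not our exact code):

import AVFoundation

// Simplified sketch of the 4K60 format search described above.
func find4K60Format(on device: AVCaptureDevice) -> AVCaptureDevice.Format? {
    device.formats.first { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60 = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 59.94 }
        return dims.width == 3840 && dims.height == 2160 && supports60
    }
}

func lock4K60(on device: AVCaptureDevice) throws {
    guard let format = find4K60Format(on: device) else { return }   // empty on affected Pro devices
    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
    device.unlockForConfiguration()
}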
What we’ve tried
Force selecting .builtInWideAngleCamera instead of dual/triple cameras.
Disabling HDR (videoHDREnabled = false).
Disabling low-light boost.
Allowing 59.94 fps formats (in case exact 60.0 isn’t exposed).
Logging all videoSupportedFrameRateRanges per format.
What we’re seeing in logs
On affected Pro devices, the capture device reports only 4K formats with maxFrameRate ≈ 30 fps, despite the hardware being able to do 4K60.
Main question
Has anyone encountered cases where 4K60 formats are available in the Camera app but not exposed through AVFoundation, especially on Pro models or multi-camera devices?
Could HEVC/HDR capability or multi-camera constraints be preventing certain formats from appearing?
Are there known conditions where 4K60 formats are hidden unless specific device configuration is applied?
Any guidance on reliably locking 4K60 on iPhone Pro models via AVFoundation would be hugely appreciated.
We're facing an issue with audio playback using AVPlayerViewController in an iOS application. We use the native player to play recorded audio files.
When the AVPlayerViewController appears, the native user interface is displayed correctly, including the playback controls and the volume slider.
However, when the user interacts with the volume slider, the slider UI moves and responds to touch events, but the actual audio output volume does not change: the audio continues playing at the initial volume level regardless of the slider position.
We initialize the player and present it modally using the following code:
AVPlayerViewController *avController = [[AVPlayerViewController alloc] init];
avController.player = [AVPlayer playerWithURL:videoURL];
// Setting initial volume
avController.player.volume = 1.0f;
avController.modalPresentationStyle = UIModalPresentationOverFullScreen;
avController.allowsPictureInPicturePlayback = NO;
// Present the controller
[self presentViewController:avController animated:YES completion:nil];
Has anyone tried to make an ILPD-based AIME file?
When I try, the resulting AIME switches to a USDZ Mesh instead of saving the ILPD data.
I made a CMIOExtension (a virtual camera) which generates its own output, for use in our in-house software testing. I wanted to make a video source with 29.97, 30, 59.94 and 60fps output.
To this end, I created a CMIOExtensionDeviceSource that creates a CMIOExtensionDevice with one CMIOExtensionStreamSource whose [CMIOExtensionStreamFormat] array contains various stream formats, including one with both maxFrameDuration and minFrameDuration set to CMTimeMake(value: 1000, timescale: 30000) and another with both set to CMTimeMake(value: 1001, timescale: 30000).
I've held off on the creation of the 59.94/60fps source for now until this problem is resolved.
My virtual camera works and produces a signal, but when I examine its associated AVCaptureDevice in the debugger, I find:
(lldb) po self.captureDevice?.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
▿ Optional<CMTime>
▿ some : CMTime
- value : 1000000
- timescale : 30000000
▿ flags : CMTimeFlags
- rawValue : 1
- epoch : 0
I get the same value, 1000000/30000000, or exactly 30fps, for all the formats of my AVCaptureDevice.
Is there something I'm doing wrong, or do CMIOExtensionDevices always round the frame rates?
I can't force CoreMediaIO to produce frames at exactly my desired frame interval, but I'd like to ensure that the average frame rate is my desired rate. How can I do that? Frame emission is governed by a repeating DispatchSourceTimer with a repeat time specified in nanoseconds with the TimerFlags set to 'strict'.
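One approach I'm considering is to schedule against an absolute start time so per-tick jitter never accumulates; a minimal sketch (the pacer class, queue label, and the 29.97 fps value are just illustrations):

import Dispatch

// Sketch: keep the long-run average at 30000/1001 fps by comparing against an
// absolute start time rather than trusting each individual timer tick.
final class FramePacer {
    private let queue = DispatchQueue(label: "frame.pacer")
    private var timer: DispatchSourceTimer?

    func start(emitFrame: @escaping () -> Void) {
        let frameDuration = 1001.0 / 30000.0          // seconds per frame (~29.97 fps)
        let start = DispatchTime.now()
        var framesEmitted = 0.0

        let source = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
        source.schedule(deadline: start, repeating: frameDuration, leeway: .nanoseconds(0))
        source.setEventHandler {
            let elapsed = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1e9
            // Catch up based on wall-clock elapsed time, so rounding error in the
            // timer interval never drifts the average frame rate.
            while framesEmitted * frameDuration <= elapsed {
                emitFrame()
                framesEmitted += 1
            }
        }
        source.resume()
        timer = source
    }
}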