When creating and activating an AVPictureInPictureController on an iPhone 11 running iOS 18.x, the main screen viewport unexpectedly changes from the native 375×812 logical points to 414×896 points (the size of a different device, such as iPhone XR). Additionally, a separate UIWindow with a zero CGRect is created. This causes UIKit layout calculations, especially those related to the keyboard and safeAreaInsets, to malfunction, resulting in issues such as keyboard clipping and general UI misalignment. The problem appears tied specifically to iOS 18+ behavior and does not manifest consistently on earlier iOS versions or other devices.
Steps to Reproduce:
Run the app on iPhone 11 with iOS 18.x (including the latest minor updates).
Instantiate and start an AVPictureInPictureController during app runtime.
Observe that UIScreen.main.bounds changes from 375×812 (native iPhone 11 size) to 414×896.
Notice a secondary UIWindow appears with frame CGRectZero.
Interact with text inputs that cause the keyboard to appear.
The keyboard is clipped or partially obscured, causing UI usability issues.
Layout elements dependent on UIScreen.main.bounds or safeAreaInsets behave incorrectly.
Expected Behavior:
UIScreen.main.bounds should not dynamically change upon PiP controller creation. The coordinate space and viewport should remain accurate to device native dimensions. The keyboard and UI layout should adjust properly with respect to the actual on-screen size and safe areas.
Actual Behavior:
UIScreen.main.bounds changes to 414×896 upon PiP controller creation on iPhone 11 iOS 18+.
A zero-rect secondary UIWindow is created, impacting layout queries.
Keyboard clipping and UI layout bugs occur.
The app viewport and coordinate space are inconsistent with device hardware.
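For reference, here is a minimal, hedged repro sketch (function name, timing, and logging are illustrative, not the original app code): it prints the reported screen size and window frames before and after starting PiP on an already-playing AVPlayerLayer.

import AVKit
import UIKit

// Minimal sketch: observe UIScreen.main.bounds and window frames around PiP start.
func reproducePiPBoundsChange(on playerLayer: AVPlayerLayer) {
    print("Before PiP:", UIScreen.main.bounds.size)       // 375x812 reported before PiP

    guard AVPictureInPictureController.isPictureInPictureSupported(),
          let pip = AVPictureInPictureController(playerLayer: playerLayer) else { return }
    pip.startPictureInPicture()

    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        print("PiP active:", pip.isPictureInPictureActive)
        print("After PiP:", UIScreen.main.bounds.size)     // 414x896 observed on iOS 18.x
        let windowFrames = UIApplication.shared.connectedScenes
            .compactMap { $0 as? UIWindowScene }
            .flatMap(\.windows)
            .map(\.frame)
        print("Window frames:", windowFrames)              // the zero-rect window shows up here
    }
}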
1. Devices:
Apple TV HD, tvOS 18.5 (22L572), Model A1625
HomePod (1st generation), OS 18.5 (22L572)
2. Test cases:
2.1 UIViewController + AVPlayerLayer + AVPlayer
Result: Cannot AirPlay to HomePod
Sample code for UIViewController + AVPlayer
2.2 AVPlayerViewController + AVPlayer
Result: Can AirPlay to HomePod
Sample code for AVPlayerViewController + AVPlayer
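For context, below are minimal sketches of the two configurations being compared. These are illustrative only (the URL and type names are placeholders), not the original sample attachments.

import AVKit
import AVFoundation
import UIKit

// Placeholder stream URL for illustration.
let videoURL = URL(string: "https://example.com/stream.m3u8")!

// 2.1 UIViewController + AVPlayerLayer + AVPlayer
final class PlayerLayerViewController: UIViewController {
    private let player = AVPlayer(url: videoURL)

    override func viewDidLoad() {
        super.viewDidLoad()
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
        player.play()
    }
}

// 2.2 AVPlayerViewController + AVPlayer
func presentPlayerViewController(from presenter: UIViewController) {
    let playerVC = AVPlayerViewController()
    playerVC.player = AVPlayer(url: videoURL)
    presenter.present(playerVC, animated: true) {
        playerVC.player?.play()
    }
}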
Background: For iOS, I have built a custom video recording app that records video at the highest possible FPS (supposedly around 240 FPS, but closer to ~200 in practice). When I save this video to the user's camera roll, I notice a dynamic playback speed bar. This bar will speed the video up or slow it down.
My question: How can I save my video so that this playback speed bar stays constant and the video plays at real-time speed?
For reference, I have included the playback speed bar I am talking about in the screenshot; you can find it in the Photos app when you record slow-motion video.
Hi, I’ve been experiencing a strange issue with video playback on my iPhone. While watching videos, the image will suddenly shift: it becomes more greyish, then sometimes briefly goes black, and then returns to normal bright quality. This can happen multiple times during a single video.
This is not limited to the Photos app. I’ve seen it happen:
In the Photos app when playing videos I recorded myself
In Snapchat when watching videos sent by others
Occasionally in other social media apps as well
Additional details:
HDR Video is turned off
Apple ProRes is turned off
Tried both 4K 60fps and 4K 30fps
Camera format set to “Most Compatible”
Low Power Mode is off
Issue happens whether the phone is cool or warm
Doesn’t seem related to the video file itself; the same file exported to another device looks fine all the way through
These exact same videos play back completely normally on my iPad with no brightness or contrast shifts at all
I’m currently on the iOS 17 public beta, but this issue was happening before I installed the beta as well, so it’s not beta-specific
It almost feels like the display or system is switching between different brightness/contrast profiles mid playback, regardless of the app.
Has anyone else experienced this, and is there a way to disable this behavior so the brightness and color stay consistent during video playback?
Thanks in advance!
When I try to send a DRM-protected video via AirPlay to an Apple TV, the license request is made twice instead of once, as it normally is when playing directly on iOS.
We only allow one request per session for security reasons, so the second request fails and the video won't play.
We've tested DRM-protected videos without token usage limits and it works, but this creates a security hole in our system.
Why does it request the license twice in function: func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest)?
Is there a way to prevent this?
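One hedged mitigation sketch, assuming a FairPlay setup where reusing a fetched key for the same identifier is acceptable in your security model: cache the first key response per identifier and answer duplicate requests from the cache instead of hitting the license server again. The delegate method is the real AVContentKeySessionDelegate callback; the cache and the fetchLicense helper are illustrative.

import AVFoundation

final class ContentKeyDelegate: NSObject, AVContentKeySessionDelegate {
    // Cached CKC data, keyed by key identifier (illustrative).
    private var cachedKeyData: [String: Data] = [:]

    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        let identifier = keyRequest.identifier as? String ?? ""

        if let keyData = cachedKeyData[identifier] {
            // Second (duplicate) request: reuse the key instead of a new license call.
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: keyData)
            keyRequest.processContentKeyResponse(response)
            return
        }

        // First request for this identifier: perform the SPC/CKC exchange once.
        fetchLicense(for: keyRequest) { [weak self] keyData in
            self?.cachedKeyData[identifier] = keyData
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: keyData)
            keyRequest.processContentKeyResponse(response)
        }
    }

    // Placeholder for the app's existing license-server round trip.
    private func fetchLicense(for request: AVContentKeyRequest,
                              completion: @escaping (Data) -> Void) {
        // ... makeStreamingContentKeyRequestData + POST to the license server ...
    }
}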
Has anyone experienced this bug: when playing a video, the Bluetooth headphones lose audio? My workaround is to select one of the other audio outputs, then go back and select the affected headphones again.
Topic:
Media Technologies
SubTopic:
Video
Hi there,
I want to set the iPhone camera to "S mode" (Shutter Priority, in camera terminology), which is a semi-automatic exposure mode where the shutter speed is fixed or set manually.
However, setting the shutter speed manually disables auto exposure.
So is there a way to keep auto exposure on while restricting the shutter speed?
Also, I would like to keep a low frame rate, e.g. 30 fps. Would I be able to set the shutter speed independently of the frame rate?
Here's the code for setting up the camera
Best,
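A minimal sketch (not the original setup code) of one possible approach: keep .continuousAutoExposure but cap the shutter duration with activeMaxExposureDuration, and pin the frame rate to 30 fps. The device passed in and the chosen durations are illustrative assumptions.

import AVFoundation

func configureCappedShutter(device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Auto exposure stays on; ISO and shutter are still chosen by the system,
    // but the shutter never goes longer than 1/120 s.
    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
        device.activeMaxExposureDuration = CMTime(value: 1, timescale: 120)
    }

    // Frame duration is set independently of the exposure duration,
    // as long as the exposure duration stays within the frame duration.
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)
}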
I'm writing some camera functionality that uses AVCaptureVideoDataOutput.
I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput.
My question is then, if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread?
I have a more practical example below, showing how I am accessing something from the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource.
I have setup similar to the following:
// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("avf_camera_queue", DISPATCH_QUEUE_SERIAL);

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...

// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;

MySampleBufferDelegate *sampleBufferDelegate = [[MySampleBufferDelegate alloc] init];

// Capture frameMetaData by reference in the lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];

AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
[captureSession addOutput:captureVideoDataOutput];

[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need a manual barrier?
And then in MySampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
Hi,
I have an app that displays tens of short (<1 MB) MP4 videos, stored on a remote server, in a vertical UICollectionView with horizontally scrollable sections.
I'm caching all MP4 files on disk after downloading, and I also have an in-memory cache that holds a limited number of players (around 30). The players are simple views that wrap an AVPlayerLayer and its AVPlayerItem, along with a few additional UI components.
Scrolling performance was good before iOS 26, but since the release of iOS 26 I've noticed significant stuttering during scrolling while creating players with a file URL. It happens even if I use the same video file cached on disk for every cell for testing.
I also started getting this kind of log messages after the players are deinitialized:
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1107
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
There's also another log message that I see occasionally, but I don't know what triggers it.
<< FigXPC >> signalled err=-16152 at <>:1683
Is there anyone else that experienced this kind of problem with the latest release?
Also, I'm wondering what's the best way to resolve the issue. I could increase the size of the memory cache to something large like 100, but I'm not sure that's an acceptable solution because:
1. There will be 100 player instances in memory at all times.
2. There will still be stuttering during the initial loading of the videos from the web.
Any help is appreciated!
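One hedged direction that sometimes reduces main-thread work when creating players during scrolling: preload the asset's properties asynchronously and build AVPlayerItems from an already-loaded AVURLAsset. This is a sketch under that assumption; the cache type, loaded keys, and names are illustrative, and it is not a confirmed fix for the iOS 26 behavior.

import AVFoundation

actor PreloadedAssetCache {
    private var assets: [URL: AVURLAsset] = [:]

    func playerItem(for fileURL: URL) async throws -> AVPlayerItem {
        if let cached = assets[fileURL] {
            return AVPlayerItem(asset: cached)   // a fresh item per player
        }
        let asset = AVURLAsset(url: fileURL)
        // Resolve up front the properties AVPlayer would otherwise load lazily.
        _ = try await asset.load(.isPlayable, .duration, .tracks)
        assets[fileURL] = asset
        return AVPlayerItem(asset: asset)
    }
}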
Good day.
A video I created via iOS AVAssetWriter with the following settings:
let videoWriterInput = AVAssetWriterInput(
    mediaType: .video,
    outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 1080,
        AVVideoHeightKey: 1920,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 2_000_000,
            AVVideoMaxKeyFrameIntervalKey: 30
        ],
    ]
)

let audioWriterInput = AVAssetWriterInput(
    mediaType: .audio,
    outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100,
        AVEncoderBitRateKey: 128000
    ]
)
When it is split into fMP4 HLS format using ffmpeg, the video cannot be played on iOS, failing with the following error:
CoreMediaErrorDomain error -12848
However, the video plays normally on Android, in browser HLS players, and in VLC Media Player.
Please assist. Thank you.
Hi, I downloaded and ran https://developer.apple.com/documentation/realitykit/rendering-stereoscopic-video-with-realitykit
and noticed that memory usage grows linearly.
I replaced the sample video with a different 8K side-by-side video, and the app crashed almost immediately due to the memory growth.
The culprit appears to be the makeMutablePixelBuffer() function: the allocated pixel buffers are not recycled after being used.
The screenshot is from a physical device.
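Assuming the sample really does allocate a fresh CVPixelBuffer per frame, one common way to keep memory flat is to draw buffers from a CVPixelBufferPool so they get recycled when released. A minimal sketch (the class, dimensions, and pixel format are illustrative):

import CoreVideo

final class PixelBufferPool {
    private var pool: CVPixelBufferPool?

    init?(width: Int, height: Int, pixelFormat: OSType = kCVPixelFormatType_32BGRA) {
        let attributes: [CFString: Any] = [
            kCVPixelBufferWidthKey: width,
            kCVPixelBufferHeightKey: height,
            kCVPixelBufferPixelFormatTypeKey: pixelFormat,
            kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
        ]
        guard CVPixelBufferPoolCreate(kCFAllocatorDefault, nil,
                                      attributes as CFDictionary, &pool) == kCVReturnSuccess else {
            return nil
        }
    }

    // Buffers vended here return to the pool automatically once nothing retains them.
    func makeBuffer() -> CVPixelBuffer? {
        guard let pool else { return nil }
        var buffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer)
        return buffer
    }
}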
Our app plays TS files on an iPhone.
The app fragments the TS files, creates an M3U8 playlist, converts them to HLS (HTTP Live Streaming), and then uses AVPlayer to play the video content.
On a device running iOS 26, after starting playback and seeking, restarting playback causes the video and audio to be out of sync (by about 2-3 seconds, depending on the situation).
This also occurs on iPadOS/macOS 26.
This issue was not observed prior to iOS 18.
We are trying to fix this issue on the app side, but we have the following questions:
The behavior of AVPlayer differs between iOS 26 and previous versions. Has there been a change that could explain this, or is it a bug?
We tried pausing before seeking, but it didn’t seem to have any effect. Are there any APIs or workarounds that can improve this?
We would appreciate it if you could tell us any other helpful documents or URLs.
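One workaround sometimes tried in cases like this, offered only as a hedged sketch and not a confirmed fix: pause, seek with zero tolerance, and resume playback only after the seek reports completion. Names and time values are illustrative.

import AVFoundation

func restartPlaybackPrecisely(_ player: AVPlayer, at seconds: Double) async {
    let target = CMTime(seconds: seconds, preferredTimescale: 600)
    player.pause()
    // Frame-accurate seek; play only once the seek has actually finished.
    let finished = await player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
    if finished {
        player.play()
    }
}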
Currently, I am using a Broadcast Upload Extension to capture the screen and send the sample buffer data to the app via an App Group and IPC (a local Unix domain socket). However, when the app goes to the background, for example to view videos in the Photos album or play other audio/video, the data transmission stops and the app can no longer obtain the screen-recording data. I would like to ask how to solve this problem. I suspect the system has suspended the extension's screen recording.
Hi,
I'm trying to wrap my head around the FxPlug SDK. I already sell Final Cut Pro titles for a company. These titles were built in Motion.
However, they want me to move them to an app, and I'm looking for any help on how to accomplish this.
What the app should do is:
Allow users with an active subscription to our website to access the titles within FCPX, and deny access if they are not an active subscriber.
When setting the now-playing info for playing media in MPNowPlayingInfoCenter, we can set artwork. But it seems the Apple API for creating the artwork crashes on iOS 18 (FB15145734).
On iOS 17 this only produced a warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()

        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })

        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?
I have an intra-oral video device that's supported by Apple's AVCaptureDevice. I can use the AV classes to connect to the device and get video. However, this device has a button that's used to acquire still images from the video stream. I can't use the IOUSBDeviceInterface to do an asynchronous read, because the Apple driver has the device opened exclusively. How do I go about receiving the button event in this scenario? I know which pipe to read, based on a bus analyzer when I run this on Windows, I just need to know how to access that pipe when the device is opened by another process.
Hello,
First, some version and software details:
Software: iOS 18.1
Hardware: iPhone 14 Pro Max and later
Xcode: 16.0
Summary: The video read with AVAssetReader is not being concatenated at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button.
Details:
Now, let's talk about the app.
Basically, I’m developing an app that generates a video with the following features:
My app will create an output video that is split into an opening scene followed by a fully blue screen.
The opening scene will be taken from a video I choose from my gallery.
I will read the opening video using AVAssetReader as usual.
After the opening scene, I will use the content of a text read by AVSpeechSynthesizer.write().
After the opening scene, the synthesized audio will start playing while the blue screen is displayed.
All of this is already defined in the attached project.
Each project file has a comment at the beginning introducing its content.
How to test:
Write something in the field above the "Generate Video" button. For example, type "Hello, world!"
Then, press the "Library" button and select a video from the gallery, about 30 seconds long.
That’s it. Press the "Generate Video" button.
The result I’ve experienced is a crash or failure to generate the video.
Practical example of what I want to achieve:
Suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White."
Then, I paste the "Snow White" story into the field above the "Generate Video" button.
The output video should contain me saying, "I’m going to tell you the story of Snow White."
After that, the AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen.
I look forward to a solution.
Thank you very much!
convertToCMSampleBuffer.swift
convertToPixelBuffer.swift
createInputs.swift
createVideo.swift
test.swift
saveVideo.swift
TestApp.swift
editingVideo.swift
sampleReaderProvider.swift
misc.swift
sampleProvider.swift
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and utilize the detected data (such as coordinates) for each one.
I have a question regarding how background thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.) or the weight of the operations inside captureOutput affect the number of frames that can be successfully processed?
Specifically, I would like to know if all frames will be processed sequentially with a delay due to heavy processing in the background, or if some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated.
Thank you!
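Regarding whether frames queue up or get dropped: with AVCaptureVideoDataOutput's alwaysDiscardsLateVideoFrames left at its default (true), frames that arrive while captureOutput(_:didOutput:from:) is still busy are discarded rather than delayed, and they are reported via captureOutput(_:didDrop:from:). A small sketch for counting both (the class and counters are illustrative):

import AVFoundation

final class FrameCounter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private(set) var processed = 0
    private(set) var dropped = 0

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        processed += 1
        // Heavy per-frame work (e.g. the YOLO inference) happens here; the longer
        // it takes, the more incoming frames are dropped instead of queued.
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        dropped += 1
    }
}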
Xcode 16:
VT_EXPORT void
VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
CM_NULLABLE CFAllocatorRef allocator,
CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) API_AVAILABLE(macos(10.8), ios(16.0), tvos(16.0), visionos(1.0)) API_UNAVAILABLE(watchos);
Xcode 15:
VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
CM_NULLABLE CFAllocatorRef allocator,
CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) VT_AVAILABLE_STARTING(10_8);
Hello, I will use AVFoundation's AVAssetWriter and AVPlayer for H.264 and H.265 encoding and decoding in my app. It will be used commercially, so I would like to know if I need to pay any licensing fees for H.264 and H.265 encoding and decoding.