(This only started happening as of Xcode 26.)
I know macOS and watchOS don't support this property, but all other platforms do (did?) up until I upgraded Xcode. Now when I compile I get this:
Value of type 'AVPlayerItem' has no member 'externalMetadata'
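For context, a minimal sketch of the kind of usage that compiled before the upgrade (videoURL and titleMetadataItem are placeholders; the platform guard just mirrors the macOS/watchOS exclusion mentioned above):

import AVFoundation

let item = AVPlayerItem(url: videoURL) // videoURL is a placeholder

#if !os(macOS) && !os(watchOS)
// 'externalMetadata' is the property that Xcode 26 now reports as missing.
item.externalMetadata = [titleMetadataItem] // titleMetadataItem is a placeholder AVMetadataItem
#endif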
Good day.
I created a video on iOS via AVAssetWriter with the following settings:
let videoWriterInput = AVAssetWriterInput(
    mediaType: .video,
    outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 1080, AVVideoHeightKey: 1920,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 2_000_000,
            AVVideoMaxKeyFrameIntervalKey: 30
        ],
    ]
)
let audioWriterInput = AVAssetWriterInput(
    mediaType: .audio,
    outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100,
        AVEncoderBitRateKey: 128000
    ]
)
When it is split into fMP4 HLS format using ffmpeg, the video cannot be played on iOS and fails with the following error:
CoreMediaErrorDomain error -12848
However, the video plays normally on Android, in browser HLS players, and in VLC Media Player.
Please assist. Thank you.
I'm having a crash in an app that plays videos when the user activates closed captions.
I was able to replicate the issue in an empty project. The crash happens when the AVPlayerLayer is used to instantiate an AVPictureInPictureController.
This is the example project where I tested the crash:
import SwiftUI
import AVFoundation

struct ContentView: View {
    var body: some View {
        VStack {
            VideoPlaylistView()
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .background(Color.black.ignoresSafeArea())
    }
}

class VideoPlaylistViewModel: ObservableObject {
    // Test with other videos
    var player: AVPlayer? = AVPlayer(url: URL(string: "https://d2ufudlfb4rsg4.cloudfront.net/newsnation/WIpkLz23h/adaptive/WIpkLz23h_master.m3u8")!)
}

struct VideoPlaylistView: View {
    @StateObject var viewModel = VideoPlaylistViewModel()

    var body: some View {
        ScrollView {
            VideoCellView(player: viewModel.player)
                .onAppear {
                    viewModel.player?.play()
                }
        }
        .scrollTargetBehavior(.paging)
        .ignoresSafeArea()
    }
}

struct VideoCellView: View {
    let player: AVPlayer?
    @State var isCCEnabled: Bool = false

    var body: some View {
        ZStack {
            PlayerView(player: player)
                .accessibilityIdentifier("Player View")
        }
        .containerRelativeFrame([.horizontal, .vertical])
        .overlay(alignment: .bottom) {
            Button {
                player?.currentItem?.asset.loadMediaSelectionGroup(for: .legible) { group, error in
                    if let group {
                        let option = !isCCEnabled ? group.options.first : nil
                        player?.currentItem?.select(option, in: group)
                        isCCEnabled.toggle()
                    }
                }
            } label: {
                Text("Close Captions")
                    .font(.subheadline)
                    .foregroundStyle(isCCEnabled ? .red : .primary)
                    .buttonStyle(.bordered)
                    .padding(8)
                    .background(Color.blue.opacity(0.75))
            }
            .padding(.bottom, 48)
            .accessibilityIdentifier("Button Close Captions")
        }
    }
}
import Foundation
import UIKit
import SwiftUI
import AVFoundation
import AVKit

struct PlayerView: UIViewRepresentable {
    let player: AVPlayer?

    func updateUIView(_ uiView: UIView, context: UIViewRepresentableContext<PlayerView>) {
    }

    func makeUIView(context: Context) -> UIView {
        let view = PlayerUIView()
        view.playerLayer.player = player
        view.layer.addSublayer(view.playerLayer)
        view.layer.backgroundColor = UIColor.red.cgColor
        view.pipController = AVPictureInPictureController(playerLayer: view.playerLayer)
        view.pipController?.requiresLinearPlayback = true
        view.pipController?.canStartPictureInPictureAutomaticallyFromInline = true
        view.pipController?.delegate = view
        return view
    }
}

class PlayerUIView: UIView, AVPictureInPictureControllerDelegate {
    let playerLayer = AVPlayerLayer()
    var pipController: AVPictureInPictureController?

    override init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer.frame = bounds
        playerLayer.backgroundColor = UIColor.green.cgColor
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: any Error) {
        print("Error starting Picture in Picture: \(error.localizedDescription)")
    }
}

class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .moviePlayback)
            try audioSession.setActive(true)
        } catch {
            print("ERR: \(error.localizedDescription)")
        }
        return true
    }
}
UITest to make the app crash:
import XCTest

final class VideoPlaylistSampleUITests: XCTestCase {
    func testCrashiOS26ToggleCloseCaptions() throws {
        let app = XCUIApplication()
        app.launch()
        let videoPlayer = app.otherElements["Player View"]
        XCTAssertTrue(videoPlayer.waitForExistence(timeout: 30))
        let closeCaptionButton = app.buttons["Button Close Captions"]
        for _ in 0..<2000 {
            closeCaptionButton.tap()
        }
    }
}
I'm getting an error when writing a CIImage as a HEIF image:
// Create CIImage directly from pixel buffer
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: [CIImageOption.properties: combinedMetadata])

// Write HEIC synchronously
do {
    try ciContext.writeHEIFRepresentation(of: ciImage, to: url, format: .RGBA8, colorSpace: colorSpace)
} catch {
    // error handling elided in the original snippet
}
The error I'm getting is:
Error Domain=CINonLocalizedDescriptionKey Code=3 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to write HEIC data to file., NSUnderlyingError=0x11b1a1ec0 {Error Domain=CINonLocalizedDescriptionKey Code=10 "(null)" UserInfo={CINonLocalizedDescriptionKey=failed to add image to the PhotoCompressionSession.}}}
Both
try ciContext.writeJPEGRepresentation(of: copiedCIImage, to: url, colorSpace: colorSpace, options: options)
and
try ciContext.writePNGRepresentation(of: copiedCIImage, to: url, format: .RGBA8, colorSpace: colorSpace)
work. I also verified that the code works with iOS 18.
Is there something new we need to do for HEIF images?
Thanks in advance
Hi,
I have an app that displays tens of short (<1 MB) MP4 videos, stored on a remote server, in a vertical UICollectionView that has horizontally scrollable sections.
I'm caching all MP4 files on disk after downloading them, and I also have an in-memory cache that holds a limited number (around 30) of players. The players I'm using are simple views that wrap an AVPlayerLayer and its AVPlayerItem, along with a few additional UI components.
Scrolling performance was good before iOS 26, but since the iOS 26 release I've noticed significant stuttering during scrolling while creating players from a file URL. It happens even if I use the same video file cached on disk for every cell when testing.
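For reference, a minimal sketch of the in-memory player cache described above (the capacity, keying by file URL, and the NSCache choice are assumptions, not the actual implementation):

import AVFoundation
import Foundation

// A sketch of an in-memory cache of players keyed by the on-disk file URL.
final class PlayerCache {
    private let cache = NSCache<NSURL, AVPlayer>()

    init(limit: Int = 30) {
        // Roughly the "around 30" players kept in memory.
        cache.countLimit = limit
    }

    func player(for fileURL: URL) -> AVPlayer {
        if let cached = cache.object(forKey: fileURL as NSURL) {
            return cached
        }
        // Creating a new player from the cached file URL is where the iOS 26 stutter shows up.
        let player = AVPlayer(url: fileURL)
        cache.setObject(player, forKey: fileURL as NSURL)
        return player
    }
}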
I also started getting log messages like these after the players are deinitialized:
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1107
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
There's also another log message that I see occasionally, but I don't know what triggers it.
<< FigXPC >> signalled err=-16152 at <>:1683
Is there anyone else that experienced this kind of problem with the latest release?
Also, I'm wondering what the best way to resolve the issue is. I could increase the size of the memory cache to something large like 100, but I'm not sure that's an acceptable solution because:
1. There would be 100 player instances in memory at all times.
2. There would still be stuttering during the initial loading of the videos from the web.
Any help is appreciated!
I tested the accuracy of the depth map on iPhone 12, 13, 14, 15, and 16, and found that the variance of the depth map on models newer than the iPhone 12 is significantly greater than on the iPhone 12.
Enabling depth filtering causes the depth data to be influenced by the previous frame, adding unnecessary noise, especially when the phone is moving.
This is not ideal for high-precision reconstruction. I tried adding depth map smoothing in post-processing to compensate for the large depth deviation, but the results are still poor.
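For illustration, a minimal sketch of the kind of post-processing temporal smoothing described above (the flat Float buffer layout and the blend factor are assumptions):

// Exponential moving average over consecutive depth frames; alpha controls how much
// the current frame contributes versus the previous smoothed result.
func smoothDepth(previous: [Float]?, current: [Float], alpha: Float = 0.3) -> [Float] {
    guard let previous, previous.count == current.count else { return current }
    return zip(previous, current).map { prev, cur in
        (1 - alpha) * prev + alpha * cur
    }
}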
Are there any depth map smoothing solutions already announced by Apple?
macOS version: 26.0 (25A354)
Xcode version: Version 26.0 (17A324)
The project fails to compile with the following error:
SwiftExplicitDependencyCompileModuleFromInterface arm64 /Users/zhz/Library/Developer/Xcode/DerivedData/ModuleCache.noindex/AssetsLibrary-HTIJ05N58KN3.swiftmodule
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS26.0.sdk/usr/lib/swift/AssetsLibrary.swiftmodule/arm64e-apple-ios.swiftinterface:10:25: error: 'ALAssetsLibrary' is unavailable in iOS: Use PHPhotoLibrary from the Photos framework instead
8 | public import _StringProcessing
9 | public import _SwiftConcurrencyShims
10 | extension AssetsLibrary.ALAssetsLibrary {
| `- error: 'ALAssetsLibrary' is unavailable in iOS: Use PHPhotoLibrary from the Photos framework instead
11 | #if compiler(>=5.3) && $NonescapableTypes
12 | @available(iOS, introduced: 9.0, deprecated: 9.0, obsoleted: 26.0)
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS26.0.sdk/System/Library/Frameworks/AssetsLibrary.framework/Headers/ALAssetsLibrary.h:80:12: note: 'ALAssetsLibrary' was obsoleted in iOS 26.0
78 |
79 | OS_EXPORT AL_DEPRECATED(4, "Use PHPhotoLibrary from the Photos framework instead")
80 | @interface ALAssetsLibrary : NSObject {
| `- note: 'ALAssetsLibrary' was obsoleted in iOS 26.0
81 | @package
82 | id _internal;
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS26.0.sdk/usr/lib/swift/AssetsLibrary.swiftmodule/arm64e-apple-ios.swiftinterface:1:1: error: failed to build module 'AssetsLibrary'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.2 effective-5.10 (swiftlang-6.2.0.17.14 clang-1700.3.17.1)', while this compiler is 'Apple Swift version 6.2 effective-5.10 (swiftlang-6.2.0.19.9 clang-1700.3.19.1)'). Please select a toolchain which matches the SDK.
Hi,
In the iOS13 and macOS Catalina release notes it says:
Metal CIKernel instances now support arguments with arbitrarily structured data.
I've been trying to use this functionality in a CIKernel with mixed results. I'm particularly interested in passing data in the form of a dynamically sized array. It seems to work up to a certain size. Beyond that threshold, the excess data is discarded and the kernel becomes unstable. I assume there is some kind of memory alignment issue going on, but I've tried various types in my array and always get a similar result.
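For concreteness, a sketch of the host-side pattern being used to pass the array (the kernel name, argument order, and the assumption that a Data argument binds to a constant buffer in the Metal kernel are illustrative, not confirmed by documentation):

import CoreImage

// kernel is assumed to be a CIKernel loaded from a compiled Metal library whose function
// declares an argument such as `constant float *values` plus a count parameter.
let values: [Float] = (0..<count).map { Float($0) } // count is the dynamic array length
let payload = values.withUnsafeBufferPointer { Data(buffer: $0) }

let output = kernel.apply(
    extent: inputImage.extent,
    roiCallback: { _, rect in rect },
    arguments: [inputImage, payload, values.count] // the Data argument carries the structured array
)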
I have not found any documentation or sample code regarding this. It would be great to know how this is intended to work and what the limitations are.
In the forums there are two similar unanswered questions about data arguments, so I'm sure there are a few out there with similar issues.
Thanks!
Michael
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run this, I get audio for a split second, and then nothing, indefinitely.
Any ideas what could be going wrong?
Here's a minimum code example to demonstrate:
#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128

// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(float))

void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}

AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };

void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;

    coreAudioQueue = NULL;
    OSStatus result;
    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr)
    {
        assert(false && "AudioQueueNewOutput failed!");
        abort();
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(false && "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }

    AudioQueueStart(coreAudioQueue, NULL);
    sleep(10); // some time to hear the audio
    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
What options do I have if I don't want to use Blackmagic's Camera ProDock as the external Sync Hardware, but instead I want to create my own USB-C hardware accessory which would show up as an AVExternalSyncDevice on the iPhone 17 Pro?
Which protocol does my USB-C device have to implement to show up as an eligible clock device in AVExternalSyncDevice.DiscoverySession?
Where can I find the documentation of the Genlock feature of the iPhone 17 Pro? How does it work and how can I use it in my app?
I wanted to know when iOS 26 will be officially released.
My Environment:
Device: Mac (Apple Silicon, arm64)
OS: macOS 15.6.1
Description:
I'm developing a music app and have encountered an issue where I cannot update the playbackState in MPNowPlayingInfoCenter after my app loses audio focus to another app. Even though my app correctly sets MPNowPlayingInfoCenter.default().playbackState = .paused, the system's Now Playing UI (Control Center, Lock Screen, AirPods controls) does not reflect the change. The UI remains stuck until the app that currently holds audio focus also changes its playback state.
I've observed this same behavior in other third-party music apps from the App Store, which suggests it might be a system-level issue.
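For reference, a minimal sketch of how the state is being pushed to the system (the titles, durations, and call sites are illustrative, not the app's actual code):

import MediaPlayer

func publishNowPlaying(title: String, duration: TimeInterval, position: TimeInterval, isPaused: Bool) {
    let center = MPNowPlayingInfoCenter.default()
    center.nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
        MPNowPlayingInfoPropertyPlaybackRate: isPaused ? 0.0 : 1.0
    ]
    // playbackState is the macOS-only property that the system UI appears to ignore
    // once another app holds audio focus.
    center.playbackState = isPaused ? .paused : .playing
}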
Steps to Reproduce:
Use the two most popular music apps in the Chinese App Store, NetEase Cloud Music and QQ Music (let's call them App A and App B):
Start playback in App A.
Start playback in App B. (App B now has audio focus, and App A is still playing).
Attempt to pause App A via the system's Control Center or its own UI.
Observed Behavior: App A's audio stream stops, but in the system's Now Playing controls, App A still appears to be playing. The progress bar continues to advance, and the pause button becomes unresponsive.
If you then pause App B, the Now Playing UI for App A immediately corrects itself and displays the proper "paused" state.
My Questions:
Is there a specific procedure required to update MPNowPlayingInfoCenter when an app is not the current "Now Playing" application?
Is this a known issue or expected behavior in macOS?
Are there any official workarounds or solutions to ensure the UI updates correctly?
Hello, I want to know whether there are any restrictions in MusicKit that would prevent a mobile app from applying an EQ to the audio of tracks coming from Apple Music (without modifying the actual track structure/data, of course, just the audio output).
Hello,
I'm trying to determine the best/recommended AVAudioSession configuration (i.e category, mode, and options) for the following use-case.
Essentially, I'd like to switch between periods of playing an audio file and then recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd like the user to still be able to interact with Siri, and I'd like it to work with CarPlay, where navigation prompts can occur.
I would assume the category to use is 'playAndRecord', but I'm not sure whether it's better to set that once for the entire lifecycle, or to set 'playback' for audio file playback and then switch to 'playAndRecord' for speech recognition. I'm also not sure about the best 'mode' and 'options' to set. Any suggestions would be appreciated.
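For reference, one configuration being weighed, set once for the whole lifecycle (a sketch only; the mode and options are assumptions, not a recommendation):

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // playAndRecord covers both file playback and speech recognition;
    // .spokenAudio and the options below are guesses at the Siri/CarPlay requirements.
    try session.setCategory(.playAndRecord,
                            mode: .spokenAudio,
                            options: [.allowBluetooth, .defaultToSpeaker, .duckOthers])
    try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}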
Thanks.
Hello,
I am trying to access the Apple Music Feed API, but I am receiving a 401 Unauthorized error whenever I try to access it.
I have tried using my own code to generate a JWT and directly call the API (which can call the standard Apple Music API successfully).
> GET /v1/feed/song/latest HTTP/2
> Host: api.media.apple.com
> user-agent: insomnia/2023.5.8
> authorization: Bearer [REDACTED]
> accept: */*
< HTTP/2 401
< content-type: application/json; charset=utf-8
< content-length: 0
< x-apple-jingle-correlation-key: AV5IOHBNM2UUJVOFQ4HZ2TGF6Q
< x-daiquiri-instance: daiquiri:10001:daiquiri-all-shared-ext-7bb7c9b9bb-r459v:7987:25RELEASE91:daiquiri-amp-kubernetes-shared-ext-ak8s-prod-pv4-amp-daiquiri-ingress-prod
and also the Apple provided Python example code, which gives me authentication errors too.
$ python3 ./apple_music_feed_example.py --key-id NMBH[...] --team-id 3TNZ[...] --secret-key-file-path "/Users/foxt/Documents/am-feed/NMBH[...].p8" --out-dir .
running....
INFO:__main__:Sending requests to https://api.media.apple.com
INFO:__main__:Getting the latest export for feed artist
Exception: Authentication Failed. Did you provide the correct team id, key id, and p8 file?
Does this API need to be enabled on my account separately from the main Apple Music API? The documentation reads to me as if anyone with an Apple Developer Programme membership can use this API, and I did not see any information regarding any other requirements.
Hi everyone,
I noticed that Apple recently added a few new beta sample codes related to video encoding:
Encoding video for low-latency conferencing
Encoding video for live streaming
While experimenting with H.264 encoding, I came across some questions regarding certain configurations (a sketch of the session setup I'm testing against follows this list):
When I enable kVTVideoEncoderSpecification_EnableLowLatencyRateControl, is it still possible to use kVTCompressionPropertyKey_VariableBitRate? In my tests, I get an error.
It also seems that kVTVideoEncoderSpecification_EnableLowLatencyRateControl cannot be used together with kVTCompressionPropertyKey_ConstantBitRate when encoding H264. Is that expected?
When using kVTCompressionPropertyKey_ConstantBitRate with kVTCompressionPropertyKey_MaxKeyFrameInterval set to 2, the encoder outputs only keyframes, and the frame size keeps increasing, which doesn’t seem like the intended behavior.
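For context, a minimal sketch of the low-latency session setup these questions refer to (the dimensions are assumptions and error handling is omitted):

import VideoToolbox

// Enable the low-latency rate controller via the encoder specification at creation time.
let encoderSpec: [CFString: Any] = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
]

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920,  // assumed resolution
    height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: encoderSpec as CFDictionary,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session
)
// (status checking omitted) The questions above concern which rate-control properties
// (VariableBitRate, ConstantBitRate, DataRateLimits) such a session will then accept
// via VTSessionSetProperty.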
Regarding the following code from the sample:
let byteLimit = (Double(bitrate) / 8) * 1.5 as CFNumber
let secLimit = Double(1.0) as CFNumber
let limitsArray = [ byteLimit, secLimit ] as CFArray // Each 1 second limit byte as bitrate
err = VTSessionSetProperty(session, key: kVTCompressionPropertyKey_DataRateLimits, value: limitsArray)
This DataRateLimits setting doesn’t seem to have any effect in my tests. Whether I set it or not, the values remain unchanged.
Since the documentation on developer.apple.com/documentation
doesn’t clearly explain these cases, I’d like to ask if anyone has insights or recommendations about the proper usage of these settings.
Thanks in advance!
I am trying to debug the AAX version of my plugin (MIDI effect) on Pro Tools, but I am getting the following error (Mac console) when attempting to load it:
dlsym cannot find symbol g_dwILResult in CFBundle etc..
I used Xcode 16.4 to build the plugin.
Has anybody come across the same or a similar message?
Best,
Achillefs
Axart Labs
Hello,
Environment
macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 Beta 9
In a simple AVFoundation video-playback sample, I’m seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification.
I’ve attached a minimal sample below. Please replace videoURL with a valid short video URL.
Repro steps
Tap “Play” to start playback and let the video finish.
The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see Play finished. in the console.
Without relaunching, tap “Play” again. This is where the issue arises.
Observed behavior
On iOS 18 and earlier: The video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and Play finished. appears in the console. The same happens every time you press “Play”.
On iOS 26: Pressing “Play” does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints Play finished. is never called (the callback enclosing that line is not invoked again).
Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests there has been a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn’t find any mention of this in the release notes or API Reference.
Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we’re forced to adjust our logic. If there is a way to achieve the iOS 18–style behavior on iOS 26, I would appreciate guidance.
Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple’s perspective and iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.
import UIKit
import AVFoundation

final class ViewController: UIViewController {
    private let videoURL = URL(string: "https://......mp4")!
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }

    // MARK: - Setup
    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)
        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Action
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }

    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem -> AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }

    // MARK: - Actions
    @objc private func didTapPlay() {
        player?.play()
    }

    @objc private func didTapStop() {
        player?.pause()
    }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}
I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
Because I want to control the grid size and the number of images in the HEIC myself, I decided to perform the HEVC encoding manually and then assemble the HEIC image. Previously, I used VTCompressionSession to accomplish this, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18; in other words, it generated correct HEVC encodings, and the resulting CMFormatDescription must also have been correct, since I relied on it to generate the decoderConfig, and otherwise the final image would have had decoding issues.
However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator; it only fails on real hardware. The abnormal result is that the image comes out completely black, although the image dimensions are still correct.
After troubleshooting, I suspect that the encoding behavior of VTCompressionSession has changed on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect.
I created a VTCompressionSession using the following configuration.
var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)

let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) {
    VTCompressionSessionInvalidate(newSession)
}
Then I use the following code to encode each grid tile of the image:
let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        try check(status, VideoToolboxErrorDomain)
        if let sampleBuffer {
            let encodedImage = try self.encodedImage(from: sampleBuffer)
            // handle encodedImage
        }
    }
try check(status, VideoToolboxErrorDomain)
If I try to display this abnormal image in the App, my console outputs the following error, so it can be inferred that the issue probably occurred during decoding.
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
It needs to be emphasized again that this code used to work fine in the past, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 has introduced many new properties, but I’m not sure whether some of these new properties must be set in the new system, and there’s no information about this in the official documentation.