Because I want to control the grid size and tile count of the HEIC image myself, I decided to perform the HEVC encoding manually and then assemble the HEIC image from the result. Previously, I used VTCompressionSession to accomplish this, and the results were satisfactory. It worked correctly on iOS 16 through iOS 18: it produced valid HEVC bitstreams, and its CMFormatDescription must also have been correct, since I relied on it to generate the decoderConfig; otherwise the final image would have shown decoding issues.
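For context, the decoderConfig is derived from the encoder's CMFormatDescription roughly as follows. This is a simplified sketch rather than the exact helper from my project (the function name is made up here, and error handling plus the actual hvcC box assembly are omitted):

import Foundation
import CoreMedia

// Simplified sketch: pull the HEVC parameter sets (VPS/SPS/PPS) out of the
// CMFormatDescription attached to an encoded sample buffer. These are the
// inputs for the HEIC decoder configuration (hvcC).
func hevcParameterSets(from formatDescription: CMFormatDescription) throws -> [Data] {
    // First call only asks how many parameter sets there are.
    var parameterSetCount = 0
    var status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
        formatDescription,
        parameterSetIndex: 0,
        parameterSetPointerOut: nil,
        parameterSetSizeOut: nil,
        parameterSetCountOut: &parameterSetCount,
        nalUnitHeaderLengthOut: nil)
    guard status == noErr else { throw NSError(domain: NSOSStatusErrorDomain, code: Int(status)) }

    var sets: [Data] = []
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
            formatDescription,
            parameterSetIndex: index,
            parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size,
            parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        guard status == noErr, let pointer else { throw NSError(domain: NSOSStatusErrorDomain, code: Int(status)) }
        sets.append(Data(bytes: pointer, count: size))
    }
    return sets
}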
However, on a physical device running iOS 26 it can no longer generate a valid HEIC image. Interestingly, it still works fine on the iOS 26 simulator; it only fails on real hardware. The symptom is that the resulting image is completely black, although its dimensions are still correct.
After some troubleshooting, I suspect that the encoding behavior of VTCompressionSession has changed on iOS 26, so the hvc1 bitstream I pass into the HEIC container ends up being incorrect.
I created a VTCompressionSession using the following configuration.
var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)
let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) {
    VTCompressionSessionInvalidate(newSession)
}
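For completeness, the session can also be prepared before the first tile and flushed after the last one has been submitted. A simplified sketch of what that typically looks like (not verbatim from my project; it reuses the same check helper and the newSession created above):

// Optional but typical: let the encoder allocate resources before the first frame.
let prepareStatus = VTCompressionSessionPrepareToEncodeFrames(newSession)
try check(prepareStatus, VideoToolboxErrorDomain)

// After the last grid tile has been submitted, force all pending frames out.
let completeStatus = VTCompressionSessionCompleteFrames(newSession, untilPresentationTimeStamp: .invalid)
try check(completeStatus, VideoToolboxErrorDomain)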
Then I use the following code to encode each grid tile of the image.
let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        guard let self else { return }
        do {
            try check(status, VideoToolboxErrorDomain)
            if let sampleBuffer {
                let encodedImage = try self.encodedImage(from: sampleBuffer)
                // handle encodedImage
            }
        } catch {
            // handle the encoding error
        }
    }
try check(status, VideoToolboxErrorDomain)
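The encodedImage(from:) helper isn't shown above; conceptually it copies the compressed payload out of the CMSampleBuffer, roughly along the lines of this simplified sketch (the function name here is illustrative, not the exact helper):

import Foundation
import CoreMedia

// Simplified sketch: copy the encoder's output (length-prefixed HEVC NAL units,
// as produced by VideoToolbox) out of the sample buffer's block buffer.
func encodedData(from sampleBuffer: CMSampleBuffer) throws -> Data {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
        throw NSError(domain: NSOSStatusErrorDomain, code: -1)
    }
    let length = CMBlockBufferGetDataLength(blockBuffer)
    var data = Data(count: length)
    try data.withUnsafeMutableBytes { (raw: UnsafeMutableRawBufferPointer) in
        guard let base = raw.baseAddress else { return }
        let status = CMBlockBufferCopyDataBytes(blockBuffer,
                                                atOffset: 0,
                                                dataLength: length,
                                                destination: base)
        if status != kCMBlockBufferNoErr {
            throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
        }
    }
    return data
}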
If I try to display this abnormal image in the app, the console outputs the following errors, so it can be inferred that the failure probably occurs during decoding.
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
To emphasize again: this code used to work fine, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 introduces many new compression-session properties, but I'm not sure whether any of them must now be set on the new system, and the official documentation says nothing about this.
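As part of the troubleshooting, one thing that may help compare devices is dumping which properties the encoder session reports as supported, so the lists can be diffed between an iOS 18 and an iOS 26 device. A minimal sketch (not from my original code; newSession is the session created above):

import VideoToolbox

// Print every property key the encoder session claims to support.
var supported: CFDictionary?
let copyStatus = VTSessionCopySupportedPropertyDictionary(newSession, supportedPropertyDictionaryOut: &supported)
if copyStatus == noErr, let supported {
    for key in (supported as NSDictionary).allKeys {
        print(key)
    }
}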
Hello,
Environment
macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 beta 9
In a simple AVFoundation video-playback sample, I’m seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification.
I’ve attached a minimal sample below. Please replace videoURL with a valid short video URL.
Repro steps
Tap “Play” to start playback and let the video finish.
The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see Play finished. in the console.
Without relaunching, tap “Play” again. This is where the issue arises.
Observed behavior
On iOS 18 and earlier: The video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and Play finished. appears in the console. The same happens every time you press “Play”.
On iOS 26: Pressing “Play” does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints Play finished. is never called (the callback enclosing that line is not invoked again).
Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests there has been a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn’t find any mention of this in the release notes or API Reference.
Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we’re forced to adjust our logic. If there is a way to achieve the iOS 18–style behavior on iOS 26, I would appreciate guidance.
Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple’s perspective and iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.
import UIKit
import AVFoundation

final class ViewController: UIViewController {

    private let videoURL = URL(string: "https://......mp4")!

    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }
    // MARK: - Setup
    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)
        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Action
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }
    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem -> AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }
    // MARK: - Actions
    @objc private func didTapPlay() {
        player?.play()
    }

    @objc private func didTapStop() {
        player?.pause()
    }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}
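In the meantime, a workaround I'm considering (not part of the sample above) is to rewind before playing whenever the item has already reached its end, instead of relying on the notification being posted again. A hedged sketch of a method that could be added to ViewController, with a made-up name:

    // Hypothetical replacement for didTapPlay: if the item has already played to
    // its end, seek back to the start before playing, rather than expecting the
    // end-of-playback notification to fire again as it did on iOS 18.
    @objc private func didTapPlayRewindingIfNeeded() {
        guard let player, let item = player.currentItem else { return }
        if item.duration.isValid && item.currentTime() >= item.duration {
            player.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
                player.play()
            }
        } else {
            player.play()
        }
    }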
I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.