If I make a request to https://api.music.apple.com/v1/storefronts/us with the proper developer JWT in the header, I receive a successful response with a list of storefronts. If I remove the token, I get back a 401 error, as expected.
If I call any other catalog-based query, I get back a 500 error.
For instance: https://api.music.apple.com/v1/catalog/us/albums/310730204
returns a 500 error with the body being
{"message":"An unexpected error occurred"}
I'm not sure what I can do to fix this. Please help.
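For illustration, a minimal sketch of the kind of request described above, assuming a plain URLSession call with the developer token in the Authorization header (the token value is a placeholder):

import Foundation

// Sketch of the failing catalog request (developerToken is a placeholder).
let developerToken = "<developer JWT>"
var request = URLRequest(url: URL(string: "https://api.music.apple.com/v1/catalog/us/albums/310730204")!)
request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, response, error in
    if let http = response as? HTTPURLResponse {
        print("Status: \(http.statusCode)")   // 200 for /v1/storefronts/us, but 500 here
    }
    if let data, let body = String(data: data, encoding: .utf8) {
        print(body)                           // {"message":"An unexpected error occurred"}
    }
}.resume()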
Where can I find the documentation of the Genlock feature of the iPhone 17 Pro? How does it work and how can I use it in my app?
I'm using this library: https://github.com/Yummypets/YPImagePicker to capture photos.
I've modified it slightly, and I'm using an older version.
When testing on my iPhone 16e running iOS 26, whenever I take a photo, I get the following two error messages:
<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)
These error messages appear, but as far as I can tell, the photo comes through OK, and I can save the data no problem. I've even removed all my handling code to see if it was something I was doing.
I don't really want to ship with these errors showing, but I also have no idea what could be causing them. ChatGPT was not helpful in diagnosing this.
Does anyone know what can cause this error?
Is there a way I can see the source code to figure out if there's something I'm doing wrong here?
It really seems like this is an internal Apple error, or else I would have expected more details relating to the code I've written. Any clues would be appreciated!
Topic:
Media Technologies
SubTopic:
Photos & Camera
Hi everyone,
We’re currently developing a music-based app using MusicKit, and we recently noticed that iOS 26 beta introduces a new “Automix” feature in the Apple Music app. This enables seamless DJ-style transitions between songs—beyond the standard crossfade functionality.
We’re trying to understand:
Will this Automix feature be accessible to third-party apps that use MusicKit?
If not available in the initial iOS 26 release, is there a plan to expose it through public APIs in a future update?
Is there any technical documentation, WWDC session, or roadmap info regarding Automix support via MusicKit?
This functionality would be a significant enhancement for our app, especially for intelligent audio transitions and curated playlists.
Thanks.
After upgrading to iOS 18.4, I'm no longer able to establish an AirPlay v1 connection to an audio system. The symptom is that the AirPlay route picker just spins when trying to connect, and it eventually gives up.
I tested this on an iPhone 14, connecting to a HomePod, an AirPort Express, an Apple TV, and a WiiM Pro. If I try connecting with AirPlay v2, e.g. using Apple Music, the connection succeeds and audio can be played.
I'm the developer of an app that plays audio over AirPlay while also recording. My app has to use AirPlay v1 because AVAudioSession doesn't allow the .longFormAudio routing policy when the category is .playAndRecord. This issue is a real pain, as it means my app is suddenly broken for many thousands of users.
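For reference, a minimal sketch of the category/policy combination described above; as noted, setCategory is expected to throw for this combination (the exact mode and options are assumptions):

import AVFoundation

// Sketch: .playAndRecord combined with the .longFormAudio routing policy is the
// combination described above; setCategory is expected to throw for it.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            policy: .longFormAudio,
                            options: [])
    try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}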
Is anyone else seeing this issue? Any suggestions for a workaround?
Hi,
After updating to iOS 26, our app is experiencing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier.
Error:
Domain [CoreMediaErrorDomain]
Code [-15628]
Description [The operation couldn’t be completed.]
Underlying Error Domain [(null)]
Code [0]
Description [(null)]
Environment:
iOS version: iOS 26
Stream type: HLS (m3u8) with segment (.ts) files
Observed behaviour:
We don’t have concrete steps to reproduce the issue, but so far, we have observed that this error tends to occur under low network conditions.
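For anyone debugging the same failure, here is a minimal diagnostic sketch (an assumed starting point, not a fix) that logs the player item's error and its HLS error-log entries:

import AVFoundation

// Diagnostic sketch: watch for item failures and dump the HLS error log.
final class PlaybackDiagnostics {
    private var statusObservation: NSKeyValueObservation?
    private var errorLogObserver: NSObjectProtocol?

    func attach(to item: AVPlayerItem) {
        statusObservation = item.observe(\.status, options: [.new]) { item, _ in
            if item.status == .failed {
                print("Item failed: \(String(describing: item.error))")
            }
        }
        errorLogObserver = NotificationCenter.default.addObserver(
            forName: AVPlayerItem.newErrorLogEntryNotification,
            object: item,
            queue: .main
        ) { notification in
            guard let item = notification.object as? AVPlayerItem,
                  let events = item.errorLog()?.events else { return }
            for event in events {
                print("HLS error: domain=\(event.errorDomain) code=\(event.errorStatusCode) comment=\(event.errorComment ?? "-")")
            }
        }
    }
}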
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following:
This only happens when the app is built for Mac Catalyst, not on iOS, iPadOS, or “real” macOS (AppKit).
The images in question have unusual color spaces. We observed the issue for uRGB and eciRGB v2.
Those images are rendered correctly in Photos and Preview on all platforms.
When displaying the image inside of a UIImageView or in a SwiftUI Image, they render correctly.
The issue only occurs when loading the image via Core Image.
When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result.
(Render graph screenshots: Mac (AppKit) and Catalyst.)
Something seems to be off when Core Image tries to load an image with a foreign color space under Catalyst.
We identified a workaround: by transcoding the image with a CGImageDestination and the kCGImageDestinationOptimizeColorForSharing option, Image I/O converts the image to sRGB (or similar), and Core Image is then able to load it correctly. However, this potentially loses fidelity.
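A minimal sketch of that transcoding workaround, assuming the image arrives as Data and that PNG is an acceptable intermediate format (error handling omitted):

import Foundation
import ImageIO
import UniformTypeIdentifiers
import CoreImage

// Sketch of the workaround: transcode via Image I/O with
// kCGImageDestinationOptimizeColorForSharing so the color space is converted
// to something Core Image under Catalyst can handle (possibly losing fidelity).
func ciImageViaSharingTranscode(from data: Data) -> CIImage? {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData,
                                                             UTType.png.identifier as CFString,
                                                             1,
                                                             nil) else { return nil }
    let options: [CFString: Any] = [kCGImageDestinationOptimizeColorForSharing: true]
    CGImageDestinationAddImageFromSource(destination, source, 0, options as CFDictionary)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return CIImage(data: output as Data)
}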
Or might there be a better workaround?
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
Core Image
Core Graphics
Hello,
Environment
macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 Beta 9
In a simple AVFoundation video-playback sample, I’m seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification.
I’ve attached a minimal sample below. Please replace videoURL with a valid short video URL.
Repro steps
Tap “Play” to start playback and let the video finish.
The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see Play finished. in the console.
Without relaunching, tap “Play” again. This is where the issue arises.
Observed behavior
On iOS 18 and earlier: The video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and Play finished. appears in the console. The same happens every time you press “Play”.
On iOS 26: Pressing “Play” does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints Play finished. is never called (the callback enclosing that line is not invoked again).
Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests there has been a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn’t find any mention of this in the release notes or API Reference.
Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we’re forced to adjust our logic. If there is a way to achieve the iOS 18–style behavior on iOS 26, I would appreciate guidance.
Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple’s perspective and iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.
import UIKit
import AVFoundation

final class ViewController: UIViewController {
    private let videoURL = URL(string: "https://......mp4")!
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }

    // MARK: - Setup

    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)
        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Actions
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }

    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem -> AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }

    // MARK: - Actions

    @objc private func didTapPlay() {
        player?.play()
    }

    @objc private func didTapStop() {
        player?.pause()
    }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}
I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
We are getting reports from customers that they are not able to play videos in our app after updating their phones to iOS 18.3.1.
(Further checking indicates that it happens on all iOS 18 versions; it suddenly started occurring on February 18th, 2025.)
When checking logs we see that playback is failing due to CoreMediaErrorDomain error -42709.
This is an undocumented error code and hence we do not know the cause of the playback issue.
Does anyone know what this error code means and how the app should handle it?
Reported as FB16638501.
Topic:
Media Technologies
SubTopic:
Streaming
Hi everyone,
I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback, using ApplicationMusicPlayer. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing.
I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement.
My questions:
Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated!
Thanks in advance.
Hi Apple engineers,
Hoping you can reply to this one.
We're developing a text-to-speech app. Everything went well until iOS was upgraded to 18.
AVSpeechSynthesisVoice(language: "zh-CN") works correctly under iOS 16 and iOS 17: it speaks Mandarin.
In iOS 18, we noticed that Siri's language setting interferes with AVSpeechSynthesisVoice: it plays Cantonese instead of Mandarin.
The Siri language settings that trigger this issue with AVSpeechSynthesisVoice:
Chinese (Cantonese - China mainland)
Chinese (Cantonese - Hong Kong)
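As a hedged workaround sketch (not confirmed to bypass the Siri-language interaction), one could pick an explicit zh-CN voice from the installed voices rather than relying on the language code alone:

import AVFoundation

// Sketch: enumerate installed voices and pick one whose language is exactly "zh-CN",
// instead of letting AVSpeechSynthesisVoice(language:) resolve the voice for us.
func mandarinVoice() -> AVSpeechSynthesisVoice? {
    let zhCN = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == "zh-CN" }
    // Prefer an enhanced-quality voice if one is installed.
    return zhCN.first(where: { $0.quality == .enhanced }) ?? zhCN.first
}

let synthesizer = AVSpeechSynthesizer() // keep a strong reference in real code
let utterance = AVSpeechUtterance(string: "你好，世界")
utterance.voice = mandarinVoice() ?? AVSpeechSynthesisVoice(language: "zh-CN")
synthesizer.speak(utterance)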
I have a crash related to playing video with AVPlayerViewController and AVQueuePlayer. I download the video locally from the network and then initialize it using AVAsset and AVPlayerItem. I can't reproduce it locally, but crashes are reported in Firebase Crashlytics, only for users on iOS 18.4.0 and later, with this trace:
Crashed: com.apple.avkit.playerControllerBackgroundQueue
0 libobjc.A.dylib 0x1458 objc_retain + 16
1 libobjc.A.dylib 0x1458 objc_retain_x0 + 16
2 AVKit 0x12afdc __77-[AVPlayerController currentEnabledAssetTrackForMediaType:completionHandler:]_block_invoke + 108
3 libdispatch.dylib 0x1aac _dispatch_call_block_and_release + 32
4 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
5 libdispatch.dylib 0x6560 _dispatch_continuation_pop + 596
6 libdispatch.dylib 0x5bd4 _dispatch_async_redirect_invoke + 580
7 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
8 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
9 libsystem_pthread.dylib 0x4624 _pthread_wqthread + 232
10 libsystem_pthread.dylib 0x19f8 start_wqthread + 8
A recent WWDC session, "Learn about Apple Immersive Video technologies," showed an Apple Spatial Audio Format Panner plug-in for Pro Tools. The presenter stated that it's available on a per-user license.
Where can users access this?
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.
Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers)
Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers)
When using paired input and output devices:
The setup works as expected.
Example: MacBook Pro microphone → MacBook Pro speakers.
When using mismatched devices:
AVAudioEngine setup fails during aggregate device construction.
Example: AirPods microphone → MacBook Pro speakers.
Error logs indicate a channel count mismatch.
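For reference, a minimal sketch of the kind of setup that triggers this (simplified; the actual app code is more elaborate, and device selection is assumed to happen via the system default input/output devices):

import AVFoundation

// Sketch: enable voice processing on the input node and wire input -> main mixer.
// With mismatched default input/output devices (e.g. AirPods mic -> MacBook Pro
// speakers), engine initialization fails with the aggregate-device errors below.
let engine = AVAudioEngine()
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
    try engine.start()
} catch {
    print("Engine setup failed: \(error)")
}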
Here are the partial logs. Due to the content limit, I cannot post the entire logs.
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Is it possible to use voice processing with different input/output devices?
If yes, are there any specific configurations required to handle mismatched devices?
How can we resolve channel count mismatch errors during aggregate device construction?
Are there settings or API adjustments to enforce compatibility between input/output devices?
Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices?
For instance, can we force an intermediate channel configuration or downmix input/output formats?
Our multimedia application Boinx FotoMagico displays media files of various kinds with a Metal rendering engine. At the moment we still use .bgra8Unorm pixel format and sRGB color space and only render in SDR, which is increasingly a problem, as much of the video content is HDR nowadays (e.g. videos shot on an iPhone). For that reason we would like to switch to EDR rendering with .rgba16Float pixel format and extendedLinearDisplayP3 color space.
We have already worked out how to do this for HDR image files, but still have a technical problem when rendering HDR video files. We are using AVFoundation to get the video frames as CVPixelBuffers and convert them to MTLTexture using a CVMetalTextureCache. MTLTextures are then further processed in various compute shaders before being rendered to screen. However the pixel values in the texture are not what we expected. Video frames appear too bright/overexposed.
In WWDC21 session "Explore HDR rendering with EDR" Ken Greenebaum mentioned:
“AVFoundation does not presently decode HDR formats, such as HDR10, to EDR. Consequently, these need to be adapted for use with EDR rendering. This conversion is straightforward and involves two steps. First, converting to linear light by applying the inverse transfer function. And second, dividing by the medium's reference white.”
https://developer.apple.com/videos/play/wwdc2021/10161?time=1498
However, the session does not explain how to get or calculate the correct value for "reference white". We could not find any relevant info on the web, which is why we need DTS assistance. We need code that calculates the correct value for reference white for any kind of video, whether it is SDR or HDR, and regardless of codec and encoding. I assume Ken Greenebaum is the best Apple engineer to ask in this case, since he recorded most of the EDR-related WWDC sessions in recent years.
We have written a small test app that renders a short sample video (HLG encoding). The window contains two views. The upper view uses an AVPlayerLayer and renders the video natively just like QuickTime Player. The video content looks correct here. BTW, the window background is SDR white, so that bright EDR pixels can be clearly identified, e.g. the clouds just above the mountains in the upper left corner of the sample video. You may need to lower display brightness a bit if these clouds do not appear brighter than the white window background.
The bottom view uses a CAMetalLayer and low-level Metal rendering. The CVPixelBuffers we receive from AVFoundation still need to be scaled down so that SDR reference white reaches pixel value 1.0. Entering a value of 9.0 to 10.0 for reference white in the text field makes it look about right on my Studio Display. But that is just experimental for this sample video file. We need code to calculate the correct value for reference white for any kind of video file!
We have a couple of questions:
SDR videos should probably use 1.0 as reference white, as their encoded pixel values can already be used as is? Is this assumption correct?
Different video encoding of HDR video (HLG, PQ, etc) will probably lead to different values for reference white?
Is the value for reference white constant throughout a video, or can it vary over time, either scene by scene, or even frame by frame?
If it can vary, does the CVPixelBuffer of the current video frame contain all the necessary metadata to calculate the correct value?
Does the NSScreen.maximumExtendedDynamicRangeColorComponentValue also influence the reference white value?
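For context on the metadata question above, here is a minimal sketch of reading the transfer-function and color-primaries attachments from a CVPixelBuffer; this only inspects metadata and does not by itself answer the reference-white question:

import CoreVideo

// Sketch: inspect the color metadata attached to a video frame's pixel buffer.
// This tells us e.g. whether the transfer function is HLG or PQ, but not which
// reference white value to divide by.
func logColorMetadata(of pixelBuffer: CVPixelBuffer) {
    let transferFunction = CVBufferCopyAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey, nil) as? String
    let colorPrimaries = CVBufferCopyAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey, nil) as? String

    print("Transfer function: \(transferFunction ?? "unknown")")
    print("Color primaries: \(colorPrimaries ?? "unknown")")

    if transferFunction == (kCVImageBufferTransferFunction_ITU_R_2100_HLG as String) {
        print("Frame is HLG-encoded")
    } else if transferFunction == (kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ as String) {
        print("Frame is PQ (HDR10)-encoded")
    }
}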
The attached sample project is structured in a way that the only piece of code that needs to be modified is the ViewController.sdrReferenceWhiteValue() function. Please read the comments and the #warning in this function. This is where the code for calculating the reference white value should be inserted.
Here is the download link for the sample project:
https://www.dropbox.com/scl/fi/4w5gmftav5xhbixu9u6pb/HDRMetalTest.zip?rlkey=n8cm02soux3rx03vplgo6h1lm&dl=0
Topic:
Media Technologies
SubTopic:
Video
Hi,
I have been working on a project that lets users listen to their favorite music through a streaming service, which so far has been Spotify. The app has a programmable 3D/2D interface with the ability to connect to devices in your home and have them react to music. As of September 2024, Spotify decommissioned their Audio Analysis API. I have seen other posts mention playing Apple Music through AVFoundation, which would break DRM and so isn't supported. However, the Spotify Audio Analysis API did not allow a full frequency reconstruction: it is entirely temporal data on beats, kicks, loudness, and timbre changes, which themselves are operators on the spectral data from the FFT. It would be very useful for the developer community if we got the ability to do this, and it would probably boost Apple Music adoption among developers and those who use their apps a lot more.
Would love to hear your thoughts about this and Happy New Year!
Hi team,
In the Apple Music Feed datasets, we've noticed some unexpected values in the song and album tables.
The primaryartists column in either the song or the album table may contain a "non-default" artist name, such as the katakana name shown in the example below:
select id, name, namedefault, primaryartists from amf_song where id = '1698723329'
id | name | namedefault | primaryartists
----------------------------------------
1698723329 | {default=California} | California | [{id=1264818718, name=チャペル・ローン}]
select * from amf_artist where id = '1264818718'
id | name | namedefault | namepronunciation |
----------------------------------------------
1264818718 | {default=Chappell Roan, ja=チャペル・ローン} | Chappell Roan | {ja=チャペルローン} |
Shouldn't the primaryartists column be showing the namedefault instead of the Japanese language version?
When can we expect this bug to be resolved?
Thanks,
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()
        let image = NSImage(named: "image")!
        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?
Hi all,
we are in the business of scanning documents and barcodes with the camera systems of mobile devices. Since there is a wide variety of use cases, from scanning the tiniest barcodes and small business cards to scanning barcodes or large documents from far distances, we prefer to rely on the triple-camera devices, if available, with automatic constituent device switching.
This approach used to be working perfectly fine. Depending on the zoom level (we prefer to use an initial zoom value of 2.0) and the focusing distance the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all.
Unfortunately, the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow, and sometimes it does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then aiming at a very close object that is maybe 2" away. The iPhone 15 Pro always switches immediately to the ultra-wide camera in this situation, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.
Of course we assumed that our code is responsible for these issues. So we experimented with restricting the devices and so on. Then we stripped more and more configuration code but nothing we tried improved the situation.
So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Run it on various iPhones and aim at a far distance (> 10 feet), then quickly at a very close distance (< 5 inches).
Here is a list of devices and our test results:
iPhone 15 Pro, iOS 17.6: very fast and reliable switching
iPhone 15 Pro, iOS 18.1: very fast and reliable switching
iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
iPhone 16 Pro, iOS 18.1: slow switching, unreliable
iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable
Questions:
Has anyone else seen this issue? And possibly found a workaround?
Is this behaviour intended on iPhone 16 Pro models? Can we somehow improve the switching speed?
Furthermore, the iPhone 16 Pro models show a jumping preview in the preview layer when they switch the active constituent device. Not dramatic, but compared to the other phones it looks like a glitch.
Thank you very much!
Kind regards,
Sebastian
import UIKit
import AVFoundation

class ViewController: UIViewController {
    var captureSession: AVCaptureSession!
    var captureDevice: AVCaptureDevice!
    var captureInput: AVCaptureInput!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var activePrimaryConstituentToken: NSKeyValueObservation?
    var zoomToken: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        checkPermissions()
        setupAndStartCaptureSession()
    }

    func checkPermissions() {
        let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        switch cameraAuthStatus {
        case .authorized:
            return
        case .denied:
            abort()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in
                if !authorized {
                    abort()
                }
            })
        case .restricted:
            abort()
        @unknown default:
            fatalError()
        }
    }

    func setupAndStartCaptureSession() {
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession = AVCaptureSession()
            self.captureSession.beginConfiguration()
            if self.captureSession.canSetSessionPreset(.photo) {
                self.captureSession.sessionPreset = .photo
            }
            self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
            self.setupInputs()
            DispatchQueue.main.async {
                self.setupPreviewLayer()
            }
            self.captureSession.commitConfiguration()
            self.captureSession.startRunning()

            self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in
                let type = device.activePrimaryConstituent!.deviceType.rawValue
                print("Device type: \(type)")
            })
            self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in
                let zoom = device.videoZoomFactor
                print("Zoom: \(zoom)")
            })

            let switchZoomFactor = 2.0
            DispatchQueue.main.async {
                self.setZoom(CGFloat(switchZoomFactor), animated: false)
            }
        }
    }

    func setupInputs() {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            captureDevice = device
        } else {
            fatalError("no back camera")
        }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
            fatalError("could not create input device from back camera")
        }
        if !captureSession.canAddInput(input) {
            fatalError("could not add back camera input to capture session")
        }
        captureInput = input
        captureSession.addInput(input)
    }

    func setupPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = self.view.layer.frame
    }

    func setZoom(_ value: CGFloat, animated: Bool) {
        guard let device = captureDevice else { return }
        let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor
        let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor
        let zoomValue = max(min(value, maxZoom), minZoom)
        let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
        do {
            try device.lockForConfiguration()
            if animated {
                device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
            } else {
                device.videoZoomFactor = zoomValue
            }
            device.unlockForConfiguration()
        } catch {
            return
        }
    }
}
We are moving to another streaming service and need to deliver an ASK, a .pem file with its key, and a CRT to enable DRM. The issue is that we no longer have that information.
The most logical step would be to revoke the current certificate and create a new one. Unfortunately, there is no revoke option for FairPlay Streaming certificates.
We asked Developer Support, who weren't able to help. We then submitted a revocation request as described in section 2.7 of the Apple Developer Program License Agreement, but they can only do this when the certificate is compromised.
So now we are stuck. Has anyone out there had the same issue and found a solution?
Your help is much appreciated.