Hi,
Since today, we are no longer able to make DELETE/PUT requests to the Apple Music API. As a result, we can't update a playlist's details, delete a playlist, delete tracks in a playlist, delete tracks in the library, and so on. Methods that previously worked now return only HTTP 403.
Why this change in the Apple Music API? Can we hope it will come back soon?
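For illustration, a hedged sketch of the kind of request that now fails (the endpoint path and token handling reflect our setup as I remember it; the playlist ID is a placeholder):
import Foundation

let developerToken = "<developer JWT>"     // placeholder
let musicUserToken = "<music user token>"  // placeholder

func deletePlaylist() async throws {
    var request = URLRequest(url: URL(string: "https://api.music.apple.com/v1/me/library/playlists/p.PLACEHOLDER")!)
    request.httpMethod = "DELETE"  // same outcome with PUT
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")
    let (_, response) = try await URLSession.shared.data(for: request)
    // Worked until today; now prints 403.
    print((response as? HTTPURLResponse)?.statusCode ?? -1)
}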
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
Hello,
I need to know what serves as a unique identifier of a MIDI device (source/destination). Important note: I want to get the same ID when a device is reconnected (unplugged and then plugged in again).
The main candidate is the kMIDIPropertyUniqueID property, but I don't know whether it meets the requirement above. An additional question: is it always available for any endpoint?
There is also the kMIDIPropertyDeviceID property. What about it?
And one more option is simply the MIDIEndpointRef returned by MIDIGetSource or MIDIGetDestination.
So what is the proper way to get an ID that persists between device reconnections?
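For concreteness, here is how I'm reading the property today (a minimal sketch; reading kMIDIPropertyUniqueID with MIDIObjectGetIntegerProperty is the standard mechanism, but whether the value is stable across replugging is exactly what I'm asking):
import CoreMIDI

func uniqueID(for endpoint: MIDIEndpointRef) -> MIDIUniqueID? {
    var id: MIDIUniqueID = 0
    // kMIDIPropertyUniqueID is an integer property on MIDI objects.
    let status = MIDIObjectGetIntegerProperty(endpoint, kMIDIPropertyUniqueID, &id)
    return status == noErr ? id : nil
}

// Enumerate sources and print their unique IDs.
for index in 0..<MIDIGetNumberOfSources() {
    let source = MIDIGetSource(index)
    if let id = uniqueID(for: source) {
        print("source \(index): uniqueID = \(id)")
    }
}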
I'd like to find out: Can backgrounded apps record audio?
In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything.
However I'm not familiar with the current state of affairs.
With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone?
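For reference, this is the kind of setup I mean (a minimal sketch; my assumption is that the "audio" entry in UIBackgroundModes plus a record-capable session is what's required):
import AVFoundation

func configureForBackgroundRecording() throws {
    // Assumes Info.plist contains UIBackgroundModes = ["audio"] and
    // NSMicrophoneUsageDescription, and the user has granted mic access.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
    // Actual capture would then go through AVAudioRecorder or AVAudioEngine.
}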
Thanks.
Hello,
As far as I know, and in all of my testing, there is no way for a user or a developer to change the frame rate of the video output on iPadOS. If you connect an iPad via a USB hub or a USB-to-HDMI adaptor and then connect it to an external monitor, it will output at 59.94 fps.
I have a video app where users monitor live video at 25 fps and 30 fps. They often output to an external display, and at times the external display stutters due to the mismatch in frame rate, i.e. monitoring at 25 fps while outputting at 59.94 fps.
I thought it was impossible to change the video output frame rate, then in V3.1 of the Blackmagic Camera App I saw an interesting change in their release notes:
‘Support for HDMI Monitoring at Sensor Rate and Resolution’
This means there is some way to modify it. I'm not sure if this is done via a private API that Apple has allowed Blackmagic to use; if so, how can we access it, or is there an undocumented way to enable this?
Thanks!
Hello, I am developing a custom player SDK using AVPlayer to support HLS and LL-HLS live streaming. I have some questions about the internal logic of AVPlayer regarding ABR, as this information is not explicitly covered in the documentation.
ABR Switching Logic: Does AVPlayer trigger bitrate switching primarily based on stall occurrences (buffer starvation)? I am curious if the switching logic is reactive to stalls or if it proactively switches to prevent them based on throughput estimation.
Developer Controls for ABR: To influence or control the ABR selection, are preferredPeakBitRate and preferredForwardBufferDuration the only properties available to developers? Are there any other recommended APIs to assist with ABR decisions?
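For concreteness, this is how I'm applying those two properties today (a minimal sketch; the URL is a placeholder and the numbers are arbitrary):
import AVFoundation

let url = URL(string: "https://example.com/stream/master.m3u8")!  // placeholder
let item = AVPlayerItem(url: url)

// Cap variant selection at ~4 Mbps; AVPlayer won't pick renditions above this.
item.preferredPeakBitRate = 4_000_000

// Hint AVPlayer to buffer roughly 10 seconds ahead.
item.preferredForwardBufferDuration = 10

// iOS 15+: a separate cap applies on expensive (e.g. cellular) networks.
item.preferredPeakBitRateForExpensiveNetworks = 2_000_000

let player = AVPlayer(playerItem: item)
player.play()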
Thank you for your help.
Hello,
We are experiencing on some occasions a wrong behavior with PDFDocument method:
func page(at index: Int) -> PDFPage?
With certain PDF files, this method returns the wrong PDFPage.
This occurs on iOS 18.3, 18.5 and 18.6.2 (and maybe on other versions).
Try this PDF for instance (page 81 is returned when index = 2):
https://drive.google.com/open?id=1MHm2wjfsbWB8OiRmARUMmvODYxp4DIqP&usp=drive_fs
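For reference, a minimal repro sketch (assuming the linked PDF is bundled as "sample.pdf"; the resource name is a placeholder):
import PDFKit

guard let url = Bundle.main.url(forResource: "sample", withExtension: "pdf"),
      let document = PDFDocument(url: url) else { fatalError("missing PDF") }

let page = document.page(at: 2)
// Expected: the third page of the document; observed: the page labeled 81.
print(page?.label ?? "nil")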
Also, I should mention that this doesn't occur systematically with this PDF: when we make a copy of the file, we don't observe the issue.
Could this be linked to some kind of internal cache issue?
Hi everyone, does anybody have any resources I could check out regarding the 48MP→12MP binning behavior on supported sensors? I know the 48MP sensor on iPhone can automatically bin pixels for better low-light performance, but I'm not sure how to reliably make this happen in practice.
On iPhone 14 Pro+ with a 48MP sensor, I want the best of both worlds for ProRAW:
∙ Bright light: 48MP full resolution
∙ Low light: 12MP pixel-binned for better noise
photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
let settings = AVCapturePhotoSettings(rawPixelFormatType: proRawFormat, processedFormat: [...])
settings.photoQualityPrioritization = .quality
// NOT setting settings.maxPhotoDimensions — always get 12MP
When I omit maxPhotoDimensions, iOS always returns 12MP regardless of lighting. When I set it to 48MP, I always get 48MP.
Is there an API to let iOS automatically choose the optimal resolution based on conditions, or should I detect low light myself (via device.iso / exposureDuration) and set maxPhotoDimensions accordingly?
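If the manual route is the way to go, this is the kind of heuristic I have in mind (a hedged sketch; the ISO and exposure thresholds are illustrative guesses, not tested values):
import AVFoundation

func photoDimensions(for device: AVCaptureDevice) -> CMVideoDimensions {
    let iso = device.iso
    let duration = device.exposureDuration.seconds
    let lowLight = iso > 800 || duration > 1.0 / 30.0  // hypothetical thresholds
    return lowLight
        ? CMVideoDimensions(width: 4032, height: 3024)  // 12MP, binned
        : CMVideoDimensions(width: 8064, height: 6048)  // 48MP
}

// Per capture, before calling capturePhoto(with:delegate:):
// settings.maxPhotoDimensions = photoDimensions(for: device)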
Any help or direction would be much appreciated!
Hi everyone,
I'm developing a camera application that requires precise, predictable control over the focus system. I'm encountering unexpected behavior with face-driven autofocus in continuous autofocus mode.
Issue:
When using AVCaptureDevice.FocusMode.continuousAutoFocus, the system continues to prioritize faces for focus even after attempting to disable face-driven autofocus with:
device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
device.isFaceDrivenAutoFocusEnabled = false
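For completeness, the full configuration sequence is along these lines (a minimal sketch, assuming device is the active AVCaptureDevice; error handling elided):
import AVFoundation

func disableFaceDrivenAF(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // The manual flag only takes effect once automatic adjustment is off.
    device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
    device.isFaceDrivenAutoFocusEnabled = false
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
}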
Observations:
The behavior is inconsistent across different scenes.
In well-lit/properly exposed scenes: focus persistently locks onto faces, ignoring my configuration.
In underexposed scenes: the intended focus behavior is more consistently respected.
Hi Apple Developer Forums,
I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.
What’s slow (important detail)
I’m measuring the time for:
let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage
This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.).
Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.
Warm-up attempts using bundled DNG
I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before the user takes a photo:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.
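For reference, the bundled-DNG warm-up is essentially this (a simplified sketch; the resource name is a placeholder, and whether prepareRender(_:from:to:at:) covers the full RAW shader set is part of what I'm asking):
import CoreImage
import CoreVideo

let context = CIContext()

func warmUpRAWPipeline() {
    guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),  // placeholder asset
          let data = try? Data(contentsOf: url),
          let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
          let image = rawFilter.outputImage else { return }

    // Prepare a tiny render so pipeline/shader setup happens without visible output.
    let destination = CIRenderDestination(width: 64, height: 64,
                                          pixelFormat: kCVPixelFormatType_32BGRA,
                                          commandBuffer: nil,
                                          mtlTextureProvider: nil)
    try? context.prepareRender(image, from: CGRect(x: 0, y: 0, width: 64, height: 64),
                               to: destination, at: .zero)
}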
Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing.
However:
In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.
Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)
Thanks for any guidance or experience sharing!
I am trying to use the new SpeechAnalyzer framework in my Mac app, and am running into an issue for some languages.
When I call AssetInstallationRequest.downloadAndInstall() for some languages, it throws an error:
Error Domain=SFSpeechErrorDomain Code=1 "transcription.ar asset not found after attempted download."
The ".ar" appears to be the language code, which in this case was Arabic.
When I call AssetInventory.status(forModules:) before attempting the download, it is giving me a status of "downloading" (perhaps from an earlier attempt?). If this language was completely unsupported, I would expect it to return a status of "unsupported", so I'm not sure what's going on here.
For other languages (Polish, for example) SpeechTranscriber.supportedLocale(equivalentTo:) is returning nil, so that seems like a clearly unsupported language. But I can't tell if the languages I'm trying, like Arabic, are supported and something is going wrong, or if this error represents something I can work around.
Here's the relevant section of code. The error is thrown from downloadAndInstall(), so I never even get as far as setting up the SpeechAnalyzer itself.
private func setUpAnalyzer() async throws {
    guard let sourceLanguage else {
        throw Error.languageNotSpecified
    }
    guard let locale = await SpeechTranscriber.supportedLocale(equivalentTo: Locale(identifier: sourceLanguage.rawValue)) else {
        throw Error.unsupportedLanguage
    }
    let transcriber = SpeechTranscriber(locale: locale, preset: .progressiveTranscription)
    self.transcriber = transcriber
    let reservedLocales = await AssetInventory.reservedLocales
    if !reservedLocales.contains(locale) && reservedLocales.count == AssetInventory.maximumReservedLocales {
        if let oldest = reservedLocales.last {
            await AssetInventory.release(reservedLocale: oldest)
        }
    }
    do {
        let status = await AssetInventory.status(forModules: [transcriber])
        print("status: \(status)")
        if let installationRequest = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
            try await installationRequest.downloadAndInstall()
        }
    }
...
We have the application 'ADS Smart', a companion application for our ADS Dashcam. We offer a feature that lets users stream the dashcam cameras' live footage through the app. Currently we are experiencing a delay of 30+ seconds before the live stream appears, i.e. the first frame of the live footage takes around 30+ seconds to display in the app. We are using the MobileVLCKit library to stream the videos in the app.
The current flow of the code:
Flutter triggers the native playback via a method channel
The Dart side calls the iOS method channel <identifier_name>/ts_player with method playTSFromURL passing:
url (e.g. rtsp://.... for live),
playerId
viewId (stable ID used to host native UI)
showControls
optional localIp
AppDelegate receives the call and prepares networking
Entry point: AppDelegate.tsChannel handler for "playTSFromURL" in AppDelegate.swift.
It resolves the Wi‑Fi interface and local IP if possible:
Sets VLC_SOURCE_ADDRESS to the Wi‑Fi IP (when available) to prefer Wi‑Fi for the stream.
Uses NWPathMonitor and direct interface inspection to find the Wi‑Fi interface (e.g., en0) and IP.
Kicks off best-effort route priming to the dashcam IP/ports (non-blocking), see establishWiFiRoutePriority.
AppDelegate chooses the right player implementation
createPlayerForURL(_:) decides:
RTSP (rtsp://..) --> use the VLCKit-backed player (class TSStreamPlayer, which provides a VLC-backed video player for iOS, handling playback of Transport Stream (TS) URLs with strict main-thread UI updates, view safety, and stream management, using MobileVLCKit)
.ts files --> use the VLCKit-based player for playing already recorded videos in the app.
If the selected player supports extras (e.g. TSStreamPlayerExtras), it sets:
local IP (if resolved)
Wi-Fi interface name
AppDelegate creates the native 'platform view' container and overlay
platformView(for:parent:showControls:):
Creates a container UIView attached to the Flutter root view
Adds a dedicated child videoHost[viewId], the host UIView for rendering video.
If showControls == true, adds TSPlayerControlsOverlay over the video and wires overlay callbacks back to Flutter via controlChannel (/player_controls).
If showControls == false, adds a minimal back button and wires it to onGalleryBack.
The player starts playback inside the host view
Calls player.playTSFromURL(urlString, in: host) { success, error in ... } on the main thread.
For RTSP/TS streams: this is handled by TSStreamPlayer(VLCKit).
Success/failure is reported back to Flutter
The completion closure invoked in step 5 returns true on first real playback or an error message on failure.
The method channel result responds:
true --> Flutter knows playback started
FlutterError --> Flutter can show an error
Stopping and cleanup
"stop" on tsChannel stops and disposes the player(s).
"removePlatformView" removes the overlay, back button, the host, and the container, and disposes any remaining players.
I am attaching the logs of the app while running.
The actual issue is that when the iOS device is connected to the dashcam's Wi-Fi, iOS uses mobile data for the app's live-streaming traffic, even though Wi-Fi is the intended communication channel.
The iOS device takes approximately 30 seconds to display the first frame of live footage in the app. Despite being connected to the dashcam's Wi-Fi, iOS only settles on the Wi-Fi interface (en0) after multiple attempts, so the live footage appears in the app only after this delay.
So, how can we set up the configuration so that the live footage from the dashcam cameras is displayed within just 2 to 3 seconds on the iOS device?
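For context, the interface pinning we attempt in establishWiFiRoutePriority is along these lines (a simplified sketch, not the exact code; the dashcam IP and port are placeholders):
import Network

let params = NWParameters.tcp
params.requiredInterfaceType = .wifi            // fail fast instead of falling back to cellular
params.prohibitedInterfaceTypes = [.cellular]

// Probe connection to prime the route; the actual stream is still opened by VLCKit.
let probe = NWConnection(host: "192.168.1.254", port: 554, using: params)  // placeholder IP/port
probe.stateUpdateHandler = { state in
    print("dashcam route state: \(state)")
}
probe.start(queue: .main)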
ios.txt
I'm writing a simple app for iOS and I'd like to be able to do some text to speech in it. I have a basic audio manager class with a "speak" function:
import Foundation
import AVFoundation
class AudioManager {
    static let shared = AudioManager()
    var audioPlayer: AVAudioPlayer?

    var isPlaying: Bool {
        return audioPlayer?.isPlaying ?? false
    }

    var playbackPosition: TimeInterval = 0

    func playSound(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            print("Sound file not found")
            return
        }
        do {
            if audioPlayer == nil || !isPlaying {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.currentTime = playbackPosition
                audioPlayer?.prepareToPlay()
                audioPlayer?.play()
            } else {
                print("Sound is already playing")
            }
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }

    func stopSound() {
        if let player = audioPlayer {
            playbackPosition = player.currentTime
            player.stop()
        }
    }

    func speak(text: String) {
        let synthesizer = AVSpeechSynthesizer()
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
And my app shows text in a ScrollView:
ScrollView {
    Text(self.description)
        .padding()
        .foregroundColor(.black)
        .font(.headline)
        .background(Color.gray.opacity(0))
}.onAppear {
    AudioManager.shared.speak(text: self.description)
}
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but not sure what.
I am working on the screen-record function on Apple Vision Pro. When I use a broadcast upload extension, after I click the record button the Xcode console shows the exception:
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
We create and configure the project as follows:
Create an Apple Vision Pro project.
Create a Broadcast Upload Extension target.
Add an App Group for the project target and the extension target, both using the same identifier.
Add the "Main Camera Access" and "Passthrough in Screen Capture" capabilities for all targets.
Add "NSScreenCaptureUsageDescription" and "NSMicrophoneUsageDescription" to the Info.plist.
Add a record button to the view.
Run a debug build on the Apple Vision Pro device; after clicking the record button, the exception above is thrown.
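For reference, the extension's handler is just the template skeleton at this point (a minimal sketch; the exception above is thrown before these callbacks do any real work):
import ReplayKit

class SampleHandler: RPBroadcastSampleHandler {
    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // Broadcast began; the shared App Group identifier is assumed to be configured.
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            break  // handle video frames
        case .audioApp, .audioMic:
            break  // handle audio; the FigAudioSession error surfaces around session setup
        @unknown default:
            break
        }
    }
}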
I’m writing to report a serious usability regression in the iOS 26 Photos app. Folders can still be created and albums can still be assigned to them, but folders can no longer be opened to view the albums they contain. A container that cannot be opened is not a container, and this breaks a fundamental information architecture model that has existed in Photos for well over a decade.
This change disproportionately harms users who maintain large, intentional photo libraries—travel archives, projects, professional work, or long-term personal documentation—where hierarchy and ordering are essential. Search and automated surfacing are not substitutes for deliberate structure. Removing the ability to browse folder → album hierarchy on iOS strips users of control while still exposing the UI for folder creation, which is internally inconsistent.
If this behavior is intentional, it should be clearly documented and the folder UI removed to avoid misleading users. If it is not intentional, it needs urgent correction. At minimum, iOS should retain parity with macOS Photos for basic navigation of folders and albums. This is not a niche request; it is a regression in core functionality.
The sysEx struct in the MIDIUniversalMessage struct has a channel member, but the System Exclusive (7-Bit) Message doesn't have a channel field.
Conversely, the System Exclusive (7-Bit) Message has a "# of bytes" field, but the sysEx struct doesn't have a nrOfBytes, byteCount, or bytesUsed member.
It looks like the channel member of the sysEx struct contains the number of used bytes.
Is this a mistake in the header or did I misunderstand something?
Recently, after the macOS 26.3 (Tahoe) update, the ordering in my Music app has been horrible at best: music disappearing, tracks not aligning with their albums (even when the albums are from different years).
It's created quite a problem, because the disappearing-tracks issue seems to be replicating to iOS devices as well (although track numbering and album association seem stable there). Has anyone else heard of this issue?
Currently, I have successfully used ChannelMap to map hardware input channels and obtained audio data from the hardware device's MIC and OTG inputs. Additionally, I have used ChannelMap to map output channels to freely feed data for playback to each output channel. However, I now have a problem.
I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels for modification?
Has anyone tried to make an ILPD-based AIME file?
When I try, the resulting AIME switches to a USDZ mesh instead of saving the ILPD data.
I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to Bugsnag logs they are:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I attached a log from Xcode organizer matching Bugsnag crash.
mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc], I can see it's hit every time I tap the play or pause button on the lock screen.
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000002370420cc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x0000000198f8ff64 abort + 124 (abort.c:122)
3 libc++abi.dylib 0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4 libc++abi.dylib 0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5 libobjc.A.dylib 0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6 xxxxxxxxxxxxxx 0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7 libc++abi.dylib 0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8 libc++abi.dylib 0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9 CoreFoundation 0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation 0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation 0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices 0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore 0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore 0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx 0x00000001000c0134 main + 308 (main.swift:15)
16 dyld 0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
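For context, our remote-command wiring follows the standard pattern (a hedged sketch, not our exact code; each handler returns a status synchronously):
import AVFoundation
import MediaPlayer

func setUpRemoteCommands(for player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()
    _ = center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    _ = center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}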
Is the crash happening when the app is being terminated?
Thank you!
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. Seems like AnalysisContext is the solution for this, but couldn't find any usage example. So I want to make sure that I'm doing it right or not.
let context = AnalysisContext()
context.contextualStrings = [
    AnalysisContext.ContextualStringsTag("commands"): [
        "set speed level",
        "set jump level",
        "increase speed",
        "decrease speed",
        ...
    ],
    AnalysisContext.ContextualStringsTag("vocabulary"): [
        "speed", "jump", ...
    ]
]
try await analyzer.setContext(context)
With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc.
Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?