AVAudioFormat has no Swift concurrency annotations, but the documentation states "Instances of this class are immutable."
I have therefore always assumed it is safe to pass AVAudioFormat instances across concurrency domains. Is that actually the case? If so, can it be marked as Sendable? Am I missing something?
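In the meantime I've been wrapping the format in a small @unchecked Sendable container rather than annotating the class itself. A minimal sketch, which relies entirely on the documented immutability actually holding:

import AVFoundation

// Sketch: relies on the documented immutability of AVAudioFormat.
// If the class were ever mutated internally, this wrapper would not be safe.
struct SendableAudioFormat: @unchecked Sendable {
    let format: AVAudioFormat
}

func handOff(_ format: AVAudioFormat) {
    let boxed = SendableAudioFormat(format: format)
    Task.detached {
        // Reading immutable properties from another executor.
        print(boxed.format.sampleRate, boxed.format.channelCount)
    }
}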
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
Hi,
I would like to use macro mode with a custom camera built on AVCaptureDevice in my project. This feature would automatically adjust and switch between lenses to get a clear close-up image. It looks like this feature is not available and there are no public APIs from Apple to achieve macro mode. Is there a way to get this functionality in a custom camera without losing image quality? Please let me know if this is possible.
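For reference, the closest I've come is using one of the back virtual devices and letting the system choose the constituent camera, which is where I understand the automatic macro switch would have to happen. A sketch of that attempt (this is my assumption, not a documented macro API):

import AVFoundation

// Sketch: my assumption is that the automatic ultra-wide "macro" switch can
// only happen on a virtual device, where the system picks the constituent
// camera on your behalf.
func makeMacroCapableDevice() -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back)
    guard let device = discovery.devices.first else { return nil }
    // minimumFocusDistance (in millimetres) at least reports how close the
    // currently active constituent camera can focus.
    print("Minimum focus distance: \(device.minimumFocusDistance) mm")
    return device
}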
Thank you,
Adil Thamarasseri
I am building an app for macOS and trying to implement code to add songs to a library playlist (included below). The issue is that if I use MusicKit to load a user's library playlists, the playlist ID (which is just a string of numbers) does not work with the Add Tracks to a Library Playlist endpoint of the Apple Music API. If I retrieve the playlists from the Apple Music API and use that playlist ID (which is different from the ID I get from MusicKit), my code works fine and adds the song to the playlist. The problem is that the Apple Music API does not return all of the library playlists that MusicKit does, and it also does not provide artwork for playlists that have the collage of album covers, so I would prefer to use MusicKit to get the playlists.
I have also tested retrieving a single playlist via the Apple Music API using the playlist ID from MusicKit, and it does not work; I get an error that the resource cannot be found. Since this is a macOS app, I cannot use MusicKit to add songs to library playlists.
Does anyone know a way to resolve this, or a possible workaround? Ideally I want to use MusicKit to get the library playlists and then have some way to use the playlist ID to add songs to that playlist. Below is my code for adding a song to a playlist using the Apple Music API, which works correctly only if I originally get the library playlist's ID from a playlist retrieved via the Apple Music API.
Also, does anyone know why the playlist IDs are not universal and differ between MusicKit and the Apple Music API? For songs and tracks it does not seem to matter whether I use MusicKit or the Apple Music API; the IDs are in the correct format for the Apple Music API and work with my code. Thanks everyone for any and all help!
func addToPlaylist(songs: [Track], playlist: Playlist, alert: Binding<AlertItem?>) async {
    let tracks = AppleMusicPlaylistPostRequestBody(data: songs.compactMap {
        AppleMusicPlaylistPostRequestItem(id: $0.id.rawValue, type: "songs") // or "library-songs"
    })
    let playlistID = playlist.id

    // Build the request URL for adding a song to a playlist
    guard let url = URL(string: "https://api.music.apple.com/v1/me/library/playlists/\(playlistID)/tracks") else {
        alert.wrappedValue = AlertItem(title: "Error", message: "Invalid URL for the playlist.")
        return
    }

    // Authorization Header
    guard let musicUserToken = try? await MusicUserTokenProvider().getUserMusicToken() else {
        alert.wrappedValue = AlertItem(title: "Error", message: "Unable to retrieve Music User Token.")
        return
    }

    do {
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("Bearer \(musicUserToken)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        let encoder = JSONEncoder()
        let data = try encoder.encode(tracks)
        request.httpBody = data

        let musicRequest = MusicDataRequest(urlRequest: request)
        let musicRequestResponse = try await musicRequest.response()

        // Check if the request was successful (status 201)
        if musicRequestResponse.urlResponse.statusCode == 201 {
            alert.wrappedValue = AlertItem(title: "Success", message: "Song successfully added to the playlist.")
        } else {
            print("Status Code: \(musicRequestResponse.urlResponse.statusCode)")
            print("Response Data: \(String(data: musicRequestResponse.data, encoding: .utf8) ?? "No Data")")

            // Attempt to decode the error response into the AppleMusicErrorResponse model
            if let appleMusicError = try? JSONDecoder().decode(AppleMusicErrorResponse.self, from: musicRequestResponse.data) {
                let errorMessage = appleMusicError.errors.first?.detail ?? "Unknown error occurred."
                alert.wrappedValue = AlertItem(title: "Error", message: errorMessage)
            } else {
                alert.wrappedValue = AlertItem(title: "Error", message: "Failed to add song to the playlist.")
            }
        }
    } catch {
        alert.wrappedValue = AlertItem(title: "Error", message: "Network error: \(error.localizedDescription)")
    }
}
Following WWDC 2023 "Support HDR images in your app", I'm trying to save 48-megapixel ProRAWs (taken on an iPhone 14 Pro Max) as HDR HEICs to the Photo Library. After processing the ProRAW file using CIRAWFilter, whether I use CIContext.heif10Representation() or convert to a CGImage, then UIImage, and use UIImage.heicData(), I get photos that behave oddly in the Photo Library. They appear too dark, and visibly brighten when first viewed, but more problematic is that the photos brighten a great deal more when you edit them with the Photos editor. This is the behavior when using the itur_2100_PQ color space, but itur_2100_HLG behaves similarly, except that it gets dramatically darker when edited. This behavior occurs whether CIRAWFilter.extendedDynamicRangeAmount is set to 0.0, or 2.0, or not set at all.
So what am I doing wrong? Here is a minimal iOS app -- well, just the ContentView -- that demonstrates the issue. You also need a .dng ProRAW file included in the project directory named test.dng. I'd love to include such a file, but I can't.
Be prepared for a multi-second wait when you save the photo.
import SwiftUI
import Photos

struct ContentView: View {
    let context = CIContext()
    let hdrColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!

    var body: some View {
        VStack(spacing: 100) {
            Button("Save Photo From CGImage/UIImage") {
                savePhotoFromUIImage()
            }
            Button("Save Photo From CIImage") {
                savePhotoDirectFromCIImage()
            }
        }.padding(60)
    }

    // Convert RAW with CIRAWFilter to CIImage, then convert to CGImage, then UIImage, then HEIF
    private func savePhotoFromUIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            guard let outputCGImage = context.createCGImage(ciImage, from: ciImage.extent, format: .RGB10, colorSpace: hdrColorSpace) else { return }
            let uiImage = UIImage(cgImage: outputCGImage)
            if let heicData = uiImage.heicData() {
                saveHEIFPhotoToLibrary(imageData: heicData)
            } else {
                print("Failed to convert UIImage to HEIC")
            }
        }
    }

    // Convert RAW with CIRAWFilter to CIImage, then to HEIF
    private func savePhotoDirectFromCIImage() {
        if let ciImage = processRAW(url: Bundle.main.url(forResource: "test", withExtension: "dng")!) {
            do {
                let heif = try context.heif10Representation(of: ciImage, colorSpace: hdrColorSpace)
                saveHEIFPhotoToLibrary(imageData: heif)
            } catch {
                print("Failed to get HEIF representation from CIContext")
            }
        }
    }

    private func processRAW(url: URL) -> CIImage? {
        guard let coreRawFilter = CIRAWFilter(imageURL: url) else { return nil }
        coreRawFilter.extendedDynamicRangeAmount = 2.0 // The issue persists whether this is not set, or set to 0, or set to, say, 2.0
        guard let ciImage = coreRawFilter.outputImage else { return nil }
        return ciImage
    }

    private func saveHEIFPhotoToLibrary(imageData: Data) {
        PHPhotoLibrary.shared().performChanges({
            let creationRequest = PHAssetCreationRequest.forAsset()
            let options = PHAssetResourceCreationOptions()
            creationRequest.addResource(with: .photo, data: imageData, options: options)
        }) { success, error in
            if let error = error {
                print("Error saving photo: \(error.localizedDescription)")
            } else {
                print("Photo saved.")
            }
        }
    }
}
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Photos and Imaging
Core Graphics
Core Image
EDR
Hi Apple Developer Forums,
I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.
What’s slow (important detail)
I’m measuring the time for:
let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage
This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.).
Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.
Warm-up attempts using bundled DNG
I tried to "warm up" early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before the user takes a photo:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.
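For completeness, the warm-up path is roughly this (a sketch; "warmup.dng" stands in for the bundled file):

import CoreImage
import Foundation

// Sketch of the warm-up run on camera-screen entry. "warmup" is a
// placeholder name for the DNG bundled with the app.
func warmUpRAWPipeline() {
    guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
          let data = try? Data(contentsOf: url),
          let filter = CIRAWFilter(imageData: data, identifierHint: "com.adobe.raw-image") else { return }
    let start = CFAbsoluteTimeGetCurrent()
    _ = filter.outputImage // this access is the expensive part on first use
    print("Warm-up outputImage took \(CFAbsoluteTimeGetCurrent() - start)s")
}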
Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing.
However:
In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.
Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)
Thanks for any guidance or experience sharing!
Hi everyone, I'm working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I'd like access to low-level audio analysis data, ideally a waveform / spectrogram or beat grid, while the track is playing.
I've noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement.
My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for, similar to the "DJ with Apple Music" initiative, to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I've searched the docs and forums but only see standard MusicKit playback APIs, which don't appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
Topic:
Media Technologies
SubTopic:
Audio
Is there any way to detect the status of the Show When Muted and Show on Skip Back device settings in code?
Hello,
As far as I know, and in all of my testing, there is no way for a user or a developer to change the frame rate of the video output on iPadOS. If you connect an iPad via a USB hub or a USB-to-HDMI adaptor and then connect it to an external monitor, it will output at 59.94 fps.
I have a video app where a user monitors live video at 25 fps and 30 fps. They often output to an external display, and there are times when the external display will stutter due to the mismatch in frame rate, i.e. using 25 fps and outputting at 59.94 fps.
I thought it was impossible to change the video output frame rate, but then in v3.1 of the Blackmagic Camera App I saw an interesting change in their release notes:
'Support for HDMI Monitoring at Sensor Rate and Resolution'
This means there must be some way to modify it; I'm not sure whether this is done via a private API that Apple has allowed Blackmagic to use. If so, how can we access this, or is there an undocumented way to enable it?
Thanks!
I noticed that Instagram and iMessage support receiving shared lyrics from Apple Music. Specifically, when users long-press lyrics, a sheet pops up showing iMessage and Instagram. Clicking on either app generates a beautifully formatted lyrics image.
I've looked through MusicKit documentation but couldn't find any related APIs.
How can I implement this functionality in my app?
Hello,
I'm investigating an issue with LL-HLS playback using AVPlayer, specifically during DVR Live seeking (seeking to a past time).
I noticed that in certain seeking scenarios, AVPlayer sends a Blocking Playlist Reload request that includes the _HLS_msn parameter but is missing the _HLS_part parameter.
While I understand this is compliant with the HLS spec, I would like to know the specific criteria AVPlayer uses to decide when to drop the _HLS_part parameter. Does AVPlayer intentionally omit the part info when it determines that loading a specific partial segment is unnecessary during a seek operation?
Clarification on this behavior would help us greatly in debugging our stream delivery.
Thanks in advance.
It's been an ask for a few years, and I'm wondering if there are any plans, or whether the '26 SDKs/tools allow Apple Music to work in the Simulator. I develop for Vision Pro, so the usual 'fix' of running on the device is a bit of a hard ask.
At the very least, a small sample library that works in the Simulator would be welcome (similar to how Photos works).
Cheers
Safari is supposed to support animated AVIF images since version 16, but the ones I've tested perform very poorly, even on an M4 Mac Mini running Sequoia 15.1.1.
I believe Safari delegates decoding to the operating system itself, so this issue also happens in Live Preview in the Finder when I try to preview a file.
Sample file here: https://s3.us-west-2.amazonaws.com/cdn.paintera.org/test/sample.avif
322KB file, 5 seconds long, 12fps
This plays perfectly in Chrome on macOS, but is slow and laggy in Safari and Live Preview (it takes about 6.5 seconds to finish the 5-second video).
Does anyone know how to fix this or workaround this issue?
Topic:
Media Technologies
SubTopic:
Video
Hi guys,
I'm using ShazamKit in my iOS app and successfully capturing the currently playing track details when using the device's (iPhone) built-in mic.
When I test with AirPods, though, my app cannot both send the output through the AirPods and capture that same output with the AirPods mic for ShazamKit recognition.
I believe this must be possible, because the ShazamKit widget on iOS can do this.
Is it restricted in some way for third party apps?
If not, I'd appreciate some guidance on how to achieve this in Swift code.
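For reference, this is the audio session setup I'd expect to need, though I haven't confirmed it's sufficient for ShazamKit over AirPods (a sketch):

import AVFoundation

// Sketch: .playAndRecord so the AirPods can be both the output route and the
// input, with the Bluetooth options enabled. Not verified end-to-end with
// ShazamKit recognition.
func configureSessionForAirPods() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)
}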
Thanks in advance.
Hello!
I have two mics connected to a USB hub, which is then connected to my iPad. Both mics are part of the audio session's list of available inputs.
The problem is that regardless of which mic I select in my app (using setPreferredInput() on the audio session), the audio keeps coming from the mic that was last connected to the USB-hub.
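For reference, the selection code is essentially this (a sketch):

import AVFoundation

// Sketch: pick the desired port from availableInputs and make it the
// preferred input on the shared audio session.
func select(portNamed name: String) throws {
    let session = AVAudioSession.sharedInstance()
    guard let port = session.availableInputs?.first(where: { $0.portName == name }) else { return }
    try session.setPreferredInput(port)
}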
Does anyone know if this is a limitation in iPadOS/iOS?
Topic:
Media Technologies
SubTopic:
Audio
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds). I am sure they are correct.
Two permissions have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error)
            completion(success, error)
        }
    }
}
guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "can't find Live Photo")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        xxxxx
    } else {
        xxxxx
    }
}
Really need help, thanks
How does a third party developer go about supporting the new Enhanced Dialogue option for video apps in tvOS 18?
If an app is using the standard AVPlayerViewController, I had assumed it would be a simple-ish matter of building against the tvOS 18 SDK, but apparently not: the options don't appear, not even greyed out.
Before you post "the camera doesn't work on the Simulator": that's no longer true. I've built a solution that makes the Simulator believe there's actual camera hardware connected, allowing users to stream the macOS camera to the iOS Simulator (for more info, see RocketSim's documentation: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv)
Now, it works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into:
Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.)
My question:
How does this view controller determine whether scanning is available?
Is there a certain capability the available AVCaptureDevice's need to support maybe?
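For context, the public availability checks I know of are the class-level ones sketched below; what I can't tell is what they actually inspect under the hood (hardware capability lists, specific AVCaptureDevice features, etc.):

import VisionKit

// Sketch: the documented availability checks on DataScannerViewController.
// isSupported reflects device/OS capability; isAvailable also accounts for
// runtime state such as camera restrictions.
@MainActor
func canScan() -> Bool {
    DataScannerViewController.isSupported && DataScannerViewController.isAvailable
}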
Any direction would be helpful for me to make this work for developers, making them build apps faster!
The presentation "create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says " If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used".
Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual.
For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup.
HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boiler-plate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier - a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation is much easier.
The dext works just fine, as far as we can ascertain, just as well as a HAL plugin.
So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
Hey,
I have a camera app that captures a ProRAW photo and then runs a few Core Image filters before saving it to the device as a HEIC. However, I'm finding that capturing at 48 MP is rather slow. Testing a minimal pipeline on an iPhone 16 Pro:
Shutter press => file received in output: 1.2 ~ 1.6s
CIRawFilter created using photo file representation then rendered to context, without any filters: 0.8s ~ 1s
Saving to device ~0.15s
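For reference, the minimal pipeline I'm timing looks roughly like this (a sketch; names are illustrative):

import CoreImage

let ciContext = CIContext()

// Sketch of the minimal pipeline being timed: ProRAW file data from the
// AVCapturePhoto, through CIRAWFilter, rendered without any extra filters.
func render(proRAWData: Data) -> CGImage? {
    guard let rawFilter = CIRAWFilter(imageData: proRAWData, identifierHint: nil),
          let image = rawFilter.outputImage else { return nil }
    return ciContext.createCGImage(image, from: image.extent)
}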
Is this the expected time for capture and processing? The native Camera app seems to save the images within half a second. I'm using QualityPrioritization.balanced and the highest resolution available, which is 48 MP.
Would using the CIRAWFilter with the pixelBuffer from the photo output be faster? I tried it but couldn't get it to output an image. Are there any other things I could try to speed this up? Is it possible to capture at 24 MP instead?
Thanks,
Alex
I have a main app that saves preferences to UserDefaults.standard. One of these preferences, isRawOn, can be toggled by the user:
UserDefaults.standard.set(self.isRawOn, forKey: "isRawOn")
Now, I have a LockedCameraCaptureExtension that needs to know whether that setting is on or off at launch. Also, if it's toggled within the extension, the main app should know about it on the next launch.
The main app and the extension run in separate containers, and their preferences are not shared for privacy reasons.
Apple mentions using the appContext of CameraCaptureIntent, but I'm not sure how the above scenario is possible through that... unless I am missing something.
Apple Reference
What I have for CameraCaptureIntent:
import AppIntents

@available(iOS 18, *)
struct LaunchMyAppControlIntent: CameraCaptureIntent {
    typealias AppContext = MyAppContext

    static let title: LocalizedStringResource = "LaunchMyAppControlIntent"
    static let description = IntentDescription("Capture photos with MyApp.")

    @MainActor
    func perform() async throws -> some IntentResult {
        .result()
    }
}
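For what it's worth, my reading of the docs is that the shared state would flow through the intent's app context roughly like this (a sketch; MyAppContext is my own Codable payload, not an Apple type):

import AppIntents

// Hypothetical shared payload (my own type, referenced by the typealias above).
struct MyAppContext: Codable, Sendable {
    var isRawOn: Bool = false
}

// Main app: push the current preference into the shared context.
func publishRawPreference(_ isRawOn: Bool) async {
    try? await LaunchMyAppControlIntent.updateAppContext(MyAppContext(isRawOn: isRawOn))
}

// Extension (or the main app on next launch): read it back.
func readRawPreference() async -> Bool {
    if let context = try? await LaunchMyAppControlIntent.appContext {
        return context.isRawOn
    }
    return false
}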
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
iOS
Photos and Imaging
PhotoKit
AVFoundation