Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic

Post · Replies · Boosts · Views · Activity

AVContentKeySession key renewal on AirPlay
Our streaming app uses FairPlay-protected video streams, which previously worked fine when we used AVAssetResourceLoaderDelegate to provide CKCs. We recently migrated to AVContentKeySession, and while everything works as expected during regular playback, we hit an issue with AirPlay. Our CKC has a 120-second expiry, so we renew it by calling renewExpiringResponseData. This triggers the didProvideRenewingContentKeyRequest delegate callback, and we respond with an updated CKC. However, when streaming via AirPlay, both video and audio freeze exactly at the 120-second mark. To validate the issue, I tested with AVAssetResourceLoaderDelegate and found that I can reproduce the same freeze if I do not renew the key. This suggests that AirPlay is not accepting the renewed CKC when using AVContentKeySession. Additional details:
- The issue occurs across different iOS versions and various AirPlay devices.
- The same content plays without issues directly on the device.
- The renewal succeeds and segments continue to load, but playback remains frozen.
- I tried renewing the CKC a bit early (at 100 s), and I also tried setting player.usesExternalPlaybackWhileExternalScreenIsActive = true, but the issue persists.
- We don't use a persistent key.
Is there anything else that needs to be considered for proper key renewal while AirPlaying? Any help on how to fix this, or confirmation that this is a known issue, would be greatly appreciated.
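For reference, a minimal sketch of the renewal flow described above, in case it helps pinpoint a difference (the class and helper names here are assumptions, not the poster's actual code):

```swift
import AVFoundation

final class KeyManager: NSObject, AVContentKeySessionDelegate {
    let session = AVContentKeySession(keySystem: .fairPlayStreaming)

    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        // Initial key request: fetch the CKC from the key server and respond.
        fetchCKC(for: keyRequest) { ckcData in
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
            keyRequest.processContentKeyResponse(response)
        }
    }

    // Invoked after renewExpiringResponseData(for:) is called for an expiring key.
    func contentKeySession(_ session: AVContentKeySession,
                           didProvideRenewingContentKeyRequest keyRequest: AVContentKeyRequest) {
        fetchCKC(for: keyRequest) { ckcData in
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
            keyRequest.processContentKeyResponse(response)
        }
    }

    // Kick off renewal shortly before the 120-second expiry.
    func renew(_ keyRequest: AVContentKeyRequest) {
        session.renewExpiringResponseData(for: keyRequest)
    }

    // Hypothetical key-server round trip:
    // makeStreamingContentKeyRequestData(forApp:contentIdentifier:...) → SPC → server → CKC.
    private func fetchCKC(for request: AVContentKeyRequest,
                          completion: @escaping (Data) -> Void) {
    }
}
```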
4
2
599
Mar ’25
After playing an HDR video on iPhone for a while, the HDR effect disappears and the screen brightness decreases
When I use AVPlayer to obtain video frames (CVPixelBufferRef) from an HDR video and display them on screen with AVSampleBufferDisplayLayer, after a period of time the HDR video content and the screen gradually darken, losing the HDR effect.

Steps to reproduce:
- Create an AVPlayer that loops an HDR video, specifying the video frame format as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange.
- Create a timer that fetches the video frame CVPixelBufferRef at 30 frames per second.
- Use AVSampleBufferDisplayLayer to display the CVPixelBufferRef on the screen.
- Leave the phone untouched and wait for a period of time (such as 40 minutes); the HDR effect disappears and the screen darkens.

Note: you need a physical iPhone running iOS 18.5 or earlier, and the HDR video must play in a loop so that the screen continuously displays HDR content. The wait time varies by device, roughly 20-40 minutes. The same problem occurs in the iPhone Photos app after playing an HDR video in a loop for a long time.

Expected results: when rendering HDR content for a long time, the HDR effect persists and neither the HDR content nor the screen darkens. Current results: after about 20-40 minutes, the HDR effect disappears and the screen darkens.
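As context for the repro, displaying a CVPixelBuffer through AVSampleBufferDisplayLayer means wrapping it in a CMSampleBuffer first; a minimal sketch of that step (the 30 fps timing is an assumption taken from the repro above):

```swift
import AVFoundation
import CoreMedia

// Wrap a decoded pixel buffer so AVSampleBufferDisplayLayer can enqueue it.
func enqueue(_ pixelBuffer: CVPixelBuffer,
             at presentationTime: CMTime,
             on layer: AVSampleBufferDisplayLayer) {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return }

    var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: 30),
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    if let sampleBuffer {
        layer.enqueue(sampleBuffer)
    }
}
```

The HDR metadata rides along in the pixel buffer's attachments, so the wrapping itself shouldn't strip it; given that the Photos app shows the same behavior, the dimming described above appears to be system behavior rather than anything in this path.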
4
0
611
Jul ’25
Failure of AudioUnitSetProperty when using MacCatalyst (works on macOS)
I was trying to set a custom audio output device for generated audio on Mac Catalyst. While using

let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))

kAudioOutputUnitProperty_CurrentDevice is invalid, and status is -10879, indicating an error.

Steps to reproduce:
- Set the run destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
- Set the run destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
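For what it's worth, -10879 is kAudioUnitErr_InvalidProperty, which is consistent with kAudioOutputUnitProperty_CurrentDevice (a macOS AUHAL property) not being recognized by Mac Catalyst's iOS-derived audio stack. A sketch of how the call is normally assembled on macOS, assuming an AUHAL output unit built from scratch:

```swift
import AudioToolbox

// Sketch: create an AUHAL output unit and point it at a specific device (macOS only).
func makeOutputUnit(for deviceID: AudioDeviceID) -> AudioUnit? {
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_HALOutput, // AUHAL: macOS hardware output
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &description) else { return nil }

    var unit: AudioUnit?
    guard AudioComponentInstanceNew(component, &unit) == noErr, let unit else { return nil }

    var device = deviceID
    let status = AudioUnitSetProperty(unit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &device,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    guard status == noErr else {
        AudioComponentInstanceDispose(unit)
        return nil
    }
    return unit
}
```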
4
1
661
Mar ’25
Is AVPlayerViewController not supported on macCatalyst?
When building an application for Mac Catalyst that also builds for iOS, a link error like the one below occurs:

Undefined symbol: _OBJC_CLASS_$_AVPlayerViewController

The [AVPlayerViewController](https://developer.apple.com/documentation/avkit/avplayerviewcontroller?language=objc) documentation seems to say macCatalyst is supported, so what is the reality?

Each version of the environment is as follows:
- Xcode 16.2
- macOS deployment target: macOS 10.15
- iOS deployment target: iOS 13.0

Thank you for your support.
4
0
421
Mar ’25
WideFOV - APMP - Stereo
Does anyone have a template of an Apple Projected Media Profile format description, or a file of a stereo wideFOV video? Use case: I have two compatible cameras that I stereo-sync, and I want to move the projection information from the compatible video to the spatial video that combines them. Every version I can come up with crashes the Apple Vision Pro, and when viewing as spatial in Tahoe I just get a black screen.
4
0
177
Jun ’25
INPlayMediaIntent donated to system, but does not show in Control Center
I donate an INPlayMediaIntent to the system (the donation succeeds), but it does not show up in Control Center. My code is as follows:

let mediaItems = mediaItems.map { $0.inMediaItem }
let intent = if #available(iOS 13.0, *) {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true,
                      playbackQueueLocation: .now,
                      playbackSpeed: nil,
                      mediaSearch: nil)
} else {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true)
}
intent.suggestedInvocationPhrase = "播放音乐" // "Play music"
let interaction = INInteraction(intent: intent, response: nil)
interaction.donate { error in
    if let error = error {
        print("Intent donation failed: \(error.localizedDescription)")
    } else {
        print("Intent donation succeeded ✅")
    }
}
4
0
382
2d
How to reduce CMSampleBuffer volume
Hello, basically I am reading and writing an asset. To simplify: I am just reading the asset and rewriting it into an output video without any modifications. However, I want to add a fade-out effect to the last three seconds of the output video, and I don't know how to do this. So far, before appending each CMSampleBuffer to the output video, I tried reducing its volume using an extension on CMSampleBuffer. I passed 0.4 for testing, aiming to reduce the video's overall volume by 60%. My question is: how can I directly adjust the volume of a CMSampleBuffer? Here is the extension:

extension CMSampleBuffer {
    func adjustVolume(by factor: Float) -> CMSampleBuffer? {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(self) else { return nil }

        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>?
        guard CMBlockBufferGetDataPointer(blockBuffer,
                                          atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &length,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let dataPointer = dataPointer else { return nil }

        // Assumes 16-bit integer PCM samples.
        let sampleCount = length / MemoryLayout<Int16>.size
        dataPointer.withMemoryRebound(to: Int16.self, capacity: sampleCount) { pointer in
            for i in 0..<sampleCount {
                let sample = Float(pointer[i])
                pointer[i] = Int16(sample * factor)
            }
        }
        return self
    }
}
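One alternative worth sketching (not the poster's code): instead of scaling raw samples by hand, AVFoundation can apply the fade itself via an AVAudioMix volume ramp, which an AVAssetReaderAudioMixOutput honors during a read/write pass. A minimal sketch, assuming asset and audioTrack come from the existing reader setup:

```swift
import AVFoundation

// Build an audio mix that ramps volume from 1.0 to 0.0 over the asset's last 3 seconds.
func makeFadeOutMix(for asset: AVAsset, audioTrack: AVAssetTrack) -> AVAudioMix {
    let duration = asset.duration
    let fadeDuration = CMTime(seconds: 3, preferredTimescale: 600)
    let fadeStart = CMTimeSubtract(duration, fadeDuration)

    let parameters = AVMutableAudioMixInputParameters(track: audioTrack)
    parameters.setVolumeRamp(fromStartVolume: 1.0,
                             toEndVolume: 0.0,
                             timeRange: CMTimeRange(start: fadeStart, duration: fadeDuration))

    let mix = AVMutableAudioMix()
    mix.inputParameters = [parameters]
    return mix
}

// Usage (hypothetical reader output from the existing pipeline):
// audioMixOutput.audioMix = makeFadeOutMix(for: asset, audioTrack: track)
```

The samples delivered to the writer then already carry the fade, so no per-buffer Int16 math is needed.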
4
0
404
May ’25
MPNowPlayingInfoCenter nowPlayingInfo throttled
Hello, I have been running into issues setting nowPlayingInfo, specifically when updating information for CarPlay and the CPNowPlayingTemplate. When I start playback for an item, the lock screen information updates as expected, along with the CarPlay now playing information. However, the playing items are books with collections of tracks. When I select a new track (chapter) within the book, I set MPMediaItemPropertyTitle to the new chapter name. This change is reflected correctly on the lock screen, but almost never appears correctly in the CarPlay CPNowPlayingTemplate: the previous chapter title remains and never updates. I see "Application exceeded audio metadata throttle limit." in the debug console fairly frequently, so I figured I need to minimize updates to the nowPlayingInfo dictionary. What I did:
- I store the metadata in a local dictionary and only set values in the main nowPlayingInfo dictionary when they differ from the current value.
- I kick off the nowPlayingInfo update via a Task that initially sleeps for around 2 seconds (not a final value, just for my current testing). If a previous Task is active, it gets cancelled, so that only one update can happen within that time window.
Neither of these has been sufficient. I can switch between different titles entirely and the information updates (including cover art), but when I switch chapters within a title, the MPMediaItemPropertyTitle update keeps getting dropped. I know the value is being set, because it updates correctly on the lock screen. In total I update 12 keys, though with the above changes usually only 2-4 of them actually change with any frequency. I am running out of ideas for satisfying the throttling thresholds while still displaying accurate metadata. I could use some advice. Thanks.
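For comparison, here is a sketch of the coalescing approach described above (the class name and the 2-second window are assumptions):

```swift
import MediaPlayer

// Coalesces nowPlayingInfo writes: repeated set() calls within the window
// collapse into a single dictionary update. Intended to be called on the main thread.
final class NowPlayingUpdater {
    private var pendingInfo: [String: Any] = [:]
    private var updateTask: Task<Void, Never>?

    func set(_ value: Any, for key: String) {
        pendingInfo[key] = value
        scheduleFlush()
    }

    private func scheduleFlush() {
        updateTask?.cancel()
        updateTask = Task { @MainActor in
            try? await Task.sleep(nanoseconds: 2_000_000_000) // coalescing window
            guard !Task.isCancelled else { return }
            var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
            for (key, value) in self.pendingInfo { info[key] = value }
            MPNowPlayingInfoCenter.default().nowPlayingInfo = info
            self.pendingInfo.removeAll()
        }
    }
}
```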
4
1
150
May ’25
PhotosPicker: obtaining the file URL of a selected spatial video
In an Apple Vision Pro project, a picker pops up and I only want to filter for spatial videos. When I select one of the spatial videos, the selection result comes back empty. How can we obtain the video the user selected and get the path and URL of the file? The code is as follows:

PhotosPicker(selection: $selectedItem, matching: .videos) {
    Text("Choose a spatial photo or video")
}

func loadTransferable(from imageSelection: PhotosPickerItem) -> Progress {
    return imageSelection.loadTransferable(type: URL.self) { result in
        DispatchQueue.main.async {
            // guard imageSelection == self.imageSelection else { return }
            print("Loaded selection: \(result)")
            switch result {
            case .success(let url?):
                self.selectSpatialVideoURL = url
                print("Got video URL: \(url)")
            case .success(nil):
                break // Handle the success case with an empty value.
            case .failure(let error):
                print("Spatial video error: \(error)")
                // Handle the failure case with the provided error.
            }
        }
    }
}
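One thing worth trying, sketched under the assumption that the empty result comes from loading URL.self directly: receive the item through a custom Transferable backed by a FileRepresentation, which copies the video into the app sandbox and hands back a usable file URL.

```swift
import SwiftUI
import CoreTransferable
import UniformTypeIdentifiers

// Hypothetical wrapper type; the picker item is imported as a file copy we own.
struct PickedVideo: Transferable {
    let url: URL

    static var transferRepresentation: some TransferRepresentation {
        FileRepresentation(contentType: .movie) { video in
            SentTransferredFile(video.url)
        } importing: { received in
            let destination = URL.temporaryDirectory
                .appendingPathComponent(received.file.lastPathComponent)
            try? FileManager.default.removeItem(at: destination)
            try FileManager.default.copyItem(at: received.file, to: destination)
            return PickedVideo(url: destination)
        }
    }
}

// Usage: selectedItem?.loadTransferable(type: PickedVideo.self) { result in ... }
```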
4
0
126
May ’25
SpeechTranscriber not supported
I've tried SpeechTranscriber on many of my devices (from the iPhone 12 series to the iPhone 17 series) without issues. However, the SpeechTranscriber.isAvailable value is false on my iPhone 11 Pro. https://developer.apple.com/documentation/speech/speechtranscriber/isavailable I'm curious why the iPhone 11 Pro is not supported. Is the whole iPhone 11 series unsupported intentionally, or is there a problem with my specific device? I've also checked supportedLocales, and the value is an empty array. https://developer.apple.com/documentation/speech/speechtranscriber/supportedlocales
4
0
593
4w
Why is AVAudioEngine input giving all zero samples?
I am trying to get access to raw audio samples from the mic. I've written a simple example application that writes the values to a text file; below is my sample application. All the input samples from the buffers delivered to the input tap are zero. What am I doing wrong? I did add the Privacy - Microphone Usage Description key to my application target's properties, and I am allowing microphone access when the application launches. I do find it strange that I have to grant permission every time, even though in Settings > Privacy my application is listed as one of the applications allowed to access the microphone.

class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use input node's current format
        #endif

        setupTextFile()

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self?.processAudioBuffer(buffer: buffer)
        }

        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)

        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }

        if allZero {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels.")
        }

        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle?.write(data)
        }
    }
}
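One thing the posted code never does (assumption: it isn't done elsewhere in the app) is configure and activate AVAudioSession; on iOS an input tap can deliver silent buffers if the session isn't in a record-capable category. A minimal sketch:

```swift
import AVFoundation

// Configure a record-capable session before starting the engine (iOS only).
func configureSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
}
```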
4
0
928
Jan ’25
MusicKit API returns 500 Internal Server Error despite valid JWT and setup
My app is properly configured with MusicKit. I've generated a JWT using my valid credentials (Team ID, Key ID, private key), and I've ensured the time settings are correct via NTP. When I call:

https://api.music.apple.com/v1/catalog/jp/search?term=ado&types=songs

I consistently receive a 500 Internal Server Error. The JWT is generated using ES256 with valid iat and exp values. I've confirmed the token decodes properly using jwt.io, and it's passed via the Authorization: Bearer header. Things I've confirmed:
- Key ID, Team ID, and private key are correct
- App ID is configured with the MusicKit capability
- JWT is generated and signed correctly
- macOS time is synced via NTP
- Used both curl and Python to test, with the same result

Is there anything else I should check in the Apple Developer Console (App ID, certificates, provisioning profile)? Or could this be a backend issue on Apple's side? Any guidance would be appreciated.
4
0
408
Nov ’25
"No signal" message when connecting LG tv via HDM
Hi everyone, I am currently on macOS Tahoe (26.1), and for some reason my Mac is not connecting via HDMI. To be accurate: it is connecting, and the LG TV shows up in the Displays settings, but no image appears on it, and I have no idea why. This used to work; I've tried this cable before with the same exact TV. The cable is a basic Amazon Basics HDMI one. Allow me to pre-empt the replies a little: terminal commands and other advanced suggestions are welcome, whereas basic questions like "have you connected it right" are just a waste of time.
4
0
664
Oct ’25
MPRemoteCommandCenter not updating play/pause button to proper state on iOS
So I'm using AVAudioEngine. When playing audio I become the 'Now Playing' app using the MPNowPlayingInfoCenter/MPRemoteCommandCenter APIs. When configuring MPRemoteCommandCenter I add a play/pause command target via -addTargetWithHandler: on the togglePlayPauseCommand property. I also have a play/pause button in my app's UI. When I pause playback from my app's UI (which means I'm the active app, in the foreground), I pause the AVAudioPlayerNode I'm using with AVAudioEngine; I do not stop, reset, etc. the engine, I only pause the player node. My thought process: the user just pressed pause, and it is very likely they will hit play to resume playback in the near future, because my app is in the foreground. If my app moves to the background and I receive a memory warning, I presume it would make sense to tear down the engine or pause it; perhaps I'm wrong about this.

When I initially hit the play button in my app's UI, I also activate my AVAudioSession. I do this in a high-priority NSOperation, since the documentation warns that "we recommend that applications not activate their session from a thread where a long blocking operation will be problematic."

So now I'm playing, and I hit pause from my app's UI. Then I quickly bring up the Now Playing center: I see I'm the Now Playing app, but the play/pause button shows the pause icon instead of the play icon, even though I'm in the paused state. I do set MPNowPlayingInfoCenter's playbackState to MPNowPlayingPlaybackStatePaused when I pause. Not surprisingly, this doesn't work; the documentation states it is for macOS only.

So the only way to get MPRemoteCommandCenter to show the play image for the play/pause button is to deactivate my AVAudioSession when I pause playback? Since I change the session's active state in an NSOperation (per the documentation recommendation above), the play/pause toggle in the remote command center won't update immediately, because it happens on another thread. It feels inappropriate for a play/pause button to wait on an NSOperation deactivating the audio session before updating its UI when I already know my play/paused state; it should update right away, like the button in my app does. Wouldn't it be nicer to just honor MPNowPlayingInfoCenter's playbackState property on iOS too? If I'm no longer the Now Playing app or the active audio session, it doesn't matter, since I'm not in the Now Playing UI; just ignore it.

Also, is it recommended that I deactivate my audio session explicitly every time the user pauses audio in my app (while I'm in the foreground)? When I do deactivate the audio session I get an error, AVAudioSessionErrorCodeIsBusy (but the button in the Now Playing center updates to the proper image). I do this:

-(void)pause {
    [self.playerNode pause];
    [self runOperationToDeactivateAudioSession];

    // This does nothing on iOS:
    MPNowPlayingInfoCenter *nowPlayingCenter = [MPNowPlayingInfoCenter defaultCenter];
    nowPlayingCenter.playbackState = MPNowPlayingPlaybackStatePaused;
}

In -runOperationToDeactivateAudioSession I get the AVAudioSessionErrorCodeIsBusy. According to the documentation: "Starting in iOS 8, if the session has running I/Os at the time that deactivation is requested, the session will be deactivated, but the method will return NO and populate the NSError with the code property set to AVAudioSessionErrorCodeIsBusy to indicate the misuse of the API." So pausing the player node isn't enough to meet the deactivation criteria; I guess I have to pause or stop the audio engine itself. I could probably wait for a scene-went-to-background notification or similar before deactivating my audio session (which is async, so the button may not update to the correct image in time). This seems like a lot of code to write just to get a play/pause toggle to update, especially in an iPad multi-window scene environment.

What's the recommended approach? Should I always pause the AudioEngine instead of just the player node? Should I always explicitly deactivate my audio session when the user pauses playback from my app's UI, even when I'm in the foreground? I personally like the idea of just being able to set [MPNowPlayingInfoCenter defaultCenter].playbackState = MPNowPlayingPlaybackStatePaused; but maybe that's because it would make things easier on me. This does feel overcomplicated, though. If anyone can share some tips on how I should handle this, I'd appreciate it.
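For discussion's sake, a minimal sketch of the deactivation path the quoted documentation implies, assuming AVAudioEngine is the only running I/O: pause the engine itself (not just the player node) so no I/O is running, then deactivate off the main thread.

```swift
import AVFoundation

// Sketch: pausing the engine stops its I/O, which is what lets
// setActive(false) succeed without AVAudioSessionErrorCodeIsBusy.
func pauseAndDeactivate(engine: AVAudioEngine, playerNode: AVAudioPlayerNode) {
    playerNode.pause()
    engine.pause()
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try AVAudioSession.sharedInstance().setActive(false,
                                                          options: [.notifyOthersOnDeactivation])
        } catch {
            print("Deactivation failed: \(error)")
        }
    }
}
```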
4
0
688
Feb ’25
TV A1625 Using 3× More CPU After tvOS 26 Update
Hi everyone, after updating my Apple TV HD (model A1625) to tvOS 26, I've noticed a significant spike in CPU usage, up to 3× higher than before the update: it goes from around 40% to 120%.

Model: Apple TV HD (A1625)
tvOS version: 26 (stable release) and the 26.1 beta
Impact: the app downgrades its stream quality due to lack of CPU power

If anyone else is experiencing this, please share your findings or workarounds. I would love to hear from Apple engineers or other developers whether this is a known regression or whether there's a recommended fix. Thanks!
4
0
191
Oct ’25
Missing Depth Frames When Recording with AVCaptureVideoDataOutputSampleBufferDelegate/AVCaptureDataOutputSynchronizerDelegate and AVAssetWriter
I've tried both AVCaptureVideoDataOutputSampleBufferDelegate (captureOutput) and AVCaptureDataOutputSynchronizerDelegate (dataOutputSynchronizer), but the number of depth frames and saved timestamps is significantly lower than the number of frames in the .mp4 file written by AVAssetWriter. In my code, I save:
- timestamps for each frame to a metadata file
- depth frames to a binary file
- video to an .mp4 file

If I record a 4-second video at 30 fps, the .mp4 file correctly plays for 4 seconds, but the number of stored timestamps and depth frames is much lower, around 70 frames instead of the expected 120. Does anyone know why this mismatch happens?

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Read all outputs; only work on synced pairs.
    guard let syncedDepthData: AVCaptureSynchronizedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }

    if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped {
        return
    }

    let depthData = syncedDepthData.depthData
    let depthPixelBuffer = depthData.depthDataMap
    let sampleBuffer = syncedVideoData.sampleBuffer
    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        return
    }

    addToPreviewStream?(CIImage(cvPixelBuffer: videoPixelBuffer))

    if !canWrite() {
        return
    }

    // Extract the presentation timestamp (PTS) from the sample buffer.
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

    // sessionAtSourceTime is the first buffer we will write to the file.
    if self.sessionAtSourceTime == nil {
        // Make sure we don't start recording until the buffer reaches the correct time
        // (the buffer is always behind; this fixes the difference in time).
        guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
        self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
        self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
    }

    if self.videoWriterInput!.isReadyForMoreMediaData {
        self.videoWriterInput!.append(sampleBuffer)
        self.videoTimestamps.append(
            Timestamp(
                frame: videoTimestamps.count,
                value: timestamp.value,
                timescale: timestamp.timescale
            )
        )
        let ddm = depthData.depthDataMap
        depthCapture.addDepthData(pixelBuffer: ddm, timestamp: timestamp)
    }
}
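A possible angle, offered as an assumption rather than a diagnosis: the synchronizer only emits a matched pair when both outputs have data, so depth frames that are dropped (or natively delivered below 30 fps) thin out the synced callbacks relative to the video the writer receives. Settings worth checking, sketched below:

```swift
import AVFoundation

// Sketch: configure the outputs so late frames are queued rather than silently dropped.
let depthDataOutput = AVCaptureDepthDataOutput()
depthDataOutput.isFilteringEnabled = true             // temporally smoothed, gap-filled depth
depthDataOutput.alwaysDiscardsLateDepthData = false   // keep late depth instead of dropping it

let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.alwaysDiscardsLateVideoFrames = false

// Also worth inspecting: the device's native depth rate, which may be below 30 fps.
// (.builtInLiDARDepthCamera is just an example; use whichever device the session uses.)
if let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back),
   let depthFormat = device.activeDepthDataFormat {
    print("Supported depth frame rates: \(depthFormat.videoSupportedFrameRateRanges)")
}
```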
3
0
447
Feb ’25