After upgrading to iOS 18.4, I'm no longer able to establish an AirPlay v1 connection to an audio system. The symptom is that the AirPlay route picker just spins while trying to connect, and it eventually gives up.
I tested this on an iPhone 14, connecting to a HomePod, an AirPort Express, an Apple TV, and a WiiM Pro. If I connect with AirPlay v2 instead (e.g. using Apple Music), the connection succeeds and audio plays.
I'm the developer of an app that plays audio over AirPlay while also recording. My app has to use AirPlay v1 because AVAudioSession doesn't allow the .longFormAudio routing policy when the category is .playAndRecord. This issue is a real pain, as it means my app is suddenly broken for many thousands of users.
Is anyone else seeing this issue? Any suggestions for a workaround?
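For context, a minimal sketch of the constraint described above (an illustration only, not a reproduction of the iOS 18.4 issue): combining .playAndRecord with the .longFormAudio route-sharing policy is expected to throw, which is why the app falls back to AirPlay v1.

import AVFoundation

// Sketch: attempting the combination the post describes. On current SDKs
// this setCategory call is expected to throw, because the .longFormAudio
// route-sharing policy is not supported with .playAndRecord.
func demonstrateCategoryConstraint() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord,
                                mode: .default,
                                policy: .longFormAudio,
                                options: [])
        print("Unexpected: .playAndRecord accepted .longFormAudio")
    } catch {
        // Typically lands here, hence the dependence on AirPlay v1.
        print("setCategory failed: \(error)")
    }
}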
                    
                  
                    
                      I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
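One avenue to explore, assuming the virtual device is an AudioServerPlugIn driver you control (a sketch, not a verified fix): read the hardware device's clock domain via kAudioDevicePropertyClockDomain, and have the virtual device publish the same domain so the HAL treats the two devices as sharing a clock.

import CoreAudio

// Sketch: query the clock domain of an audio device from the host side.
// Devices that report the same non-zero clock domain are treated as
// being driven by the same clock.
func clockDomain(of deviceID: AudioObjectID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyClockDomain,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var domain: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &domain)
    return status == noErr ? domain : nil
}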
                    
                  
                
                    
Hello!
I am working on an app connected to an external streamer.
I would like to display the currently playing song on the Lock Screen.
I tried updating the information in MPNowPlayingInfoCenter, but I need to play a sound on my iPhone for the controls to be displayed.
Is there a way to do this without playing a sound?
If not, is playing a silent sound the only solution? And is that approach validated by Apple? :-/
Thank you,
Frederic
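For reference, a minimal sketch of publishing Now Playing metadata and remote commands (function name and parameters are illustrative; the system generally only surfaces Lock Screen controls for the app that is currently the "now playing" app, which is why some form of active playback is usually required):

import MediaPlayer

// Sketch: publish metadata and enable at least one remote command.
func publishNowPlaying(title: String, artist: String, duration: TimeInterval) {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]

    let commands = MPRemoteCommandCenter.shared()
    commands.playCommand.isEnabled = true
    _ = commands.playCommand.addTarget { _ in .success }
    commands.pauseCommand.isEnabled = true
    _ = commands.pauseCommand.addTarget { _ in .success }
}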
                    
                  
                
                    
In an m3u8 manifest, audio EXT-X-MEDIA tags usually contain a CHANNELS attribute with the audio channel count, like so:
#EXT-X-MEDIA:TYPE=AUDIO,URI="audio_clear_eng_stereo.m3u8",GROUP-ID="default-audio-group",LANGUAGE="en",NAME="stream_5",AUTOSELECT=YES,CHANNELS="2"
Is it possible to get this info from AVPlayer, AVMediaSelectionOption, or some related API?
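One route that may work on iOS 15 and later (a sketch; whether it surfaces exactly the CHANNELS attribute is not guaranteed): AVAssetVariant exposes per-variant audio attributes, including a channel count for a specific audio rendition.

import AVFoundation

// Sketch: print the channel count reported for each (variant, audio option) pair.
func printAudioChannelCounts(for asset: AVURLAsset) async throws {
    let variants = try await asset.load(.variants)
    guard let group = try await asset.loadMediaSelectionGroup(for: .audible) else { return }

    for variant in variants {
        for option in group.options {
            if let channels = variant.audioAttributes?
                .renditionSpecificAttributes(for: option)?
                .channelCount {
                print(option.displayName, channels)
            }
        }
    }
}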
                    
                  
                
                    
                      Hello,
I have an existing AUv3 instrument plugin. In the plug-in, users can access files (audio files, song projects) via a UIDocumentPickerViewController.
In Logic Pro (and some other hosts, but not all), the document picker is unable to receive touches while a keyboard case is attached to the iPad.
Removing the case (this is an Apple-brand iPad case) allows the interactions to resume and lets me pick files in the usual way.
One of my users reports this non-responsive behavior occurs even after disconnecting their keyboard.
I have fiddled with entitlements all day and have determined that is not the issue, since disconnecting the keyboard appears to fix it every time for me.
Here is my (very boilerplate) presentation code:
guard let type = UTType("com.my.type") else {
    return
}

let fileBrowser = UIDocumentPickerViewController(forOpeningContentTypes: [type])
fileBrowser.overrideUserInterfaceStyle = .dark
fileBrowser.delegate = self
fileBrowser.directoryURL = myFileFolderURL()

self.present(fileBrowser, animated: true) {
    // completion handler (empty)
}
                    
                  
                
                    
Hello, I am trying to get the new iPhone 16 Pro to achieve 4K 120 fps encoding while capturing the video feed from the default wide-angle camera on the back. We are using the Apple API to capture the individual frames from the camera as they are processed, and we receive them in this callback:
// this is the main callback function to handle video frames captured
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
We are then taking these frames as they come in and encoding them using VideoToolbox. After they are encoded, they are added to a ring buffer so we can access them later.
The problem is that when we encode these frames on an iPhone 16 Pro, we only reach 80-90 fps instead of 120 fps. We have removed as much processing as we can: we read some small attributes from the frame when it comes in, encode the frame, and then add it to our ring buffer.
I have attached a sample project that is broken down as much as possible to the basic task of encoding 4K 120 fps footage. Inside the sample app there is an FPS and PPS display: FPS is how many frames per second we receive from the camera, and PPS is how many frames per second we process (encode).
Link to sample project: https://github.com/jake-fishtech/EncoderPerformance
Thank you for any help or suggestions.
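In case a comparison point helps, below is a hedged sketch of a real-time HEVC compression session configuration. All keys are standard VideoToolbox properties; the specific values, and whether they close the 80-90 fps gap on an iPhone 16 Pro, are assumptions rather than measured results, and the sample project above remains the authoritative setup.

import CoreMedia
import VideoToolbox

// Sketch: create a compression session tuned for low-latency, real-time capture encoding.
func makeEncoderSession(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,   // submit frames with the outputHandler variant of EncodeFrame
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session = session else { return nil }

    // Real-time mode and no frame reordering keep per-frame encoder latency down.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering, value: kCFBooleanFalse)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate, value: NSNumber(value: 120))
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxKeyFrameInterval, value: NSNumber(value: 120))

    VTCompressionSessionPrepareToEncodeFrames(session)
    return session
}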
                    
                  
                
                    
The media services used for HLS streaming in an AVPlayer seem to crash if your segments are too large.
Anything over 20 Mbps seems to cause a crash. I have also tried reducing the segment length to 1 second, and it didn't help.
I am remuxing Dolby Vision and HDR video and want to avoid transcoding and losing any metadata, but the resulting segments are too large.
Is there a workaround for this? Otherwise it seems AVFoundation is not suited to high-bitrate HLS and I should be using MPV or similar.
                    
                  
                
                    
I'm trying to implement AirPlay in my app. I can successfully play back sound and trigger the AirPlay selector sheet. If the target device is a Bluetooth-only device, I can connect with no problem and stream the audio to it, but if the device is an AirPlay-specific device like a HomePod or an Apple TV, when I select it I get a spinning icon indicating that it is trying to connect, and eventually it times out without connecting.
I don't believe it is an AirPlay audio issue, because if I go to a different app (for example a podcast app), select my HomePods for output, and then switch back to my app, my audio streams to the HomePod correctly. My icon even changes color to indicate that the app is connected via AirPlay. But I cannot then disconnect it using the AirPlay selector.
The issue appears to be on the AirPlay selection side. I have spent several days troubleshooting it, mostly by trying alternative code suggestions (largely from ChatGPT) focused on the audio player, but that doesn't seem to be where the problem really is.
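For comparison, below is a common playback-only configuration that lets AirPlay devices be picked directly from an in-app route picker. Whether it fits this app's requirements is an assumption; the .longFormAudio policy hands the connection off to the system and is only available to playback-style categories.

import AVFoundation
import AVKit

// Sketch: playback-only session with the long-form routing policy,
// plus an AVRoutePickerView for in-app route selection.
func configureForAirPlay() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
    try session.setActive(true)
}

func makeRoutePicker() -> AVRoutePickerView {
    let picker = AVRoutePickerView()
    picker.prioritizesVideoDevices = false   // audio-first route list
    return picker
}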
                    
                  
                
                    
I've never had a problem with any update before, but as soon as I updated to 18.3, my camera started blurring at 1x and 2x. I use my camera daily for work, and this is unacceptable. I'm wondering if anyone else is having this issue; it's really frustrating.
                    
                  
                
                    
                      Case-ID: 10075936
PLATFORM AND VERSION
iOS
Development environment: Xcode 15, macOS 14.5
Run-time configuration: iOS 18.0.1
DESCRIPTION OF PROBLEM
Our customer experienced a one-way audio issue when switching from the built-in microphone to AirPods Pro (model: A2084, version: 6F21) during a VoIP call: the customer's voice could not be heard by the other party, but the customer could hear the other party's voice.
STEPS TO REPRODUCE
Here are the details:
After the issue occurred, subsequent VoIP calls experienced the same problem when using AirPods Pro, but not when using the built-in microphone. The issue could only be resolved by restarting the system; killing the app did not help.
Log and code analysis:
WebRTC listens for AVAudioSessionRouteChangeNotification. In the scenario above, when WebRTC receives the route change notification, it prints the audio session configuration. At this point the input channel count was 0, which is abnormal:
[Webrtc] (RTCLogging.mm:33): (audio_device_ios.mm:535 HandleValidRouteChange): RTC_OBJC_TYPE(RTCAudioSession):
{
category: AVAudioSessionCategoryPlayAndRecord
categoryOptions: 128
mode: AVAudioSessionModeVoiceChat
isActive: 1
sampleRate: 48000.00
IOBufferDuration: 0.020000
outputNumberOfChannels: 2
inputNumberOfChannels: 0
outputLatency: 0.021500
inputLatency: 0.005000
outputVolume: 0.600000
isPreferredSpeaker: 0
isCallkit: 0
}
If the app calls setPreferredInputNumberOfChannels at this point, it fails with error code -50:
setConfiguration:active:shouldSetActive:error:]): Failed to set preferred input number of channels(1): The operation couldn’t be completed. (OSStatus error -50.)
Our questions:
1. When the AVAudioSession is active and the category and mode are as expected, why is the input channel count 0?
2. Assuming the AVAudioSession state is abnormal at this point, why does killing the app not resolve the issue, and why does the system need to be restarted?
3. Is it possible that the category and mode the app fetches from the AVAudioSession are wrong? Do they need to be set again each time CallKit starts, even when the fetched values match the values we intend to set?
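For what it's worth, a hedged workaround sketch (not a confirmed fix): on a route change, detect the zero-input-channel state shown in the log and try to reconfigure and reactivate the session before resuming the WebRTC audio unit. The category, mode, and options below mirror a typical WebRTC/CallKit voice setup and are assumptions about the app's configuration.

import AVFoundation

// Sketch: attempt session recovery when the route change leaves no input channels.
func handleRouteChange(_ notification: Notification) {
    let session = AVAudioSession.sharedInstance()
    guard session.inputNumberOfChannels == 0 else { return }

    do {
        try session.setCategory(.playAndRecord,
                                mode: .voiceChat,
                                options: [.allowBluetooth])
        try session.setActive(false, options: .notifyOthersOnDeactivation)
        try session.setActive(true)
        print("Reactivated session, input channels: \(session.inputNumberOfChannels)")
    } catch {
        print("Session recovery failed: \(error)")
    }
}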
                    
                  
                
                    
I've established proper authorization for general Apple Music API calls, but when I use that same authorization to retrieve metadata from the latest music feeds (see https://developer.apple.com/documentation/applemusicfeed/requesting-a-feed-export), I get a 401 Unauthorized error.
As per the documentation, I'm simply issuing a GET against https://api.media.apple.com/v1/feed/album/latest.
Are there different entitlements needed for the Music Feed API?
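For reference, a minimal sketch of the request being made. It assumes developerToken is a JWT signed with a MusicKit-enabled key; whether the Apple Music Feed requires additional entitlements is exactly the open question here.

import Foundation

// Sketch: GET the latest album feed export with a developer token.
func requestLatestAlbumFeed(developerToken: String) async throws {
    var request = URLRequest(url: URL(string: "https://api.media.apple.com/v1/feed/album/latest")!)
    request.httpMethod = "GET"
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")

    let (data, response) = try await URLSession.shared.data(for: request)
    if let http = response as? HTTPURLResponse {
        print("Status:", http.statusCode)   // 401 is what the post reports here
    }
    print(String(data: data, encoding: .utf8) ?? "")
}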
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: General
  
  
    
    
  
  
              
                
                
              
            
          
                    
I am creating an app that decodes H.265 elementary streams on iOS.
I use VideoToolbox to decode from H.265 to NV12.
The decoded data is enqueued into an AVSampleBufferDisplayLayer as a CMSampleBuffer.
However, nothing is displayed in the VideoPlayerView. It remains black.
The decoding in VideoToolbox is successful. I confirmed this by saving the NV12 data in the CMSampleBuffer to a file and displaying it with another tool.
Why is nothing displayed in the VideoPlayerView?
I can provide other source code as well.
//
//  ContentView.swift
//  H265Decoder
//
//  Created by Kohshin Tokunaga on 2025/02/15.
//
import SwiftUI
struct ContentView: View {
    var body: some View {
        VStack {
            Text("H.265 Player (temp.h265)")
                .font(.headline)
            VideoPlayerView()
                .frame(width: 360, height: 640) // Adjust or make it responsive for iOS
        }
        .padding()
    }
}
#Preview {
    ContentView()
}
//
//  VideoPlayerView.swift
//  H265Decoder
//
//  Created by Kohshin Tokunaga on 2025/02/15.
//
import SwiftUI
import AVFoundation
struct VideoPlayerView: UIViewRepresentable {
    
    // Return an H265Player as the coordinator, and start playback there.
    func makeCoordinator() -> H265Player {
        H265Player()
    }
    
    func makeUIView(context: Context) -> UIView {
        let uiView = UIView(frame: .zero)
        
        // Base layer for attaching sublayers
        uiView.backgroundColor = .black // Screen background color (for iOS)
        
        // Create the display layer and add it to uiView.layer
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.bounds
        displayLayer.backgroundColor = UIColor.clear.cgColor
        
        uiView.layer.addSublayer(displayLayer)
        
        // Start playback
        context.coordinator.startPlayback()
        
        return uiView
    }
    
    func updateUIView(_ uiView: UIView, context: Context) {
        // Reset the frame of the AVSampleBufferDisplayLayer when the view's size changes.
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.layer.bounds
        
        // Optionally update the layer's background color, etc.
        uiView.backgroundColor = .black
        displayLayer.backgroundColor = UIColor.clear.cgColor
        
        // Flush transactions if necessary
        CATransaction.flush()
    }
}
//
//  H265Player.swift
//  H265Decoder
//
//  Created by Kohshin Tokunaga on 2025/02/15.
//
import Foundation
import AVFoundation
import CoreMedia
class H265Player: NSObject, VideoDecoderDelegate {
    
    let displayLayer = AVSampleBufferDisplayLayer()
    private var decoder: H265Decoder?
    
    override init() {
        super.init()
        
        // Initial configuration for the display layer
        displayLayer.videoGravity = .resizeAspect
        
        // Initialize the decoder (delegate = self)
        decoder = H265Decoder(delegate: self)
        
        // For simple playback, set isBaseline to true
        decoder?.isBaseline = true
    }
    
    func startPlayback() {
        // Load the bundled elementary stream (temp2.h265)
        guard let url = Bundle.main.url(forResource: "temp2", withExtension: "h265") else {
            print("File not found")
            return
        }
        do {
            let data = try Data(contentsOf: url)
            // Set FPS and video size as needed
            let packet = VideoPacket(data: data,
                                     type: .h265,
                                     fps: 30,
                                     videoSize: CGSize(width: 1080, height: 1920))
            
            // Decode as a single packet
            decoder?.decodeOnePacket(packet)
            
        } catch {
            print("Failed to load file: \(error)")
        }
    }
    
    // MARK: - VideoDecoderDelegate
    func decodeOutput(video: CMSampleBuffer) {
        // When decoding is complete, send the output to AVSampleBufferDisplayLayer
        displayLayer.enqueue(video)
    }
    
    func decodeOutput(error: DecodeError) {
        print("Decoding error: \(error)")
    }
}
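A hedged debugging suggestion, not a confirmed fix: if the decoded samples carry no usable presentation timestamps, AVSampleBufferDisplayLayer can stay black even though decoding succeeded. Marking each sample for immediate display before enqueueing it, and checking the layer's error state, can rule that out. The helper below is illustrative.

import CoreMedia
import AVFoundation

// Sketch: flag a sample for immediate display, enqueue it, and surface layer errors.
func enqueueForImmediateDisplay(_ sampleBuffer: CMSampleBuffer,
                                on layer: AVSampleBufferDisplayLayer) {
    if let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer,
                                                                      createIfNecessary: true),
       CFArrayGetCount(attachmentsArray) > 0 {
        let attachments = unsafeBitCast(CFArrayGetValueAtIndex(attachmentsArray, 0),
                                        to: CFMutableDictionary.self)
        CFDictionarySetValue(attachments,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }

    layer.enqueue(sampleBuffer)

    if layer.status == .failed {
        print("Display layer failed: \(String(describing: layer.error))")
        layer.flush()
    }
}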
                    
                  
                
                    
Hey there, I'm trying to display all of a user's albums using the MediaPlayer framework. Many albums return nil artwork, but I know the artwork exists because it shows up in the default Music app. There doesn't seem to be much rhyme or reason to what shows up and what doesn't: all downloaded albums display artwork, and some cloud album artwork displays as well. Here's the code I'm using to debug this.
let query = MPMediaQuery.albums()
if let albumCollections = query.collections {
    albums = albumCollections
}

for album in albums {
    let artwork = album.representativeItem?.artwork
    print(artwork, artwork?.image(at: CGSize(width: 100, height: 100)))
}
Any help would be greatly appreciated. Thanks!
                    
                  
                
                    
                      Hello,
I'm trying to receive Parquet files using the example provided in the documentation. I've done all the required steps but constantly receive error 500 with "Upstream Service Error". Looking at the issues list, it seems this error has existed for months. Is it possible to get this working?
                    
                  
                
                    
Movies taken with Android phones store their location metadata (and probably other metadata) in ways that are ignored by Apple's ecosystem (QuickTime Player, Photos.app).
I am considering creating a Spotlight importer so that this metadata becomes available to the system. But I have a couple of questions:
Can a Spotlight importer add new data (like location) to the data that the standard importer already captured? Or would the new importer need to take over the whole data gathering? If so, would macOS allow that?
Would that Spotlight importer somehow be used by e.g. Photos.app and QuickTime Player to pick up the location? Or would this end up with Spotlight "knowing" the location but Photos.app ignoring it?
If so, is there something more broadly useful than a Spotlight importer?
                    
                  
                
                    
The problem: when using an HLS live stream with AVPlayer on iOS/tvOS, the player first chooses the highest bandwidth, then slowly steps down to the lowest (within 1-3 minutes), eventually steps up again, and then repeats the step-down.
The AVPlayer error log sends events:
errorStatusCode: -12888, errorDomain: Optional("CoreMediaErrorDomain"), errorComment: Optional("The operation couldn't be completed. (CoreMediaErrorDomain error -12888 - Playlist File unchanged for longer than 1.5 * target duration
We use standard segments in CMAF format with a 2-second duration:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:147065903
#EXT-X-MAP:URI="video_1_4660000_init.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2"
#EXT-X-PROGRAM-DATE-TIME:2025-04-30T12:51:07
#EXTINF:2.000,
video_1_4660000_t17460174670001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
#EXTINF:2.000,
video_1_4660000_t17460174690001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
#EXTINF:2.000,
video_1_4660000_t17460174710001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
When using 6-second segments, the player stays stable at the highest bandwidth.
Is there a way to avoid this error, either in AVPlayer or in the HLS configuration?
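To confirm how the -12888 "playlist unchanged" entries line up with the bitrate switches, the item's error log can be observed. The sketch below uses standard AVFoundation notifications; the usual reading of this error is that the live playlist is not being refreshed at least every 1.5 x the 2-second target duration, but that diagnosis is not confirmed here.

import AVFoundation

// Sketch: log each new error-log entry so the -12888 events can be compared
// against the player's bitrate switches.
func observeErrorLog(for item: AVPlayerItem) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: AVPlayerItem.newErrorLogEntryNotification,
        object: item,
        queue: .main) { _ in
        guard let event = item.errorLog()?.events.last else { return }
        print("HLS error:", event.errorStatusCode, event.errorComment ?? "")
    }
}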
                    
                  
                
                    
I have a crash related to playing video in AVPlayerViewController with AVQueuePlayer. I download the video locally from the network and then initialize it using AVAsset and AVPlayerItem. I can't reproduce it locally, but crashes are reported in Firebase Crashlytics, only for users on iOS 18.4.0 and later, with this trace:
Crashed: com.apple.avkit.playerControllerBackgroundQueue
0 libobjc.A.dylib 0x1458 objc_retain + 16
1 libobjc.A.dylib 0x1458 objc_retain_x0 + 16
2 AVKit 0x12afdc __77-[AVPlayerController currentEnabledAssetTrackForMediaType:completionHandler:]_block_invoke + 108
3 libdispatch.dylib 0x1aac _dispatch_call_block_and_release + 32
4 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
5 libdispatch.dylib 0x6560 _dispatch_continuation_pop + 596
6 libdispatch.dylib 0x5bd4 _dispatch_async_redirect_invoke + 580
7 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
8 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
9 libsystem_pthread.dylib 0x4624 _pthread_wqthread + 232
10 libsystem_pthread.dylib 0x19f8 start_wqthread + 8
                    
                  
                
                    
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:
guard let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else { return }
let gainDict = gainmap as NSDictionary
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
However I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice.
Thanks!
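One possible route (a sketch, not necessarily the intended path for in-memory captures): run the same CGImageSource-based extraction on the capture's file data representation instead of on a file on disk. The function name is illustrative.

import AVFoundation
import ImageIO

// Sketch: reuse the CGImageSource auxiliary-data extraction on the capture's encoded data.
func hdrGainMapInfo(from photo: AVCapturePhoto) -> NSDictionary? {
    guard let data = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(data as CFData, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else {
        return nil
    }
    return info as NSDictionary
}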
                    
                  
                
                    
                      Hi,
our CoreAudio server plugin uses the SystemConfiguration.framework to store and restore specific shared, system-wide settings.
While our application can authenticate to gain write access to the shared configuration settings via SystemConfiguration.framework, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate.
Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this?
Thanks!
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: Audio, Tags: System Configuration, Core Audio, Inter-process communication, Service Management
              
                
                
              
            
          
                    
I found this phenomenon, and it can be reproduced reliably.
Suppose I take a photo with the triple camera while the scene is moving, or while I move the phone, say horizontally. I aim at an object and press the shutter at time T. At that moment the viewfinder shows the frame at T (call it T0), but the resulting photo corresponds to roughly T+100 ms.
If I take the photo with a single camera, moving the phone at the same speed and pressing the shutter when aiming at the same object, the resulting photo corresponds to roughly T+400 ms.
Let me describe the problem another way.
Suppose a row of cards is laid out horizontally on the table, numbered from left to right: 0, 1, 2, 3, 4, 5, 6...
Now aim the camera at the number 0 and move to the right at a uniform speed. The numbers pass through the viewfinder in increasing order. When aiming at the number 5, press the shutter.
With the triple camera, the photo will probably show 6, while with a single camera the photo will show about 9.
This means the triple camera captures photos faster, but why is that the case? Any explanation?