Hi everyone,
I’m testing audio recording on an iPhone 15 Plus using AVFoundation.
Here’s a simplified version of my setup:
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 8000,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false
]
audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings)
audioRecorder?.record()
When I check the recorded file’s sample rate, it logs:
Actual sample rate: 8000.0
However, when I inspect the hardware sample rate:
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)
print("Hardware sample rate:", session.sampleRate)
I consistently get:
Hardware sample rate: 48000.0
My questions are:
1. Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally?
2. Is there any way to force the hardware to record natively at 8 kHz? (See the setPreferredSampleRate sketch below.)
3. If not, what's the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices?
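For reference, here is a minimal sketch of the only knob I know of, AVAudioSession.setPreferredSampleRate(_:). As far as I understand it is only a hint, and session.sampleRate afterwards reports what the hardware actually uses:
import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default)
// A request, not a guarantee: the system may keep the hardware at 48 kHz
// and resample to the recorder's 8 kHz setting internally.
try session.setPreferredSampleRate(8000)
try session.setActive(true)
print("Preferred: 8000, hardware rate now:", session.sampleRate)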
Thanks in advance for your guidance!
                    
                  
                    
                      I'm writing some camera functionality that uses AVCaptureVideoDataOutput.
I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput.
My question is then, if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread?
I have a more practical example below, showing how I am accessing something from the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource.
I have setup similar to the following:
// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("avf_camera_queue", nullptr);
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...
// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...; 
MySampleBufferDelegate *sampleBufferDelegate = [[MySampleBufferDelegate alloc] init];
// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];
AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate
                                          queue:queue];
[captureSession addOutput:captureVideoDataOutput];
[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need manual barrier?
And then in MySampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
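As far as I can tell, stopRunning only guarantees the session has stopped delivering new buffers; a callback that was already dispatched to the delegate queue may still be running. Since that queue is serial, a minimal sketch of a drain barrier (shown in Swift for brevity; dispatch_sync(queue, ^{}) is the Objective-C equivalent) would be:
captureSession.stopRunning()
// An empty sync block on the serial delegate queue returns only after any
// in-flight didOutputSampleBuffer callback has finished.
queue.sync { }
// Only now is it safe to tear down frameMetaData (the resource captured above).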
                    
                  
                
                    
                      Hi, I believe I've found a potential error in the sample code on the documentation page for creating and using a process tap with an aggregate device. The issue is in the section explaining how to add a tap to the aggregate device. I have already filed a Feedback Assistant ticket on this (ID: FB17411663) but haven't heard back for months.
Capturing system audio with Core Audio taps
The sample code for modifying the kAudioAggregateDevicePropertyTapList incorrectly uses the tapID as the target AudioObjectID when calling AudioObjectSetPropertyData.
// (Code to get the list and potentially modify listAsArray)
if var listAsArray = list as? [CFString] {
    // ... (modification logic) ...
    // Set the list back on the aggregate device. <--- The comment is correct
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        // INCORRECT: This call uses tapID as the target object.
        AudioObjectSetPropertyData(tapID, &propertyAddress, 0, nil, propertySize, list)
    }
}
The kAudioAggregateDevicePropertyTapList is a property that belongs to the aggregate device, not the individual tap. Therefore, to set this property, the AudioObjectSetPropertyData function must target the AudioObjectID of the aggregate device itself. Using tapID as the first argument is logically incorrect for this operation and will not update the aggregate device as intended.
Furthermore, the preceding AudioObjectGetPropertyData call to fetch the list also appears to use the incorrect tapID as its target in the sample.
The AudioObjectID for both getting and setting this property should be the ID of the aggregate device.
_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
_ = AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, newList)
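Putting it together, the corrected block would look roughly like this (same structure as the sample; only the target AudioObjectID changes):
// Corrected sketch: both the get and the set target the aggregate device, not the tap.
_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
if var listAsArray = list as? [CFString] {
    // ... (modification logic, e.g. appending the new tap's UID) ...
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, list)
    }
}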
Thank you!
                    
                  
                
                    
                      I have implemented fetching Apple Music preview songs using a Swift framework integrated into a Unity app.
My requirement is to fetch full tracks from a user’s Apple Music library and play them inside Unity.
To do this, I understand that I need to handle authentication, generate a Developer Token, and then obtain a Music User Token to access the user’s Apple Music content.
Currently, I have an Individual Apple Developer account (not Organization).
Based on my research, it seems that:
With an Individual account, I can implement this functionality and even upload builds to TestFlight for internal testing.
However, when releasing the app publicly on the App Store, full-track playback may be restricted for Individual accounts and allowed only for Organization accounts.
👉 Can you confirm if this understanding is correct?
👉 Specifically, is it possible for an Individual account to fetch and play full-length tracks from a subscribed Apple Music user’s library (at least for internal/TestFlight testing)?
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: General
                    
I’m running HomePod OS 26 on two HomePod minis and OS 18.6 on my main HomePod (original).
I’ve enabled Crossfade in the Home app.
I’m playing Apple Music directly in the HomePod mini.
Crossfade just doesn’t work on any HomePod.
I can understand it not working on the HomePod - but why isn’t it working on the minis running OS 26?
I’ve tried disabling and re-enabling Crossfade, rebooting the HomePods, etc., but nothing works.
                    
                  
                
                    
How can media resources in my app be recommended to the system media control center, like TikTok in the attached picture?
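Assuming this is about appearing in the system's Now Playing controls (Control Center / Lock Screen), a minimal sketch with the MediaPlayer framework would look something like the following; player is a placeholder for your AVPlayer, and the audio session is assumed to be active with a .playback category:
import MediaPlayer

// Publish Now Playing metadata so the system media controls can surface it.
MPNowPlayingInfoCenter.default().nowPlayingInfo = [
    MPMediaItemPropertyTitle: "Track title",
    MPMediaItemPropertyArtist: "Artist",
    MPNowPlayingInfoPropertyPlaybackRate: 1.0
]

// Respond to the system transport controls.
let commands = MPRemoteCommandCenter.shared()
_ = commands.playCommand.addTarget { _ in player.play(); return .success }
_ = commands.pauseCommand.addTarget { _ in player.pause(); return .success }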
                    
                  
                
                    
                      When changing a camera's exposure, AVFoundation provides a callback which offers the timestamp of the first frame captured with the new exposure duration: AVCaptureDevice.setExposureModeCustom(duration:, iso:, completionHandler:).
I want to get a similar callback when changing frame duration.
After setting AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration to a new value, how can I compute the index or the timestamp of the first camera frame that was captured using the newly set frame duration?
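I don't know of a built-in completion handler for frame-duration changes, so here is a hedged workaround sketch: watch the spacing between presentation timestamps in a video data output and flag the first frame whose spacing matches the newly requested duration. FrameDurationWatcher and the 30 fps target are hypothetical names and values, used only for illustration:
import AVFoundation

final class FrameDurationWatcher: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var targetDuration = CMTime(value: 1, timescale: 30)   // hypothetical: 1/30 s
    private var lastPTS: CMTime?
    private var reported = false

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = sampleBuffer.presentationTimeStamp
        if let last = lastPTS, !reported {
            let spacing = (pts - last).seconds
            // Allow a small tolerance; capture timestamps jitter slightly.
            if abs(spacing - targetDuration.seconds) < 0.001 {
                reported = true
                print("First frame at the new duration (approx):", pts.seconds)
            }
        }
        lastPTS = pts
    }
}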
                    
                  
                
                    
                      Hi everyone,
We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners.
The core idea is:
– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions with the ability to extend them;
– users can skip a match and move on to the next one.
We have explored WebRTC and Twilio, but unfortunately, they don’t fit our use case.
Question:
What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario?
Has anyone encountered a similar challenge and can recommend which technologies or tools to use?
Thanks in advance!
                    
                  
                
                    
                      Hello,
I'm evaluating the Apple Music Feed dataset and I noticed that the total number of songs available in the feed is too small. As of today, the number of objects returned in each feed is:
51,198,712 albums
23,093,698 artists
173,235,315 songs
This gives an average of 3.38 songs per album, which seems quite low. Also, iterating over the data, I see that there are albums referencing songs that don't exist in the songs feed. I would like to know:
Is the feed data incomplete?
If so, in what situations an object may be missing from the feed?
Thank you in advance!
                    
                  
                
                    
                      Dear Sirs,
I’ve written a virtual audio driver based on AudioDriverKit, running as a dext in my macOS app. Sometimes, when waking up from a sleep state, the recording side of my driver extension seems to hang and I don’t see any calls to my io_operation callback. Then a recording app such as a DAW seems to hang when trying to start a recording. This doesn’t happen after short sleep states or after a complete restart of my MacBook.
I already opened a case in Feedback-Assistant on 5th of May (FB17503622) which also includes a sysdiagnose and a ktrace but I didn't get any feedback so far. Meanwhile some of our customers are getting angry and I'd like to know if there's anything I could do to fix this problem on my side.
We’re not sure whether this worked in previous macOS versions; we don’t think we observed it before 15.3.1, but at least since 15.3.1 we’ve seen this problem.
Best regards,
Johannes
                    
                  
                
                    
I have a feature requirement: switch the writer used for file writing every 5 minutes, and then quickly merge the last two files. How can I ensure that the merged file is seamlessly combined and that the audio and video stay synchronized? Currently the merged video has glitches and the audio is out of sync. If anyone has experience in this area, I would be extremely grateful for a solution.
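To frame the question, here is a minimal sketch of the kind of composition-based merge I mean, using AVMutableComposition and a passthrough export; segmentURLs and outputURL are placeholders, and sample-accurate sync still depends on the segments being cut on clean boundaries:
import AVFoundation

func merge(segmentURLs: [URL], to outputURL: URL) async throws {
    let composition = AVMutableComposition()
    var cursor = CMTime.zero
    for url in segmentURLs {
        let asset = AVURLAsset(url: url)
        let duration = try await asset.load(.duration)
        // Append each segment's full time range back to back.
        try composition.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                        of: asset, at: cursor)
        cursor = CMTimeAdd(cursor, duration)
    }
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    await export.export()
    if export.status != .completed {
        print("Export failed:", export.error as Any)
    }
}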
                    
                  
                
                    
                      Session player regions populate blank, with no sound media when tracks or regions are created.
                    
                  
                
                    
                      I'm creating Live Photos programmatically in my app using the Photos and AVFoundation frameworks. While the Live Photos work perfectly in the Photos app (long press shows motion), users cannot set them as motion wallpapers. The system shows "Motion not available" message.
Here's my approach for creating Live Photos:
// 1. Create video with required metadata
let writer = try AVAssetWriter(outputURL: videoURL, fileType: .mov)
let contentIdentifier = AVMutableMetadataItem()
contentIdentifier.identifier = .quickTimeMetadataContentIdentifier
contentIdentifier.value = assetIdentifier as NSString
writer.metadata = [contentIdentifier]
// Video settings: 882x1920, H.264, 30fps, 2 seconds
// Added still-image-time metadata at middle frame
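// Sketch of that still-image-time item (the commonly used approach, shown here for
// completeness): the writer gets a timed-metadata input wrapped in an
// AVAssetWriterInputMetadataAdaptor; "adaptor" below is that adaptor, and
// "middleFrameTime" is a placeholder for the chosen frame's time.
let stillImageTime = AVMutableMetadataItem()
stillImageTime.key = "com.apple.quicktime.still-image-time" as NSString
stillImageTime.keySpace = .quickTimeMetadata
stillImageTime.value = 0 as NSNumber
stillImageTime.dataType = "com.apple.metadata.datatype.int8"
adaptor.append(AVTimedMetadataGroup(items: [stillImageTime],
                                    timeRange: CMTimeRange(start: middleFrameTime,
                                                           duration: CMTime(value: 200, timescale: 3000))))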
// 2. Create HEIC image with asset identifier
var makerAppleDict: [String: Any] = [:]
makerAppleDict["17"] = assetIdentifier  // Required key for Live Photo
metadata[kCGImagePropertyMakerAppleDictionary as String] = makerAppleDict
// 3. Generate Live Photo
PHLivePhoto.request(
    withResourceFileURLs: [photoURL, videoURL],
    placeholderImage: nil,
    targetSize: .zero,
    contentMode: .aspectFit
) { livePhoto, info in
    // Success - Live Photo created
}
// 4. Save to Photos library (both resources go on the same creation request,
//    inside a photo library change block)
try await PHPhotoLibrary.shared().performChanges {
    let creationRequest = PHAssetCreationRequest.forAsset()
    creationRequest.addResource(with: .photo, fileURL: photoURL, options: nil)
    creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: nil)
}
What I've Tried
Matching exact video specifications from Camera app (882x1920, H.264, 30fps)
Adding all documented metadata (content identifier, still-image-time)
Testing various video durations (1.5s, 2s, 3s)
Different image formats (HEIC, JPEG)
Comparing with exiftool against working Live Photos
Expected Behavior
Live Photos created programmatically should be eligible for motion wallpapers, just like those from the Camera app.
Actual Behavior
System shows "Motion not available" and only allows setting as static wallpaper.
Any insights or workarounds would be greatly appreciated. This is affecting our users who want to use their created content as wallpapers.
Questions
Are there additional undocumented requirements for Live Photos to be wallpaper-eligible?
Is this a deliberate restriction for third-party apps, or a bug?
Has anyone successfully created Live Photos that work as motion wallpapers?
Environment
iOS 17.0 - 18.1
Xcode 16.0
Tested on iPhone 16 Pro
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: Photos & Camera. Tags: LivePhotosKit JS, PhotoKit, Core Image, AVFoundation
  
              
                
                
              
            
          
                    
                      When making a call to https://api.music.apple.com/v1/me/library/artists to get a user's library artists, it returns the following (as an example):
[
  {
    id: 'r.FCwruQb',
    type: 'library-artists',
    href: '/v1/me/library/artists/r.FCwruQb?l=en-US',
    attributes: { name: 'A Great Big World' }
  },
  {
    id: 'r.7VSWOgj',
    type: 'library-artists',
    href: '/v1/me/library/artists/r.7VSWOgj?l=en-US',
    attributes: { name: 'Aaliyah' }
  },
  ...
]
If I try to use an artist id from that returned data to look up additional information about the artist by calling https://api.music.apple.com/v1/catalog/us/artists/{id}, it fails.
User Library Artists don't seem to equal Catalog Artists.
It'd be great if there was a way to use these interchangeably. Am I missing something?
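One avenue that looks promising, if I'm reading the Apple Music API docs right: library resources expose a catalog relationship, so the mapping should go through include=catalog (or the /catalog relationship route) rather than passing the library ID to the catalog endpoint. A hypothetical request sketch, assuming valid developerToken and musicUserToken values:
import Foundation

var request = URLRequest(url: URL(string:
    "https://api.music.apple.com/v1/me/library/artists/r.FCwruQb?include=catalog")!)
request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")
let (data, _) = try await URLSession.shared.data(for: request)
// The corresponding catalog artist (with its catalog ID) should appear under
// data[0].relationships.catalog.data in the decoded JSON.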
                    
                  
                
                    
                      I am using Apple's original Lightning Digital AV-adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via a HDMI cable.
I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:).
The callback is being called regularly, but with increasing delay relative to the CADisplayLink.timestamp, so the next time the callback is called, I have less and less time to draw the next frame (see the snippet below).
Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approx -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely.
Changing the external display's resolution (UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same).
When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable, it does not drift and it is very close to 0.0166667, as expected.
Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle ...or is the problem somewhere else?
Can the drifting be stopped?
func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
                    
                  
                
                    
One thing I've noticed on tvOS 26 is that if you try to set the AVPlayerViewController customInfoViewControllers property while the Content Tabs are on screen, your app will crash.
*** Terminating app due to uncaught exception 'UIViewControllerHierarchyInconsistency', reason: 'trying to add child view controller that is already presented: <AVInfoPanelViewController: 0x1030cdc00>'
*** First throw call stack:
(0x18a7167bc 0x189a77510 0x18a7166a8 0x1ab425658 0x1b2ee9d54 0x1b2efcd60 0x1b2eaf3f0 0x1080f744c 0x107e021a8 0x107e01b3c 0x18de41c14 0x18de41ba8 0x18de48d28 0x18ad9e358 0x101fac5f0 0x101fc6228 0x101fe7278 0x101fbc6fc 0x101fbc63c 0x18a67a2e0 0x18a679418 0x18a673b34 0x1937e4d5c 0x1abb36588 0x1abb3ae80 0x1aae9dec4 0x108610174 0x1086100e4 0x108615140 0x189abd4d0)
I've logged a feedback (FB19554461) but it's getting awfully late in the dev cycle. So I've been trying to think of a workaround.
The problem is that customInfoViewControllers is pretty declarative in nature. There are no properties or delegate methods I am aware of that let me know when they are displaying or not.
One trick I came up with was seeing if my custom info view controller's view was "visible" or not. I put that in quotes because it turns out it can be visible even when I think it's not: when the transport bar is scrolled to the top, my custom VC still has its top pixels showing, so it gets a viewDidAppear call. So instead I check whether my view controller's view is completely visible, i.e. based on the result of CGRect's contains method. And that works! But the problem is it only accounts for my own custom info view controllers, and not the standard one that Apple provides. I can't think of a way to know whether that one is showing.
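For clarity, a rough sketch of that check (customVC and playerViewController are placeholders for one of my customInfoViewControllers and the AVPlayerViewController):
import UIKit
import AVKit

func isCustomInfoPanelFullyVisible(_ customVC: UIViewController,
                                   in playerViewController: AVPlayerViewController) -> Bool {
    guard let customView = customVC.viewIfLoaded,
          let containerView = playerViewController.viewIfLoaded,
          customView.window != nil else { return false }
    // Convert the panel's bounds into the player view controller's coordinate space
    // and require it to be entirely on screen.
    let frameInContainer = customView.convert(customView.bounds, to: containerView)
    return containerView.bounds.contains(frameInContainer)
}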
Any ideas?
                    
                  
                
                    
                      Hi,
I've had a new deck installed in my car for about 1.5 weeks.
I'm having compatibility issues with my iPhone 15 Pro Max.
It happens both wired and wirelessly; I get the error "Accessory not supported by this device". It used to happen all the time; now it's 50/50. Sometimes it works.
I've removed and re-added Bluetooth multiple times on the phone and the deck. I bought a Belkin USB-C to USB-A cable today and it seemed to fix it, but the problem comes back.
I've changed the setting "FaceID and passcode-allow access when locked-accessories."
The car stereo guy reckons it's definitely an issue with the phone, not the deck; I'm inclined to believe him since the error says "by this device".
Any advice appreciated.
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: Audio
                    
                      Hi,
I have just implemented an Audio Unit v3 host.
AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
  
AudioComponentDescription description;
guint i, i_stop;
  
if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}
audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];
/* effects */
description = (AudioComponentDescription) {0,};
  
description.componentType = kAudioUnitType_Effect;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
  
for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
    (gpointer) av_component_arr[i]);
}
  
/* instruments */
description = (AudioComponentDescription) {0,};
  
description.componentType = kAudioUnitType_MusicDevice;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
  
for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
    (gpointer) av_component_arr[i]);
}
But this doesn't show me Audio Unit v2 plugins, why?
regards, Joël
                    
                  
                
                    
                      Hi,
I just started to develop audio unit hosting support in my application.
Offline rendering seems to work except that I hear no output, but why?
I suspect something goes wrong with the player.
I connect to CoreAudio in a different location in the code.
Here are some error messages I've faced so far:
2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae]     AVAudioEngineGraph.mm:4668  Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae]     AVAudioEngineGraph.mm:4668  Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927]            AUAudioUnit.mm:1417  Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae]            AVAEInternal.h:104   [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852
I have implemented an AVAudioEngine based AudioUnit host. Here I instantiate player and effect:
/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
  
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;
av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;
/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];
/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
  
fx_audio_unit_audio->av_audio_unit = av_audio_unit;
  
/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
  
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;
/* output node */
[[AVAudioOutputNode alloc] init];
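/* note: this allocates an AVAudioOutputNode that is never assigned or attached;
   the engine's own [audio_engine outputNode] is what actually gets connected below */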
  
/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];
[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];
ns_error = NULL;
  
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
   format:av_format
   maximumFrameCount:buffer_size error:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("enable manual rendering mode error - %d", [ns_error code]);
}
ns_error = NULL;
      
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}
Then I render in a dedicated thread.
ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio engine start - %d", [ns_error code]);
}
  
[av_audio_sequencer prepareToPlay];
ns_error = NULL;
  
[av_audio_sequencer startAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio sequencer start - %d", [ns_error code]);
}
[av_audio_player_node play];
while(is_running){
  /* pre sync */
  /* IO buffers */
  av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
  av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;
  /* fill input buffer */
  /* schedule av input buffer */
  frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;
  av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;
  AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];
  [av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];
	  
  /* render */
  ns_error = NULL;
	  
  status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];
    
  if(ns_error != NULL &&
     [ns_error code] != noErr){
    g_warning("render offline error - %d", [ns_error code]);
  }
}
regards, Joël
                    
                  
                
                    
I am trying to use the SpeechDetector module in the Speech framework along with SpeechTranscriber, and it is giving me an error:
Cannot convert value of type 'SpeechDetector' to expected element type 'Array.ArrayLiteralElement' (aka 'any SpeechModule')
Below is how I am using it:
let speechDetector = Speech.SpeechDetector()

let transcriber = SpeechTranscriber(locale: Locale.current,
                                    transcriptionOptions: [],
                                    reportingOptions: [.volatileResults],
                                    attributeOptions: [.audioTimeRange])

speechAnalyzer = try SpeechAnalyzer(modules: [transcriber, speechDetector])