I am new to using Swift for a Mac application. I am trying to control the focus and other capabilities of an external UVC-compliant camera. However, I'm having trouble with this and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities, so it is clearly possible.
I've tried IOKit, but it seems complicated, and it did not return any capabilities or let me control the camera.
I also tried AVFoundation and was able to open the camera, but the following code did not work for me: device.isFocusPointOfInterestSupported returns false, and without that check the app crashes.
@IBAction func focusChanged(_ sender: NSSlider) {
    guard let device = videoDevice else { return }
    do {
        try device.lockForConfiguration()

        // Check if focus mode and point of interest are supported
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }

        if device.isFocusPointOfInterestSupported {
            // Map the slider value (0.0 to 1.0) to the focus point's X coordinate
            let focusX = CGFloat(sender.doubleValue)
            let focusPoint = CGPoint(x: focusX, y: 0.5) // Y coordinate is typically 0.5 (centered vertically)
            device.focusPointOfInterest = focusPoint
        } else {
            print("Focus point of interest is not supported on this device.")
        }

        device.unlockForConfiguration()

        // Log focus settings
        print("Focus point: \(device.focusPointOfInterest)")
        print("Focus mode: \(device.focusMode.rawValue)")
    } catch {
        print("Error adjusting focus: \(error)")
    }
}
Any help or advice is much appreciated.
                    
                  
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
                      Hello
We have an application that plays sounds via the system sound APIs from the AudioToolbox framework:
AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
AudioServicesPlaySystemSoundWithCompletion(soundID)
We make sure that an active audio session is available before playing the system sound. But when the device is connected to a Bluetooth A2DP device, the sound is played through the device speaker and not through the Bluetooth A2DP device.
Our AVAudioSession is configured with the following category options:
 [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP]
Sounds played through AVAudioPlayer, using similar code, are routed to the Bluetooth A2DP device.
Is this a bug in the AudioToolbox framework?
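For context, here is a minimal, self-contained sketch of the flow described above; the sound file name is hypothetical, and the category options are the ones quoted in the post.

import AVFoundation
import AudioToolbox

func playSystemSound() {
    // Configure the session with the options described above.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord,
                                options: [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP])
        try session.setActive(true)
    } catch {
        print("Audio session error: \(error)")
        return
    }

    // "ding.caf" is a hypothetical bundled resource.
    guard let url = Bundle.main.url(forResource: "ding", withExtension: "caf") else { return }
    var soundID: SystemSoundID = 0
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
    AudioServicesPlaySystemSoundWithCompletion(soundID) {
        AudioServicesDisposeSystemSoundID(soundID)
    }
}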
                    
                  
                
                    
                      I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback.
I am sending the audio data as RTP packets using the library at https://github.com/mindAndroid/swift-rtp.
Please help me resolve this issue. I have attached the code reference that I am currently using.
Thank you.
ViewController.swift
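For reference, here is a minimal sketch of the capture side described above (the RTP/UDP sending is omitted, and a mono or interleaved tap format is assumed). One common cause of crackle with this pattern is copying the buffer's full capacity rather than the mDataByteSize that was actually filled:

import AVFoundation

func startCapture() throws {
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Copy only the bytes that were actually filled (mDataByteSize);
        // copying the full capacity inserts garbage between valid chunks.
        let audioBuffer = buffer.audioBufferList.pointee.mBuffers
        guard let mData = audioBuffer.mData else { return }
        let data = Data(bytes: mData, count: Int(audioBuffer.mDataByteSize))
        // hand `data` to the RTP/UDP sender here
        _ = data
    }

    engine.prepare()
    try engine.start()
}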
                    
                  
                
                    
When I play a local video (downloaded into the sandbox), KVO on the AVPlayerItem status reports AVPlayerItemStatusFailed, and the error is: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x3004137e0 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}
Why does this happen? (POSIX error 24 is EMFILE: the process has run out of file descriptors.)
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: Video
                  
                
              
              
              
  
  
    
    
  
  
              
                
                
              
            
          
                    
Using AVSpeechUtterance for speech output on iOS with the language set to Simplified Chinese ("zh-CN"), the Chinese character 袆 (huī, first tone) is read incorrectly as 祎 (yī, first tone). We hope this can be fixed.
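For reference, a minimal reproduction sketch of the report above:

import AVFoundation

let synthesizer = AVSpeechSynthesizer() // keep a strong reference in real code
let utterance = AVSpeechUtterance(string: "袆") // expected "huī", spoken as "yī"
utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
synthesizer.speak(utterance)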
                    
                  
                
                    
                      For an upcoming update of one of my apps, I’m facing an issue:
The .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch.
However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts ("fluttering") in the output, which is not so nice (even at .overlap = 32).
Intuitively, I'd have thought that speeding a file up should produce fewer artifacts than slowing it down?
I’ve tried different sample rates (44.1 kHz and 48 kHz), but same result.
Grateful for any input on this 🙏
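For context, a minimal sketch of the node setup described above (file loading and scheduling are omitted; the rate and overlap values are the ones quoted in the post):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.15    // rates > 1.0 are where the artifacts appear
timePitch.overlap = 32.0 // maximum overlap, per the post

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)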
                    
                  
                
                    
Looking to output DV video to my JVC SR-VS30 video deck. I used to be able to do this, but with most FireWire functionality being deprecated, I'm not sure how to go about it. I found this old developer sample code that seems to do exactly what I'd like. Surely this could be rebuilt or updated for current macOS?
https://developer.apple.com/library/archive/samplecode/SimpleVideoOut/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000809-Intro-DontLinkElementID_2
                    
                  
                
                    
Hi everyone, I just upgraded my iPhone to iOS 18.1.1 and noticed the absence of the normal (off) mode on my AirPods. I can't stand the noise cancellation mode for too long, and the transparency one is overwhelming every time a hair brushes near my AirPods. Why is that? I don't really see the progression here. Can this be fixed in the next version? I don't want to buy another brand; I use them every day for many hours.
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: Audio
                  
                
              
              
              
  
  
    
    
  
  
              
                
                
              
            
          
                    
                      Hi folks,
When doing HLS v6 live streaming with fMP4 chunks, we noticed that when the encoder timestamps drift slightly and an #EXT-X-DISCONTINUITY tag is created in either the audio or the video playlist (in an ABR setup), the tag is not handled correctly by the player, leading to broken playback with a black screen or no audio (depending on which playlist the tag appears in).
We noticed that this often happens when the number of tags differs between the playlists (e.g. the audio playlist containing 1 tag while the video playlist contains 2 results in a black screen with audio).
Playing the same "broken" source with the Shaka player instead does not break playback at all.
Is there any fix, existing or upcoming, for AVPlayer?
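For illustration, a hypothetical audio media playlist carrying a discontinuity of the kind described (segment names, durations, and sequence numbers are made up); the matching video playlist in the same ABR set may carry a different number of these tags:

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:100
#EXT-X-MAP:URI="init_audio.mp4"
#EXTINF:4.000,
audio_100.m4s
#EXT-X-DISCONTINUITY
#EXTINF:4.000,
audio_101.m4s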
                    
                  
                
                    
While building and using a customized image picker, we found that metadata is not reflected correctly, so we are reporting it.
The situation is as follows.
The time or time zone of an image is changed in the Photos app.
Changing the time zone of an image with an actual capture date of 2024:11:08 08:27:44 → 2024:11:07 17:27:44
Image data is extracted from a PHAsset using PHImageManager.
The metadata is obtained from this image data.
The time zone information exposed in the Exif tags does not reflect the time or time zone changed in the Photos app.
let asset: PHAsset = ...
....
let options = PHImageRequestOptions()
options.isSynchronous = true
options.version = .current
options.deliveryMode = .highQualityFormat
options.resizeMode = .none
options.normalizedCropRect = .zero
options.isNetworkAccessAllowed = true
options.progressHandler = { progress, error, _, _ in }

PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { imageData, uti, orientation, info in
    // Avoid force-unwrapping: the request can fail and imageData can be nil.
    guard let imageData = imageData,
          let cgImageSource = CGImageSourceCreateWithData(imageData as CFData, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource, 0, nil) as? [String: Any],
          let exif = properties["{Exif}"] as? [String: Any] else { return }
    // `exif` here still contains the original capture time, not the edited one.
    _ = exif
}
Metadata Check
In this case the change is reflected in the PHAsset's creationDate, so it can be partially compensated for by forcibly replacing the metadata.
However, because PHAsset does not include time zone information, when the time zone is changed as well it is impossible to calculate the correct time for that time zone.
PHPicker
This issue is resolved when using the PHPickerResult provided by PHPicker.
extension PhotosPickerViewController: PHPickerViewControllerDelegate {
    public func picker(_ picker: PHPickerViewController,
                       didFinishPicking results: [PHPickerResult]) {
        .....

        for result in results {
            let identifier = UTType.image.identifier
            if result.itemProvider.hasItemConformingToTypeIdentifier(identifier) {
                result.itemProvider.loadDataRepresentation(forTypeIdentifier: identifier) { data, error in
                    guard let data = data,
                          let cgImageSource = CGImageSourceCreateWithData(data as CFData, nil),
                          let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource, 0, nil) as? [String: Any],
                          let exif = properties["{Exif}"] as? [String: Any]
                    else {
                        return
                    }
                    // `exif` here does reflect the edited time and time zone.
                    _ = exif
                }
            }
        }
    }
}
Metadata Check
Question
I wonder why this happens, and if this is normal behavior.
If I use a customized picker instead of the system picker Apple provides, is there any way to work around this situation?
                    
                  
                
                    
                      Hi. I am working on an audio app for iOS. I have implemented UI and handling which allows the user to change playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This is working well on device.
When I use AirPlay, it works for some devices and not for others. Some devices won't change playback rate and will always play at 1x speed.
Is this possibly a limitation of those third-party devices? Or is there something I'm missing or should check? I would love to get playback rate changes working across all AirPlay devices with our app.
Kind regards.
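For context, a minimal sketch of the rate handling described above; queuePlayer stands in for the app's AVQueuePlayer:

import AVFoundation

let queuePlayer = AVQueuePlayer() // stand-in for the app's player

func userSelectedPlaybackRate(_ rate: Float) {
    // e.g. 0.5, 1.0, 1.5, 2.0; some AirPlay receivers appear to stay at 1x.
    queuePlayer.rate = rate
}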
                    
                  
                
                    
                      // Here addObserver for routeChangeNotification
    func testAudioRoute() {
        // My app is a VoIP app, so I need to set "playAndRecord" and "allowBluetooth"
        try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.duckOthers, .allowBluetooth, .allowBluetoothA2DP])
        
        NotificationCenter.default.addObserver(self, selector: #selector(currentRouteChanged(noti:)), name: AVAudioSession.routeChangeNotification, object: nil)
    }
// Print "availableInputs" once a notification arrives
 @objc func currentRouteChanged(noti: Notification) {
        let availableInputs = AVAudioSession.sharedInstance().availableInputs?.compactMap({ $0.portType }) ?? []
        let currentRouteInputs = AVAudioSession.sharedInstance().currentRoute.inputs.compactMap({ $0.portType })
        let currentRouteOutputs = AVAudioSession.sharedInstance().currentRoute.outputs.compactMap({ $0.portType })
        print("willtest: \navailableInputs=\(availableInputs), \ncurrentRouteInputs=\(currentRouteInputs), \ncurrentRouteOutputs=\(currentRouteOutputs)")
        
        /*
         When BT (AirPods Pro 2) is CONNECTED: it will print like below when the notification comes; this is correct.
         ----------------------------------------------------------
         willtest:
         availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
         currentRouteInputs=[],
         currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: BluetoothA2DPOutput)]
         ----------------------------------------------------------         
         
         When BT (AirPods Pro 2) is DISCONNECTED: it will print like below when the notification comes; this is wrong.
         ----------------------------------------------------------         
         availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],   
         currentRouteInputs=[],
         currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: Speaker)]
         */
    }
So my question here is:
Why does availableInputs still contain the AVAudioSessionPort(_rawValue: BluetoothHFP) item even though I have already disconnected the BT device (put the AirPods in the case)?
By the way, if I tap the "Manual" button right after disconnecting the BT device, it also prints the "wrong" value for availableInputs; it becomes normal after about 3-4 seconds.
                    
                  
                
                    
I have seen demos that convert HDR video to SDR pixel buffers using AVAssetReader, AVVideoComposition, AVComposition, and other AVFoundation APIs.
But in some cases I want to render the HDR pixel buffers directly and record the video.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([videoDevice isVideoHDRSupported]) {
    NSError *error = nil;
    if ([videoDevice lockForConfiguration:&error]) {
        videoDevice.automaticallyAdjustsVideoHDREnabled = NO;
        videoDevice.videoHDREnabled = YES; // Enable HDR
        [videoDevice unlockForConfiguration];
    } else {
        NSLog(@"Error: %@", error.localizedDescription);
    }
}
Real-time processing of HDR data requires working on the video frame data (for example, applying filters) while ensuring that the processing chain supports 10-bit color depth and HDR metadata, and using the image buffers for object tracking, etc.
How can I solve this problem?
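As a hedged sketch of one piece of this (in Swift): when configuring the capture pipeline, you can ask AVCaptureVideoDataOutput for a 10-bit pixel format so HDR frames reach your processing code without an 8-bit conversion. Whether a given device/format actually offers it varies:

import AVFoundation

let output = AVCaptureVideoDataOutput()
// kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange ('x420') is a 10-bit 4:2:0 format.
let tenBitFormat = kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
if output.availableVideoPixelFormatTypes.contains(tenBitFormat) {
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: tenBitFormat]
}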
                    
                  
                
                    
                      I'm trying to setup a listener for kAudioProcessPropertyIsRunningOutput but it's never triggered. I get calls for kAudioProcessPropertyIsRunning and kAudioProcessPropertyDevices but not for kAudioProcessPropertyIsRunningInput or kAudioProcessPropertyIsRunningOutput.
class MyDelegate: PropertyListenerDelegate {
    func propertiesChanged(properties: [AudioObjectPropertyAddress]) {
        print(properties)
    }
}
var myDelegate = MyDelegate()
var processes = try AudioHardwareSystem.shared.processes
for process in processes {
    process.delegates += [myDelegate]
    try process.addListener(forProperties: [AudioObjectPropertyAddress(mSelector: kAudioPropertyWildcardPropertyID, mScope: kAudioObjectPropertyScopeWildcard, mElement: kAudioObjectPropertyElementWildcard)])
}
Xcode 16.1
macOS 15.0.1
                    
                  
                
                    
                      I have an iPad app that I want to run on Apple Silicon macs.
Everything works fine except for VNDocumentCameraViewController. According to the docs this class is available on:
iOS 13.0+ iPadOS 13.0+ Mac Catalyst 13.1+ visionOS 1.0+
yet when I try to use it I get "Document camera is not available" on my Mac Studio running macOS 15.2.
Is this expected behaviour?
Thanks
                    
                  
                
                    
                      I’m experiencing an unusual audio issue with AirPods on macOS Sequoia while developing VoIP applications like Zoom and FaceTime.
When AirPods are connected, the other party’s voice sometimes sounds unnaturally stretched (approximately twice as long).
This problem can be temporarily fixed by switching the sound output settings from AirPods to speakers and then back to AirPods.
From our analysis, the issue appears to be related to the sample rate provided by AudioObjectGetPropertyData.
Here’s what we’ve observed:
When the issue occurs, the AudioStreamBasicDescription.sampleRate for AirPods is reported as 48000.
Under normal conditions, it’s reported as 24000.
It seems like the system is mistakenly returning a sample rate that doesn’t match the AirPods’ actual settings, perhaps defaulting to a system speaker value.
Once the output setting is toggled, the correct sampleRate (24000) is retrieved.
This discrepancy causes our application to transmit the audio stream at 48000, leading to the distorted playback.
Has anyone encountered a similar issue or knows how to resolve it?
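For reference, a minimal sketch of the kind of query involved, using the device's nominal sample rate property (the exact property the app reads may differ):

import CoreAudio

func nominalSampleRate(of deviceID: AudioObjectID) -> Float64? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var sampleRate: Float64 = 0
    var size = UInt32(MemoryLayout<Float64>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &sampleRate)
    return status == noErr ? sampleRate : nil // reportedly 48000 when the bug occurs, 24000 normally
}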
                    
                  
                
                    
                      Hi. I work on an audio app for iOS which is successfully using the MPRemoteCommandCenter for commands like next, back, skip forward, skip backward etc.
I am trying to implement playback rate controls in my app (so that users can change the playback speed of audio to 0.5x or 2x for example).
While the above commands work, the changePlaybackRateCommand does not seem to. I have enabled the command, given it a target/handler, and set supported rates. With the other commands, this caused the UI on the Lock Screen, in Control Center, etc. to change by adding the control for the command (a next button for the next command, for example). However, it does not seem to do anything for the playback rate command.
I can implement my own "rate button" UI and rate change handling, but I'm wondering if this is a known bug on Apple's side. Looking online, it seems other people face the same issue and haven't been able to get this command to work. Why is this API provided if it doesn't seem to do anything? Is there something I'm missing?
Kind regards.
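For context, a minimal sketch of the setup described above (the supported rates are illustrative, and player stands in for the app's player):

import MediaPlayer
import AVFoundation

let player = AVQueuePlayer() // stand-in for the app's player

func configureRateCommand() {
    let command = MPRemoteCommandCenter.shared().changePlaybackRateCommand
    command.isEnabled = true
    command.supportedPlaybackRates = [0.5, 1.0, 1.5, 2.0]
    _ = command.addTarget { event in
        guard let event = event as? MPChangePlaybackRateCommandEvent else { return .commandFailed }
        player.rate = event.playbackRate
        return .success
    }
}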
                    
                  
                
              
                
              
              
                
Topic: Media Technologies, SubTopic: Audio
                  
                
              
              
              
  
  
    
    
  
  
              
                
                
              
            
          
                    
                      Hi,
I need to get the total number of parquet files present in the Apple Music Feed API for songs and artists. There are limit and offset options, but limit is capped at 200 records, and without knowing the total, the right offsets are uncertain.
How can I get the total number of parquet files without querying the Apple Music Feed API multiple times?
Need help regarding this. Thanks!
                    
                  
                
                    
                      I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
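For reference, a minimal sketch of the timing approach described above (the interval and tolerance are the quoted values). GCD timers in a backgrounded app are throttled by the system, which is consistent with the drift described:

import Foundation

let queue = DispatchQueue(label: "metronome.timer", qos: .userInteractive)
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
let start = CFAbsoluteTimeGetCurrent()

timer.schedule(deadline: .now(), repeating: .milliseconds(50), leeway: .milliseconds(0))
timer.setEventHandler {
    let elapsed = CFAbsoluteTimeGetCurrent() - start
    // Compare `elapsed` against the next expected beat time and trigger the
    // click when it falls inside the +/- 0.003 s window described above.
    _ = elapsed
}
timer.resume()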
                    
                  
                
                    
I've encountered an issue when trying to transcribe audio during a SharePlay session in visionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and the .mixWithOthers option, which is supposed to let multiple audio sources coexist without interference.
Here’s the relevant code snippet that throws the error:
private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true
    
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    
    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }
    
    audioEngine.prepare()
    try audioEngine.start()
    
    return (audioEngine, request)
}
The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn't appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear way to proceed.
Has anyone else faced similar issues with AVAudioSession and SharePlay in visionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!
The specific error is:
The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.) For what it's worth, 561145187 is the four-character code '!rec', i.e. the session could not start recording.