Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post · Replies · Boosts · Views · Activity

Data Persistence of AVAssets
Hey, I am fairly new to working with AVFoundation. As far as I could research on my own, if I want to get metadata from, say, a .m4a audio file, I have to fetch the data and then create an AVAsset. My files are all on local servers, so I can't just pass in the URL. The metadata extraction works fine; however, those AVAssets create a huge overhead in storage consumption.

To my knowledge, the Data instances for each audio file and the AVAssets should only live inside the function I call to extract the metadata, yet they still seem to persist on disk: I can clearly see the app's file size increase by multiple gigabytes (roughly the size of the library I test with). The only data I purposefully save with SwiftData is the album artwork.

Is this normal behavior for AVAssets, or am I missing some detail?

PS. If I forgot to mention something important, please ask. This is my first ever post, so I'm not too sure what is worth mentioning. Thank you in advance! Denis
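A minimal sketch of one way to approach this, assuming the raw bytes arrive as Data from the server: write them to a temporary file, read the metadata through AVURLAsset, and explicitly delete the temporary file afterwards so nothing accumulates on disk. The temporary-file layout and function name here are illustrative assumptions, not taken from the post.

import AVFoundation

// Extracts metadata from raw audio bytes without leaving files behind.
// Assumes `data` holds a complete .m4a file fetched from the local server.
func extractMetadata(from data: Data) async throws -> [AVMetadataItem] {
    // Write to a uniquely named temporary file so AVURLAsset can read it.
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("m4a")
    try data.write(to: tempURL)

    // Remove the temporary file no matter how we leave this function.
    defer { try? FileManager.default.removeItem(at: tempURL) }

    let asset = AVURLAsset(url: tempURL)
    return try await asset.load(.metadata)
}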
Replies: 1 · Boosts: 0 · Views: 166 · Activity: 2w
Help with CoreAudio Input level monitoring
I have spent the past two weeks diving into Core Audio and have seemingly run into a wall. For my initial test, I am simply trying to create an AUGraph for monitoring input levels from a user-chosen audio input device (multi-channel in my case). I was not able to find any way to monitor input levels of a single AUHAL input device, so I decided to create a simple AUGraph for input level monitoring. The graph looks like:

[AUHAL Input Device] -> [B1] -> [MatrixMixerAU] -> [B2] -> [AUHAL Output Device]

B1 is an audio stream consisting of all the input channels available from the input device. The MatrixMixer has metering mode turned on, and level meters are read from each submix of the MatrixMixer using kMatrixMixerParam_PostAveragePower. B2 is a stereo (2-channel) stream from the MatrixMixerAU to the default audio device; however, since I don't really want to pass audio through to an actual output, I have muted the volume on the MatrixMixerAU output channel. I tried using a GenericOutputAU instead of the default system output, but the GenericOutputAU never seems to pull data from the ring buffer (the graph renderProc is never called if a GenericOutputAU is used instead of the AUHAL default output device).

I have not been able to get this simple graph to work. I do not see any errors when creating and initializing the graph, and I have verified that the inputProc is being called to fill the ring buffer, but when I read the level of the MatrixMixer, the levels are always -758 (silence). I have posted my demo project on GitHub in hopes of finding someone with Core Audio expertise to help with this problem. I am willing to move this to DTS code-level support if there is someone in DTS with Core Audio experience.

Notes:
My app is not sandboxed in this test.
I have tried with and without hardened runtime with Audio Input checked.
The multichannel audio device I am using for testing is the Audient iD14 USB-C audio device. It supports 12 input channels and 6 output channels. All input channels have been tested and are working in Ableton Live and Logic Pro.
Of particular interest, I can't even get the Apple CAPlayThrough demo to work on my system. I see no errors when creating the graph, but all I hear is silence. The MatrixMixerTest from the Apple documentation archives does work, but note that that demo does not use audio input devices; it reads audio into the graph from an audio file.

Link to GitHub project page. Diagram of AUGraph for initial test (code that is on GitHub).

Once I get audio input level metering to work, my plan is to implement something like Phase 2 below, with the purpose of capturing a stereo input stream, mixing to mono, and sending it to lowpass, bandpass, and highpass AUs, again using the MatrixMixer to monitor the levels out of each filter. I have no plans for passthrough audio (sending actual audio out to devices); I am simply monitoring input levels. Diagram of ultimate scope: rendering audio levels of a stereo-to-mono stream after passing through various filters.
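For reference, a minimal sketch of how a post-average power level is typically read from a MatrixMixer unit once metering is enabled. The mixerUnit and element index are placeholders for values from the poster's own graph (obtained via AUGraphNodeInfo), not taken from the linked project, and depending on routing the level may need to be read on the input rather than the output scope.

import AudioToolbox

// Reads the post-average power (in dB) for one mixer element.
// Assumes `mixerUnit` is the AudioUnit of the MatrixMixer node and the graph is running.
func postAveragePower(of mixerUnit: AudioUnit, element: AudioUnitElement) -> AudioUnitParameterValue {
    // Metering must be switched on before any levels are reported.
    var metering: UInt32 = 1
    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_MeteringMode,
                         kAudioUnitScope_Output,
                         element,
                         &metering,
                         UInt32(MemoryLayout<UInt32>.size))

    var level: AudioUnitParameterValue = 0
    AudioUnitGetParameter(mixerUnit,
                          kMatrixMixerParam_PostAveragePower,
                          kAudioUnitScope_Output,
                          element,
                          &level)
    return level   // very large negative values indicate silence / no signal reaching the mixer
}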
Replies: 0 · Boosts: 0 · Views: 190 · Activity: 2w
Recording audio from a microphone using the AVFoundation framework does not work after reconnecting the microphone
There are different microphones that can be connected via a 3.5 mm jack, via USB, or via Bluetooth; the behavior is the same in each case. The code below gets access to the microphone (connected to the 3.5 mm audio jack) and starts an audio capture session, at which point the microphone-in-use icon appears. The capture of the audio device (microphone) continues for a few seconds, then the session stops and the icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start an audio capture session; the icon is displayed again. After a few seconds, access to the microphone stops, the audio capture session stops, and the icon disappears.

Next, we perform the same actions, but after the first stop of microphone access we pull the microphone plug out of the connector and insert it back before starting the second session. In this case the second access attempt begins and the running part of the program does not return errors, but the microphone-in-use icon is not displayed, and this is the problem. After the program is quit and restarted, the icon is displayed again. This is only the tip of the iceberg, since it manifests itself as an inability to record sound from the microphone after reconnecting it, until the program is restarted.

Is this normal behavior of the AVFoundation framework? Is it possible to make access occur correctly after reconnecting the microphone, so that the usage indicator is displayed? What additional actions should the programmer perform in this case? Is this behavior described somewhere in the documentation?

Below is the code that demonstrates the described behavior. I am also attaching an example of the microphone usage indicator icon. Computer description: MacBook Pro 13-inch 2020, Intel Core i7, macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

#include <AVFoundation/AVFoundation.h>
#include <Foundation/NSString.h>
#include <Foundation/NSURL.h>

AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;

std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;

void receiveResponse() {
    std::lock_guard<std::mutex> lock(mutex);
    responseToAccessRequestReceived = true;
    conditionVariable.notify_one();
}

void waitForResponse() {
    std::unique_lock<std::mutex> lock(mutex);
    conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}

void requestPermissions() {
    responseToAccessRequestReceived = false;
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
        const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
        std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
        receiveResponse();
    }];
    waitForResponse();
}

void timer(int timeSec) {
    for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining) {
        std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

bool updateAudioInput() {
    [m_captureSession beginConfiguration];
    if (m_audioOutput) {
        AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
        [m_captureSession removeConnection:lastConnection];
    }
    if (m_audioInput) {
        [m_captureSession removeInput:m_audioInput];
        [m_audioInput release];
        m_audioInput = nullptr;
    }
    AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID:[NSString stringWithUTF8String:"BuiltInHeadphoneInputDevice"]];
    if (!audioInputDevice) {
        std::cout << "Error input audio device creating" << std::endl;
        return false;
    }
    // m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:nil];
    // NSError *error = nil;
    NSError *error = [[NSError alloc] init];
    m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
    if (error) {
        const auto code = [error code];
        const auto domain = [error domain];
        const char* domainC = domain ? [domain UTF8String] : nullptr;
        std::cout << code << " " << domainC << std::endl;
    }
    if (m_audioInput && [m_captureSession canAddInput:m_audioInput]) {
        [m_audioInput retain];
        [m_captureSession addInput:m_audioInput];
    } else {
        std::cout << "Failed to create audio device input" << std::endl;
        return false;
    }
    if (!m_audioOutput) {
        m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput]) {
            [m_captureSession addOutput:m_audioOutput];
        } else {
            std::cout << "Failed to add audio output" << std::endl;
            return false;
        }
    }
    [m_captureSession commitConfiguration];
    return true;
}

void start() {
    std::cout << "Starting..." << std::endl;
    const bool updatingResult = updateAudioInput();
    if (!updatingResult) {
        std::cout << "Error, while updating audio input" << std::endl;
        return;
    }
    [m_captureSession startRunning];
}

void stop() {
    std::cout << "Stopping..." << std::endl;
    [m_captureSession stopRunning];
}

int main() {
    requestPermissions();
    m_captureSession = [[AVCaptureSession alloc] init];
    start();
    timer(5);
    stop();
    timer(10);
    start();
    timer(5);
    stop();
}
Replies: 1 · Boosts: 0 · Views: 247 · Activity: 3w
Anyone know the output power of the headphone jack of a MacBook Pro for each percentage of volume?
Hello! I'm trying to create a headphone safety prototype that gives warnings if I listen to music too loud, based on my headphones' impedance and sensitivity and a target SPL level. All I need is data on the amount of power output at each volume percentage (I'm assuming the MacBook Pro has a 1-100% volume scale). If anyone has this info, or can direct me to someone who has it, that would be great! Also, do I contact Apple support for things like this? I'm not too sure... Thanks!!
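In case it helps frame the request: Apple doesn't publish a per-percent power table, but if the output voltage at a given volume setting can be measured (or assumed), the resulting SPL can be estimated from the headphone specs. A rough sketch; the voltage, impedance, and sensitivity values below are assumptions you would substitute with your own measurements.

import Foundation

// Rough SPL estimate for headphones driven at a known output voltage.
// sensitivity is in dB SPL per 1 mW, impedance in ohms, outputVoltage in volts RMS.
// All inputs here are assumptions/measurements, not published Apple figures.
func estimatedSPL(outputVoltageRMS: Double, impedanceOhms: Double, sensitivityDBPerMilliwatt: Double) -> Double {
    let powerWatts = (outputVoltageRMS * outputVoltageRMS) / impedanceOhms   // P = V^2 / R
    let powerMilliwatts = powerWatts * 1000.0
    return sensitivityDBPerMilliwatt + 10.0 * log10(powerMilliwatts)         // dB SPL
}

// Example: 32-ohm headphones, 100 dB/mW sensitivity, 0.5 V RMS output (assumed).
let spl = estimatedSPL(outputVoltageRMS: 0.5, impedanceOhms: 32, sensitivityDBPerMilliwatt: 100)
print(String(format: "Estimated SPL: %.1f dB", spl))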
Replies: 0 · Boosts: 0 · Views: 137 · Activity: 3w
Capturing system audio no longer works with macOS Sequoia
Our capture application records system audio via a HAL plugin; however, with the latest macOS 15 Sequoia, all audio buffer values are zero. I am attaching sample code that replicates the problem. Compile it as a Command Line Tool application with Xcode.

STEPS TO REPRODUCE
Install the BlackHole 2ch audio driver: https://existential.audio/blackhole/download/?code=1579271348
Start some system audio, e.g. YouTube.
Compile and run the sample application.
On macOS up to Sonoma, you will hear audio via loopback and see audio values in the debug/console window. On macOS Sequoia, you will not hear audio and the audio values are 0.

#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudio.h>

#define BLACKHOLE_UID @"BlackHole2ch_UID"
#define DEFAULT_OUTPUT_UID @"BuiltInSpeakerDevice"

@interface AudioCaptureDelegate : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
@end

void setDefaultAudioDevice(NSString *deviceUID);

@implementation AudioCaptureDelegate

// Receive samples from the CoreAudio/HAL driver and print amplitude values for testing.
// This is where samples would normally be copied and passed downstream for further processing,
// which is not needed in this simple sample application.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Access the audio data in the sample buffer
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (!blockBuffer) {
        NSLog(@"No audio data in the sample buffer.");
        return;
    }
    size_t length;
    char *data;
    CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &length, &data);

    // Process the audio samples to calculate the average amplitude
    int16_t *samples = (int16_t *)data;
    size_t sampleCount = length / sizeof(int16_t);
    int64_t sum = 0;
    for (size_t i = 0; i < sampleCount; i++) {
        sum += abs(samples[i]);
    }

    // Calculate and log the average amplitude
    float averageAmplitude = (float)sum / sampleCount;
    NSLog(@"Average Amplitude: %f", averageAmplitude);
}

@end

// Set the default audio device to BlackHole while testing, or the speakers when done.
// Called by main.
void setDefaultAudioDevice(NSString *deviceUID) {
    AudioObjectPropertyAddress address;
    AudioDeviceID deviceID = kAudioObjectUnknown;
    UInt32 size;
    CFStringRef uidString = (__bridge CFStringRef)deviceUID;

    // Gets the device corresponding to the given UID.
    AudioValueTranslation translation;
    translation.mInputData = &uidString;
    translation.mInputDataSize = sizeof(uidString);
    translation.mOutputData = &deviceID;
    translation.mOutputDataSize = sizeof(deviceID);
    size = sizeof(translation);
    address.mSelector = kAudioHardwarePropertyDeviceForUID;
    address.mScope = kAudioObjectPropertyScopeGlobal; //????
    address.mElement = kAudioObjectPropertyElementMain;

    OSStatus status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &address, 0, NULL, &size, &translation);
    if (status != noErr) {
        NSLog(@"Error: Could not retrieve audio device ID for UID %@. Status code: %d", deviceUID, (int)status);
        return;
    }

    AudioObjectPropertyAddress propertyAddress;
    propertyAddress.mSelector = kAudioHardwarePropertyDefaultOutputDevice;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    status = AudioObjectSetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, sizeof(AudioDeviceID), &deviceID);
    if (status == noErr) {
        NSLog(@"Default audio device set to %@", deviceUID);
    } else {
        NSLog(@"Failed to set default audio device: %d", status);
    }
}

// Sets the BlackHole device as default and configures it as AVCaptureDeviceInput.
// Sets the speakers as loopback so we can hear what is being captured.
// Sets up a queue to receive capture samples.
// Runs the session for 30 seconds, then restores the speakers as default output.
int main(int argc, const char * argv[]) {
    @autoreleasepool {
        // Create the capture session
        AVCaptureSession *session = [[AVCaptureSession alloc] init];

        // Select the audio device
        AVCaptureDevice *audioDevice = nil;
        NSString *audioDriverUID = nil;
        audioDriverUID = BLACKHOLE_UID;
        setDefaultAudioDevice(audioDriverUID);
        audioDevice = [AVCaptureDevice deviceWithUniqueID:audioDriverUID];
        if (!audioDevice) {
            NSLog(@"Audio device %s not found!", [audioDriverUID UTF8String]);
            return -1;
        } else {
            NSLog(@"Using Audio device: %s", [audioDriverUID UTF8String]);
        }

        // Configure the audio input with the selected device (BlackHole)
        NSError *error = nil;
        AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
        if (error || !audioInput) {
            NSLog(@"Failed to create audio input: %@", error);
            return -1;
        }
        [session addInput:audioInput];

        // Configure the audio data output
        AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        AudioCaptureDelegate *delegate = [[AudioCaptureDelegate alloc] init];
        dispatch_queue_t queue = dispatch_queue_create("AudioCaptureQueue", NULL);
        [audioOutput setSampleBufferDelegate:delegate queue:queue];
        [session addOutput:audioOutput];

        // Set audio settings
        NSDictionary *audioSettings = @{
            AVFormatIDKey: @(kAudioFormatLinearPCM),
            AVSampleRateKey: @48000,
            AVNumberOfChannelsKey: @2,
            AVLinearPCMBitDepthKey: @16,
            AVLinearPCMIsFloatKey: @NO,
            AVLinearPCMIsNonInterleaved: @NO
        };
        [audioOutput setAudioSettings:audioSettings];

        AVCaptureAudioPreviewOutput *loopback_output = nil;
        loopback_output = [[AVCaptureAudioPreviewOutput alloc] init];
        loopback_output.volume = 1.0;
        loopback_output.outputDeviceUniqueID = DEFAULT_OUTPUT_UID;
        [session addOutput:loopback_output];
        const char *deviceID = loopback_output.outputDeviceUniqueID ? [loopback_output.outputDeviceUniqueID UTF8String] : "nil";
        NSLog(@"session addOutput for preview/loopback: %s", deviceID);

        // Start the session
        [session startRunning];
        NSLog(@"Capturing audio data for 30 seconds...");
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:30.0]];

        // Stop the session
        [session stopRunning];
        NSLog(@"Capture session stopped.");
        setDefaultAudioDevice(DEFAULT_OUTPUT_UID);
    }
    return 0;
}
Replies: 4 · Boosts: 0 · Views: 286 · Activity: 3w
Play audio data coming from serial port
Hi. I want to read ADPCM-encoded audio data coming from an external device to my Mac via serial port (/dev/cu.usbserial-0001) in 256-byte chunks, and feed it into an audio player. So far I am using Swift and SwiftSerial (GitHub - mredig/SwiftSerial: a Swift Linux and Mac library for reading and writing to serial ports) to get the data via serialPort.asyncBytes() into an AsyncStream, but I am struggling to understand how to feed the stream to an AVAudioPlayer or similar. I am new to Swift and macOS audio development, so any help to get me on the right track is greatly appreciated. Thx
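One common pattern, sketched below under the assumption that the ADPCM chunks have already been decoded to 16-bit PCM at a known sample rate (AVAudioPlayer expects complete files, so a streaming node is the more natural fit): wrap each decoded chunk in an AVAudioPCMBuffer and schedule it on an AVAudioPlayerNode. The decode step, sample rate, and channel count are placeholders.

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
// Assumed stream properties after ADPCM decoding: 16 kHz, mono.
// The player node is connected with the engine's standard float format,
// so decoded Int16 samples are converted to Float32 before scheduling.
let format = AVAudioFormat(standardFormatWithSampleRate: 16_000, channels: 1)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try! engine.start()
player.play()

// Call this for every decoded chunk of 16-bit samples coming off the serial stream.
func enqueue(samples: [Int16]) {
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples.count)) else { return }
    buffer.frameLength = AVAudioFrameCount(samples.count)
    let channel = buffer.floatChannelData![0]
    for (i, s) in samples.enumerated() {
        channel[i] = Float(s) / Float(Int16.max)   // scale to -1.0 ... 1.0
    }
    player.scheduleBuffer(buffer, completionHandler: nil)
}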
Replies: 0 · Boosts: 0 · Views: 132 · Activity: 3w
Get audio volume from microphone
Hello. We are trying to get the audio volume from the microphone. We have two questions.

1. Can anyone tell me about AVAudioEngine.InputNode.volume? We expected it to return 0 in silence and a float value up to 1.0 depending on the volume, but it looks like 1.0 (the default value) is returned at all times. In which cases does it return 0.5 or 0? Sample code is below; the microphone works correctly.

// instance members
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!

// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()

// volume getter
print("\(self.node.volume)")

2. What is the best practice to get the audio volume from the microphone? Requirements: without AVAudioRecorder (we use the engine for streaming audio), and it should withstand high-frequency access.

Testing info: device iPhone XR, OS version iOS 18.

Best Regards.
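For what it's worth, inputNode.volume appears to act as a gain/mixing control rather than a live level meter, so a common way to measure the incoming signal is to install a tap on the input node and compute the RMS of each buffer. A minimal sketch; the buffer size and the RMS-to-dB conversion are choices, not requirements.

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Tap the input and compute an RMS level per buffer (0.0 = silence, up to ~1.0).
input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let channelData = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    var sumOfSquares: Float = 0
    for i in 0..<frameCount {
        let sample = channelData[i]
        sumOfSquares += sample * sample
    }
    let rms = sqrt(sumOfSquares / Float(max(frameCount, 1)))
    let decibels = 20 * log10(max(rms, .leastNonzeroMagnitude))   // avoid log(0)
    print("RMS: \(rms), dBFS: \(decibels)")
}

engine.prepare()
try! engine.start()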
Replies: 2 · Boosts: 0 · Views: 294 · Activity: 3w
SFCustomLanguageModelData.CustomPronunciation and X-SAMPA string conversion
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation? I am following the example from WWDC23: https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/ For this kind of custom pronunciation we need the X-SAMPA string of the specific word. There are tools available on the web to do this. Word to IPA: https://openl.io/ IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo. For example, "Winawer" is converted to "w I n aU @r" in the demo, while the online tools give "/wI"nA:w@r/".
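For reference, a sketch of the WWDC23-style usage as I understand it, where the phoneme string is a space-separated sequence of X-SAMPA symbols rather than a slash-delimited IPA-style transcription; the identifier, version, and output path below are placeholders, and the exact builder API should be checked against the current Speech framework documentation.

import Speech

// Builds custom language model data containing one custom pronunciation.
// The phonemes array uses space-separated X-SAMPA, as in the WWDC23 sample.
let data = SFCustomLanguageModelData(
    locale: Locale(identifier: "en_US"),
    identifier: "com.example.customLM",          // placeholder identifier
    version: "1.0"
) {
    SFCustomLanguageModelData.CustomPronunciation(
        grapheme: "Winawer",
        phonemes: ["w I n aU @r"]
    )
}

// Export the model so it can be handed to the speech recognizer later.
let outputURL = URL(fileURLWithPath: "/tmp/customLM.bin")  // placeholder path
try await data.export(to: outputURL)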
Replies: 0 · Boosts: 1 · Views: 180 · Activity: 4w
MusicKit currentEntry.item is nil but currentEntry exists.
I'm trying to get the item that's assigned to the currentEntry when playing any song, but it currently comes up nil while the song is playing. Note that currentEntry returns: MusicPlayer.Queue.Entry(id: "evn7UntpH::ncK1NN3HS", title: "SONG TITLE"). I'm a bit stumped on the API usage: if the song is playing, how could the queue item be nil?

if queueObserver == nil {
    queueObserver = ApplicationMusicPlayer.shared.queue.objectWillChange
        .sink { [weak self] in
            self?.handleNowPlayingChange()
        }
}

private func handleNowPlayingChange() {
    if let currentItem = ApplicationMusicPlayer.shared.queue.currentEntry {
        if let song = currentItem.item {
            self.currentlyPlayingID = song.id
            self.objectWillChange.send()
            print("Song ID: \(song.id)")
        } else {
            print("NO ITEM: \(currentItem)")
        }
    } else {
        print("No Entries: \(ApplicationMusicPlayer.shared.queue.currentEntry)")
    }
}
Replies: 0 · Boosts: 0 · Views: 162 · Activity: 4w
Is there a recommended approach to safeguarding against audio recording losses via app crash?
AVAudioRecorder leaves a completely useless chunk of file if a crash happens while recording, and I need to be able to recover. I'm thinking of streaming the recording to disk. I know that's possible with AVAudioEngine, but I also know that API is a headache that will lead to unexpected crashes unless you're lucky (or you're the person who built it). Does Apple have a recommended strategy for failsafe audio recordings? I'm considering chunking recordings using many instances of AVAudioRecorder and then stitching those chunks together.
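A sketch of the streaming-to-disk idea mentioned above, assuming AVAudioEngine: install a tap on the input node and append each buffer to an AVAudioFile, so everything written before a crash remains on disk (a CAF file is generally still readable even if the process dies mid-write). The file location and format choices are illustrative.

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// CAF is convenient here because a partially written file is usually recoverable.
let url = FileManager.default.temporaryDirectory.appendingPathComponent("recording.caf")
let file = try! AVAudioFile(forWriting: url, settings: format.settings)

// Every buffer is written immediately, so a crash only loses the buffer in flight.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    do {
        try file.write(from: buffer)
    } catch {
        print("Failed to write buffer: \(error)")
    }
}

engine.prepare()
try! engine.start()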
Replies: 1 · Boosts: 0 · Views: 180 · Activity: Oct ’24
API to use for high-level audio playback to a specific audio device?
I'm working on a little light and sound controller in Swift, driving DMX lights and audio. For the audio portion, I need to play a bunch of looping sounds (long-duration MP3s), and occasionally play sound effects (short-duration sounds, varying formats). I want all of this mixed into selected channels on specific devices. That is, I might have one audio stream going to the left channel, and a completely different one going to the right channel. What's the right API to do this from Swift? Core Audio? AVPlayer stuff?
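One possible direction, sketched below for the device-selection half of the question: AVAudioEngine gives high-level mixing and playback (one AVAudioPlayerNode per sound, looping via repeated scheduling), and on macOS the engine's output can be pointed at a specific device by setting kAudioOutputUnitProperty_CurrentDevice on engine.outputNode.audioUnit. The deviceID is assumed to come from Core Audio device enumeration, which is omitted here; per-channel routing would additionally need mixer or channel-map configuration.

import AVFoundation
import AudioToolbox

// Route an AVAudioEngine's output to a specific macOS audio device.
// `deviceID` is assumed to have been found beforehand via Core Audio
// device enumeration (kAudioHardwarePropertyDevices).
func setOutputDevice(_ deviceID: AudioDeviceID, for engine: AVAudioEngine) {
    guard let outputUnit = engine.outputNode.audioUnit else {
        print("No underlying output audio unit available")
        return
    }
    var device = deviceID
    let status = AudioUnitSetProperty(outputUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &device,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    if status != noErr {
        print("Failed to set output device: \(status)")
    }
}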
Replies: 0 · Boosts: 0 · Views: 151 · Activity: Oct ’24
arm64 Logic Leaking Plugins (Not Calling AP_Close)
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects. I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep). Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
Replies: 1 · Boosts: 1 · Views: 169 · Activity: Oct ’24
Ford Puma Sync 3 problems with iOS 18
Good afternoon. Since I installed iOS 18 on my iPhone 15 Pro, I have had problems using Apple CarPlay with my Ford Puma with Sync 3. More in detail: problems with audio commands, selecting audio tracks, Bluetooth, etc. Are you aware of this? Thanks a lot. Regards, Alberto
Replies: 0 · Boosts: 0 · Views: 224 · Activity: Oct ’24
Can Logic Pro load an Audio Unit v3 in-process?
After investing more than a week into getting a bunch of audio unit projects converted into app + appex + framework, they all are now correctly loaded in-process in the demo host app that is part of Xcode's template. However, Logic Pro adamantly refuses to load them in-process. Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic? The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
Replies: 0 · Boosts: 0 · Views: 255 · Activity: Oct ’24
Error connecting AVAudioEngine to AVAudioPlayerNode with AVAudioPCMFormatInt16
Hi community, I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get this error:

AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"

If I understand error code -10868 correctly, the format is not supported. But how can I use the PCM Int16 format? Here is my method:

- (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
    if (self.isRunning) {
        return;
    }
    self.audioEngine = [[AVAudioEngine alloc] init];
    self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:self.audioPlayerNode];
    AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;
    self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16 sampleRate:sampleRate channels:channelCount interleaved:YES];
    NSLog(@"Audio Format: %@", self.audioFormat);
    NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
    NSLog(@"Audio Engine: %@", self.audioEngine);

    // Error on this line
    [self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];

    /**NSError *error = nil;
    if (![self.audioEngine startAndReturnError:&error]) {
        NSLog(@"Erreur lors de l'initialisation du moteur audio: %@", error);
        return;
    }
    [self.audioPlayerNode play];
    self.isRunning = YES;*/
}

Also, the audio engine does not seem to be running:

Audio Engine: ________ GraphDescription ________ AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1

Has anyone already used this format with AVAudioFormat? Thank you!
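A possible workaround, sketched under the assumption that the Int16 data ultimately needs to reach the mixer: the mixer input generally wants the standard deinterleaved Float32 format, so connect the player node with a float format and convert the Int16 buffers with AVAudioConverter before scheduling them. The sample rate and channel count below are placeholders.

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

// Source format: interleaved Int16 (what the decoder produces). Placeholder rate/channels.
let int16Format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 48_000, channels: 2, interleaved: true)!
// Mixer-friendly format: deinterleaved Float32 at the same rate and channel count.
let floatFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!

engine.connect(player, to: engine.mainMixerNode, format: floatFormat)
let converter = AVAudioConverter(from: int16Format, to: floatFormat)!

// Converts one Int16 buffer to Float32 and schedules it on the player node.
func schedule(int16Buffer: AVAudioPCMBuffer) {
    guard let floatBuffer = AVAudioPCMBuffer(pcmFormat: floatFormat, frameCapacity: int16Buffer.frameLength) else { return }
    do {
        try converter.convert(to: floatBuffer, from: int16Buffer)   // same rate, so the simple API works
        player.scheduleBuffer(floatBuffer, completionHandler: nil)
    } catch {
        print("Conversion failed: \(error)")
    }
}

try! engine.start()
player.play()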
Replies: 1 · Boosts: 0 · Views: 199 · Activity: Oct ’24
Missing for a decade, full quality + remote control
Is anyone developing a way for users to wirelessly control an iOS or iPadOS device that is playing Apple Music to a DAC (via USB to an amp) from another iOS or iPadOS device? Specifically, full control. Not Accessibility, not Apple TV, not HomePods, not firmware-downgraded AirPort Expresses to a DAC, or the other hacks mentioned over the past decade. This "connect"-like feature has long been desired by audiophiles who want exclusive mode on a device that offers it (iOS/iPadOS) but want to control it while sitting on a couch or in a wheelchair across the room. Exclusive mode is the key feature iOS and iPadOS offer that is desired here, combined with full (or nearly full) Apple Music control.
Replies: 2 · Boosts: 0 · Views: 179 · Activity: Oct ’24
How to observe AVCaptureDevice.DiscoverySession devices property?
In the Apple Developer documentation, https://developer.apple.com/documentation/avfoundation/avcapturedevice/discoverysession, you can find the sentence "You can also key-value observe this property to monitor changes to the list of available devices." But how do you use it? I tried it with the code below and tested it on my MacBook with EarPods. When I disconnect the EarPods, nothing happens. MacBook Air M2, macOS Sequoia 15.0.1, Xcode 16.0.

import Foundation
import AVFoundation

let discovery_session = AVCaptureDevice.DiscoverySession.init(deviceTypes: [.microphone], mediaType: .audio, position: .unspecified)
let devices = discovery_session.devices
for device in devices {
    print(device.localizedName)
}

let device = devices[0]
let observer = Observer()
discovery_session.addObserver(observer, forKeyPath: "devices", options: [.new, .old], context: nil)

let input = try! AVCaptureDeviceInput(device: device)
let queue = DispatchQueue(label: "queue")
var output = AVCaptureAudioDataOutput()
let delegate = OutputDelegate()
output.setSampleBufferDelegate(delegate, queue: queue)

var session = AVCaptureSession()
session.beginConfiguration()
session.addInput(input)
session.addOutput(output)
session.commitConfiguration()
session.startRunning()

let group = DispatchGroup()
let q = DispatchQueue(label: "", attributes: .concurrent)
q.async(group: group, execute: DispatchWorkItem() {
    sleep(10)
    session.stopRunning()
})
_ = group.wait(timeout: .distantFuture)

class Observer: NSObject {
    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
        print("Change")
        if keyPath == "devices" {
            if let newDevices = change?[.newKey] as? [AVCaptureDevice] {
                print("New devices: \(newDevices.map { $0.localizedName })")
            }
            if let oldDevices = change?[.oldKey] as? [AVCaptureDevice] {
                print("Old devices: \(oldDevices.map { $0.localizedName })")
            }
        }
    }
}

class OutputDelegate: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("Output")
    }
}
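For comparison, a sketch of the same observation using Swift's block-based KVO, which avoids string key paths and makes the lifetime of the observation explicit. Whether the property actually fires for a given unplug event still depends on the session and device setup, so this is an illustration rather than a confirmed fix.

import AVFoundation

let discoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.microphone],
    mediaType: .audio,
    position: .unspecified
)

// Keep a strong reference to the token; the observation stops when it is deallocated.
let token = discoverySession.observe(\.devices, options: [.old, .new]) { _, change in
    let oldNames = (change.oldValue ?? []).map { $0.localizedName }
    let newNames = (change.newValue ?? []).map { $0.localizedName }
    print("Devices changed: \(oldNames) -> \(newNames)")
}

// Keep the process (and `token`) alive long enough to observe plug/unplug events.
RunLoop.main.run(until: Date(timeIntervalSinceNow: 30))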
Replies: 6 · Boosts: 0 · Views: 279 · Activity: Oct ’24