AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

431 Posts
Post not yet marked as solved
1 Replies
230 Views
I use AVPlayerItemMetadataOutput for a live HLS audio stream; each segment is a 0.96-second AAC file containing ID3 metadata. In all previous versions of iOS, the AVPlayerItemMetadataOutput delegate method for this stream was called approximately every 0.96 seconds. This behaviour changed with iOS 15.4.1: the delegate method is now called exactly every 1 second, resulting in a delay in reading the metadata for each segment. Example:

time(sec)----|0___________1___________2___________3______
segments-----|[segment_1][segment_2][segment_3][segment_4]
             |^----------^----------^----------^---------
iOS 15.2-----|call_1     call_2     call_3     call_4
             |^-----------^-----------^-----------^------
iOS 15.4.1---|call_1      call_2      call_3      call_4

As can be seen, call_4 fires well after segment_4 starts playing; in all previous versions of iOS it fired simultaneously with the start of segment_4 playback. The AVMetadataItem.time property also reports the wrong time (see attached pictures). I tried setting the delegate on both the main queue and a background queue, with no success. Changing advanceIntervalForDelegateInvocation did not change this behavior.
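For reference, a minimal setup sketch for this kind of metadata output. The names `playerItem` and `metadataDelegate` are placeholders (they do not appear in the post); `advanceIntervalForDelegateInvocation` is the only documented knob for delivery timing, even though the report above says it did not change the iOS 15.4.1 cadence.

```swift
import AVFoundation

final class MetadataReceiver: NSObject, AVPlayerItemMetadataOutputPushDelegate {
    func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                        didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                        from track: AVPlayerItemTrack?) {
        // Each group carries the ID3 items for one segment, stamped with
        // the time they apply to on the item's timeline.
        for group in groups {
            print(group.startDate ?? group.timeRange.start, group.items)
        }
    }
}

let metadataDelegate = MetadataReceiver()                        // placeholder
let playerItem = AVPlayerItem(url: URL(string: "https://example.com/live.m3u8")!) // placeholder
let metadataOutput = AVPlayerItemMetadataOutput(identifiers: nil) // nil = all identifiers
metadataOutput.setDelegate(metadataDelegate, queue: .main)
// Ask for delivery this many seconds ahead of the presentation time.
metadataOutput.advanceIntervalForDelegateInvocation = 2.0
playerItem.add(metadataOutput)
```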
Posted
by aaass1.
Last updated
.
Post not yet marked as solved
4 Replies
938 Views
Here is a simple app to demonstrate the problem:

import SwiftUI
import AVFoundation

struct ContentView: View {
    var synthVM = SpeakerViewModel()
    var body: some View {
        VStack {
            Text("Hello, world!")
                .padding()
            HStack {
                Button("Speak") {
                    if self.synthVM.speaker.isPaused {
                        self.synthVM.speaker.continueSpeaking()
                    } else {
                        self.synthVM.speak(text: "Привет на корабле! Кто это пришел к нам, чтобы посмотреть на это произведение?")
                    }
                }
                Button("Pause") {
                    if self.synthVM.speaker.isSpeaking {
                        self.synthVM.speaker.pauseSpeaking(at: .word)
                    }
                }
                Button("Stop") {
                    self.synthVM.speaker.stopSpeaking(at: .word)
                }
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

class SpeakerViewModel: NSObject {
    var speaker = AVSpeechSynthesizer()

    override init() {
        super.init()
        self.speaker.delegate = self
    }

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "ru")
        speaker.speak(utterance)
    }
}

extension SpeakerViewModel: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        print("started")
    }
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didPause utterance: AVSpeechUtterance) {
        print("paused")
    }
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didContinue utterance: AVSpeechUtterance) {}
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {}
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
        guard let rangeInString = Range(characterRange, in: utterance.speechString) else { return }
        print("Will speak: \(utterance.speechString[rangeInString])")
    }
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("finished")
    }
}

On the simulator everything works fine, but on a real device many strange words appear in the synthesized speech, and the willSpeakRangeOfSpeechString output differs between the simulator and a real device.

Simulator:
started
Will speak: Привет
Will speak: на
Will speak: корабле!
Will speak: Кто
Will speak: это
Will speak: пришел
Will speak: к
Will speak: нам,
Will speak: чтобы
Will speak: посмотреть
Will speak: на
Will speak: это
Will speak: произведение?
finished

The iPhone output has errors:
2021-10-12 17:09:32.613273+0300 VoiceTest[9027:203522] [AXTTSCommon] Broken user rule: \b([234567890]+)2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0]) > Error Domain=NSCocoaErrorDomain Code=2048 "The value “\b([234567890]+)2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0])” is invalid." UserInfo={NSInvalidValue=\b([234567890]+)2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0])}
2021-10-12 17:09:32.613548+0300 VoiceTest[9027:203522] [AXTTSCommon] Broken user rule: \b(1\d+)2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0]) > Error Domain=NSCocoaErrorDomain Code=2048 "The value “\b(1\d+)2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0])” is invalid." UserInfo={NSInvalidValue=\b(1\d+)2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0])}
2021-10-12 17:09:32.613725+0300 VoiceTest[9027:203522] [AXTTSCommon] Broken user rule: \b2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0]) > Error Domain=NSCocoaErrorDomain Code=2048 "The value “\b2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0])” is invalid." UserInfo={NSInvalidValue=\b2 (мили|кварты|чашки|{столовых }ложки)(?=$|\s|[[:punct:]»\xa0])}
started
Will speak: Привет
Will speak: на
Will speak: ивет на корабле!
Will speak: Кто
Will speak: это
Will speak: Кто это пришел
Will speak: к
Will speak: нам,
Will speak: чтобы
Will speak: посмотреть
Will speak: на
Will speak: реть на это
Will speak: на это произведение?
finished

The error appears on iOS/iPadOS 15.0, 15.0.1, 15.0.2, and 14.7, but everything works fine on 14.8. It looks like an engine error. How can I fix this issue?
Posted
by sanctor.
Last updated
.
Post not yet marked as solved
2 Replies
249 Views
Hi, I am migrating an audio project using AVAudioEngine from Objective-C to Swift. All objects are the same, just written in Swift instead of Objective-C. The application uses the microphone input and analyses it via a render-callback process calling the FFT from the Accelerate framework. The session has significant CPU usage:
- The Objective-C version keeps CPU usage at 8%.
- The Swift version averages 20%.
I have isolated the code responsible for the increase; it is a method that analyses the returned samples in order to summarize them.
Parameters:
- arrMagCount = 12
- arrMag is an array of Float containing the magnitude of the frequency range related to each index
- samplesCount = 4096
- samples is a pointer to an array delivering the magnitude of each bin used
Result: each arrMag[index] is updated with the magnitude related to the range of frequencies underlying the index. The Objective-C version calls a C function; the Swift version was developed in Swift, since we cannot mix C code and Swift code in the same source file.

func arrMagFill(arrMagCount: Int, samplesCount: Int, samples: UnsafePointer<Double>, arrMag: UnsafeMutablePointer<Float>, maxFrequency: CGFloat) {
    // Zeroes arrMag. (Note: memset takes a byte count, so the element size
    // must be included; the original memset(arrMag, 0, arrMagCount) only
    // cleared arrMagCount bytes.)
    memset(arrMag, 0, arrMagCount * MemoryLayout<Float>.stride)
    for iBin in 0..<samplesCount {
        for iLoop in 0..<arrMagCount {
            if maxFrequency < g.FreqMetersRangeMaxValue[iLoop] {
                arrMag[iLoop] = Float(samples[iBin])
                break
            }
        }
    }
}

When I use equivalent C code via an Objective-C bridging header, CPU usage is back to 8–12% on average. Is there a way to obtain the same performance directly in Swift?
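As a sketch of one direction (not a confirmed fix for the CPU difference): the inner scan over the frequency ranges depends only on maxFrequency, so it can be hoisted out of the per-bin loop, and the raw pointers can be wrapped in buffer views so the optimizer sees their extents. The name freqRangeMax stands in for the poster's g.FreqMetersRangeMaxValue and is an assumption; the function keeps the original's semantics.

```swift
import Foundation

// Restructured fill loop: find the destination bucket once, then walk the
// samples through an UnsafeBufferPointer view instead of repeated raw
// pointer subscripts inside a nested loop.
func arrMagFill(samples: UnsafePointer<Double>, samplesCount: Int,
                arrMag: UnsafeMutablePointer<Float>, arrMagCount: Int,
                maxFrequency: Double, freqRangeMax: [Double]) {
    // Zero the whole Float array (element-wise, so no byte-count mistakes).
    for i in 0..<arrMagCount { arrMag[i] = 0 }
    // The bucket chosen by the inner loop depends only on maxFrequency,
    // so compute it once instead of re-scanning for every bin.
    guard let bucket = freqRangeMax.firstIndex(where: { maxFrequency < $0 }),
          bucket < arrMagCount else { return }
    let sampleBuf = UnsafeBufferPointer(start: samples, count: samplesCount)
    for value in sampleBuf {
        arrMag[bucket] = Float(value)   // same effect as the original loop
    }
}
```

Note that, as in the original, only one bucket is ever written and it ends up holding the last sample; if the intent was to bucket each bin by its own frequency, the lookup would need to use the bin's frequency rather than maxFrequency.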
Posted
by JMDelh.
Last updated
.
Post not yet marked as solved
0 Replies
169 Views
I have a Unity project that plays sound even when silent mode is on. How do I stop that behavior and only play game audio when silent mode is off?
Posted
by senna125.
Last updated
.
Post not yet marked as solved
4 Replies
3.1k Views
I have the following code:

let ciImage = filterAndRender(ciImage: inputImage, doCrop: true)
let outCGImage = ciContext.createCGImage(ciImage, from: ciImage.extent)!
let dest = CGImageDestinationCreateWithURL(output.renderedContentURL as CFURL, kUTTypeJPEG, 1, nil)!
CGImageDestinationAddImage(dest, outCGImage, [kCGImageDestinationLossyCompressionQuality as String: 1] as CFDictionary)
CGImageDestinationFinalize(dest)

I get the following warning: "'kUTTypeJPEG' was deprecated in iOS 15.0: Use UTTypeJPEG instead." However, when I substitute 'UTTypeJPEG' as directed, I get this error: "Cannot find 'UTTypeJPEG' in scope". What should I use instead of kUTTypeJPEG? Thanks!
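For reference, a sketch of where the replacement lives: UTTypeJPEG is the C constant, but in Swift the equivalent is UTType.jpeg from the UniformTypeIdentifiers framework (iOS 14+), whose string identifier can be bridged to the CFString that CGImageDestinationCreateWithURL expects. The url parameter here is a placeholder standing in for output.renderedContentURL.

```swift
import Foundation
import ImageIO
import UniformTypeIdentifiers  // provides UTType.jpeg in Swift

// Create a JPEG image destination without the deprecated kUTTypeJPEG.
func makeJPEGDestination(at url: URL) -> CGImageDestination? {
    CGImageDestinationCreateWithURL(url as CFURL,
                                    UTType.jpeg.identifier as CFString,
                                    1, nil)
}
```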
Posted
by Eric.app.
Last updated
.
Post not yet marked as solved
1 Replies
235 Views
For reasons too convoluted to go into, I need to 'tap' the final audio going to the output device (speaker, headphones, etc.) from within the app I'm working on (I just need to grab my own audio). It would be really important not to have to know the structures in the app and tie into them. It seems like AVAudioEngine / installTap(onBus:) is the way to go, but it's unclear what exactly the input would be. I have implemented all of the AVAudioBufferList -> CMSampleBufferRef code through to the expected output, and all I get is silence. It seemed like just attaching to the mainMixerNode should have done it, but now it seems like a "device" input would be necessary. So the questions are: Is AVAudioEngine the SDK for this task? If not, then what? If yes, then what AVAudioNode would serve as an input? Is there an AVAudioUnit that is just the 'current device output'? I'm not looking for a complete solution, just the last few lines that complete this connection, or a pointer to the framework that could accomplish this.
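A minimal sketch of the mainMixerNode approach described above, under the assumption that all of the app's audio is rendered through this one engine (a tap on mainMixerNode only sees audio from this engine's graph; audio played through other APIs in the same app will not appear in it, which would produce exactly the silence described).

```swift
import AVFoundation

let engine = AVAudioEngine()
let mixer = engine.mainMixerNode

// Tap the mixer with its own output format to receive the final app mix.
let format = mixer.outputFormat(forBus: 0)
mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, time in
    // buffer is an AVAudioPCMBuffer containing the mixed output;
    // convert to CMSampleBuffer here if needed.
}

do {
    try engine.start()
} catch {
    print("engine failed to start: \(error)")
}
```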
Posted Last updated
.
Post not yet marked as solved
2 Replies
572 Views
I'm trying to play some non-DRM video from an HLS URL, but AVPlayer is not playing the video. I'm getting exactly this error: "Player Item Error: Optional("The operation couldn’t be completed. (CoreMediaErrorDomain error -12971.)")". I have no idea what is happening here; also, what does error -12971 mean exactly?
Posted
by asifMimi.
Last updated
.
Post marked as solved
3 Replies
327 Views
I have an AVFoundation-based live camera view. There is a button by which I am calling AVCaptureDevice.showSystemUserInterface(.videoEffects) so that the user can activate the Portrait Effect. I have also opted in by setting "Camera — Opt in for Portrait Effect" to true in info.plist. However, upon tapping on the button I see this screen (The red crossed-off part is the app name): I am expecting to see something like this: Do you have any idea why that might be?
Posted
by Asteroid.
Last updated
.
Post not yet marked as solved
0 Replies
118 Views
Hi, in the App Store version, some users experienced no sound when playing local audio. Looking at the logs, we found the following two main problems:
1. The readyToPlay-to-play time exceeds 10 s.
2. AVPlayer reports "the operation could not be completed".
Anyone with relevant experience, please answer. Thank you!
Posted Last updated
.
Post not yet marked as solved
0 Replies
121 Views
I am creating a fixed-focus camera app with the focus distance at infinity (or at least 30+ feet away). When I set lensPosition to 1.0, the images were blurry. Some tests letting autofocus do the job showed a lensPosition of about 0.808 for my wide and telephoto lenses and 0.84 for the ultra wide lens did the trick. (iPhone 13 Max) Will the lensPosition to focus at infinity vary between devices and lenses on that device? Is there a way to determine the appropriate lensPosition at run time?
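On the calibration point: lensPosition is a normalized 0...1 value whose mapping to physical focus distance is not calibrated and can differ between devices and between lenses on the same device, so a hard-coded constant like 0.808 is unlikely to be portable. One approach (a sketch, not a guaranteed method) is to let autofocus settle on a distant target once at run time, then lock focus at the position it found:

```swift
import AVFoundation

// Lock focus wherever autofocus currently left the lens. Reading
// device.lensPosition after autofocus settles on a far subject gives a
// per-device, per-lens value that could be persisted and reused.
func lockFocusAtCurrentPosition(of device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    device.setFocusModeLocked(lensPosition: device.lensPosition,
                              completionHandler: nil)
    device.unlockForConfiguration()
}
```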
Posted
by Todd2.
Last updated
.
Post not yet marked as solved
0 Replies
131 Views
I am creating an AVCaptureDeviceInput using an audio driver (a user-land driver) which has 6 channels (5.1). The audio driver captures the system's audio. I am creating an AVCaptureAudioDataOutput using a stream description of 2 channels. I then add the AVCaptureDeviceInput and AVCaptureAudioDataOutput to an AVCaptureSession and write the sample buffers from the AVCaptureAudioDataOutput to a file. I play a 5.1 file on my system and my sample app writes it to a file. The recorded audio has 2 channels, per the stream description, and the recorded file has all 5.1 channels folded into a stereo file (e.g. left front and rear in the left channel; right front and rear in the right channel). My query is: who handles the mixing here? Thank you.
Posted
by Deepa Pai.
Last updated
.
Post not yet marked as solved
0 Replies
134 Views
Hi, for a project I need to implement a custom player which loads multiple AC3 files as tracks and mixes them using a linear mixer, according to gains delivered by another module. The mixing algorithm is quite simple and follows this formula for three tracks:
output value = gain 1 * input value 1 + gain 2 * input value 2 + gain 3 * input value 3
where each gain is 0.33 when the tracks should be equally loud, since the output value must not exceed 1.0. So far I have used AVFoundation with multiple AVAudioPlayerNodes and AVAudioMixerNode(s). The issue is that I don't know exactly how the mixer works. What I actually need is to write a custom mixer node, or find any other solution that achieves the desired outcome. Is this possible using AVFoundation, or do I need another framework? Thank you and best regards, Stefan
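The linear mix in the formula above is straightforward to express directly on sample buffers. This sketch is not how AVAudioMixerNode works internally (that is undocumented); it just shows the stated formula applied per sample across N tracks, with a final clamp to full scale:

```swift
// Mix N equal-length sample arrays with per-track gains:
// out[i] = g1*t1[i] + g2*t2[i] + ... , clamped to [-1, 1].
func mix(tracks: [[Float]], gains: [Float]) -> [Float] {
    guard let count = tracks.map(\.count).min() else { return [] }
    var out = [Float](repeating: 0, count: count)
    for (track, gain) in zip(tracks, gains) {
        for i in 0..<count {
            out[i] += gain * track[i]
        }
    }
    // Guard against the weighted sum exceeding full scale.
    return out.map { min(max($0, -1), 1) }
}
```

With gains of 0.33 for three tracks the sum stays within ±0.99, matching the constraint in the post; in a real player the same loop would run inside a render callback over AVAudioPCMBuffer channel data.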
Posted Last updated
.
Post not yet marked as solved
1 Replies
525 Views
I've got a fairly old app that I first built back in 2014, and have been maintaining over the years. One of the screens in the app uses the camera to scan barcodes. The barcodes we scan are printed on labels, and are in the org.iso.Code128 format. When the app was first developed, I simply set the metadataObjectTypes property of AVCaptureMetadataOutput to all available types. This worked fine. The app scanned our barcodes very quickly with almost no issues. In 2017, we started seeing issues where barcode scanning was becoming slower. I reasoned that having it configured to scan for every possible barcode format might be the issue, so I went in and changed the code to have it scan only for the org.iso.Code128 format. This helped, for a while. Now, we're seeing the problem again in iOS 14 devices. On some devices it is nearly impossible to scan the barcodes. On others, you have to wait 20 or 30 seconds before the barcode is recognized. What could be causing this issue?
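A commonly suggested mitigation (an assumption on my part, not a confirmed fix for the iOS 14 regression) is to narrow both the symbology list and the scan region, so the detector processes less of each frame. The on-screen rectangle below is a hypothetical value; rectOfInterest uses the metadata output's coordinate space, so the preview layer's conversion method is used.

```swift
import AVFoundation

// Restrict scanning to Code 128 within a specific region of the preview.
func configureScanning(metadataOutput: AVCaptureMetadataOutput,
                       previewLayer: AVCaptureVideoPreviewLayer) {
    metadataOutput.metadataObjectTypes = [.code128]
    // Layer-space rect where the barcode label is expected (hypothetical).
    let onScreenRegion = CGRect(x: 20, y: 200, width: 335, height: 120)
    metadataOutput.rectOfInterest =
        previewLayer.metadataOutputRectConverted(fromLayerRect: onScreenRegion)
}
```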
Posted
by flarosa.
Last updated
.
Post not yet marked as solved
6 Replies
1.5k Views
I'm trying to create a scaled-down version of a video selected from the user's photo album. The max dimensions of the output will be 720p. Therefore, when retrieving the video, I'm using .mediumQualityFormat as the deliveryMode. This causes iOS to retrieve a 720p video from iCloud if the original video or its medium-quality version doesn't exist on the user's device.

let videoRequestOptions = PHVideoRequestOptions()
videoRequestOptions.deliveryMode = .mediumQualityFormat
videoRequestOptions.isNetworkAccessAllowed = true
PHImageManager.default().requestAVAsset(forVideo: asset, options: videoRequestOptions) { (asset, audioMix, info) in
    // Process the asset
}

The problem is, when I use AVAssetExportSession to create a scaled-down version of the asset, if the asset is a medium variant and not the original version, the export process fails immediately with the following error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-17507), NSLocalizedDescription=İşlem tamamlanamadı, NSUnderlyingError=0x283bbcf60 {Error Domain=NSOSStatusErrorDomain Code=-17507 "(null)"}}

I couldn't find anything about the meaning of this error anywhere. When I set the deliveryMode property to .auto or .highQualityFormat, everything works properly. When I checked the asset URLs, I noticed that if the video has been retrieved from iCloud, its filename has a ".medium" postfix, as in this example:

file:///var/mobile/Media/PhotoData/Metadata/PhotoData/CPLAssets/group338/191B2348-5E19-4A8E-B15C-A843F9F7B5A3.medium.MP4

The weird thing is, if I use FileManager to copy the video at this URL to another directory, create a new AVAsset from that file, and use that asset when creating the AVAssetExportSession instance, the problem goes away. I'd really appreciate it if someone could provide some insight into what the problem could be.

This is how I use AVAssetExportSession to create a scaled-down version of the original video:

let originalVideoURL = "The url of the asset retrieved from requestAVAsset"
let outputVideoPath = NSTemporaryDirectory() + "encodedVideo.mp4"
let outputVideoURL = URL(fileURLWithPath: outputVideoPath)
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality),
      let videoTrack = asset.tracks(withMediaType: .video).first else {
    handleError()
    return
}
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = scaledSize
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
let transform = videoTrack.preferredTransform
layerInstruction.setTransform(transform, at: .zero)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(start: .zero, duration: asset.duration)
instruction.layerInstructions = [layerInstruction]
videoComposition.instructions = [instruction]
exportSession.videoComposition = videoComposition
exportSession.outputURL = outputVideoURL
exportSession.outputFileType = .mp4
exportSession.shouldOptimizeForNetworkUse = true
exportSession.exportAsynchronously(completionHandler: { [weak self] in
    guard let self = self else { return }
    if let url = exportSession.outputURL, exportSession.status == .completed {
        // Works for local videos
    } else {
        // Fails with error code -17507 when loading videos with delivery size "Medium"
    }
})
Posted
by cihant.
Last updated
.
Post not yet marked as solved
0 Replies
204 Views
I have an issue with updating MPNowPlayingInfoCenter: when trying to read nowPlayingInfo, the call never returns, blocking the current thread indefinitely. I'm updating MPNowPlayingInfoCenter on the main thread, which results in an app freeze.

func staticUpdate() {
    logger.log(.debug, "start static update")
    infoCenter.nowPlayingInfo = nowPlayingInfo
    logger.log(.debug, "end static update")
}

func dynamicUpdate() {
    logger.log(.debug, "start update - read")
    var mpInfo = infoCenter.nowPlayingInfo ?? [String: Any]()
    logger.log(.debug, "start update - write")
    ...
    infoCenter.nowPlayingInfo = mpInfo
    logger.log(.debug, "end update")
}

/*
2022-04-25 09:28:19.051435+0200 [Debug] [main] [NowPlayingInfoCenterController.swift:128] start static update
2022-04-25 09:28:19.051834+0200 [Debug] [main] [NowPlayingInfoCenterController.swift:130] end static update
2022-04-25 09:28:19.052251+0200 [Debug] [main] [NowPlayingInfoCenterController.swift:186] start update - read
*/

I'm overwriting nowPlayingInfo when the media changes, then updating it on any status change (progress, status, ...). Note the timestamps: we read ~1 ms after the write but never reach infoCenter.nowPlayingInfo = mpInfo. Questions:
- Shouldn't infoCenter.nowPlayingInfo always be readable?
- Can I update infoCenter from any queue? (This would solve only the app freeze.)
Posted
by tstudt.
Last updated
.
Post not yet marked as solved
0 Replies
162 Views
I am trying to mix the audio from 2 different hardware audio devices together in real time and record the result. Does anybody have any idea how to do this? This is on macOS. Things I have tried, and why they didn't work:

1. Adding 2 audio AVCaptureDevices to an AVCaptureMovieFileOutput or AVAssetWriter. This results in a file that has 2 audio tracks, which doesn't work for me for various reasons. Sure, I can mix them together with an AVAssetExportSession, but it needs to be real time.

2. Programmatically creating an aggregate device and recording it as an AVCaptureDevice. This "sort of" works, but it always results in a recording with strange channel issues. For example, if I combine a 1-channel mic and a 2-channel device, I get a recording with 3-channel audio (L R C). If I make an aggregate out of 2 stereo devices, I get a recording with quadraphonic sound (L R Ls Rs), which won't even play back on some players. If I always force it to stereo, all stereo tracks get turned to mono for some reason.

3. Programmatically creating an aggregate device and trying to use it in an AVAudioEngine. I've had multiple problems with this, but the main one is that when the aggregate device is an input node, it only reports the format of its main device and no sub-devices, and I can't force it to be 3 or 4 channels without errors.

4. Using an AVCaptureSession to output the sample buffers of both devices, then converting those samples and feeding them into their own AVAudioPlayerNodes, and mixing those nodes with an AVAudioEngine mixer. This actually works, but the resulting audio lags so far behind real time that it is unusable. If I record a webcam video along with the audio, the lip sync is off by about half a second.

I really need help with this. If anybody has a way to do this, let me know. Some caveats that have also been tripping me up:
- The hardware devices that need to be recorded might not be the default input device for the system. The MBP built-in mic might be the default device, but I need to record 2 other devices and exclude the built-in mic.
- The devices usually don't have the same audio format. I might be mixing lpcm mono int16 interleaved with lpcm stereo float32 non-interleaved.
- It absolutely has to be real time and 1 single audio track.

It shouldn't be this hard, right?
Posted
by nitro805.
Last updated
.
Post marked as solved
1 Replies
205 Views
I have an application that captures an image with a depth map and calibration data and exports them so I can work with them in Python. The depth map and calibration data are converted to Float32 and stored as a JSON file; the image is stored as a JPEG file. The depth map shape is (480, 640) and the image shape is (3024, 4032, 3). My goal is to create a point cloud from this data. I'm new to working with data provided by Apple's TrueDepth camera and would like some clarity on the preprocessing steps I need to perform before creating the point cloud. Here they are:
1) Since the 640x480 depth map is a scaled version of the 12 MP image, I can scale down the intrinsics as well. So I should scale [fx, fy, cx, cy] by the scaling factor 640/4032 = 0.15873?
2) After scaling comes taking care of the distortion: should I use lensDistortionLookupTable to undistort both the image and the depth map?
Are the above two questions correct, or am I missing something?
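On question (1): yes, for a uniformly downscaled image all four pinhole intrinsics scale by the same factor, because fx, fy, cx, and cy are all expressed in pixel units. A minimal sketch of that step (the Intrinsics struct is illustrative, not an AVFoundation type):

```swift
// Pinhole intrinsics in pixel units.
struct Intrinsics {
    var fx, fy, cx, cy: Double
}

// Scale intrinsics for an image resized by `factor` (e.g. 640/4032 when
// going from the 4032-wide reference frame to the 640-wide depth map).
func scaled(_ k: Intrinsics, by factor: Double) -> Intrinsics {
    Intrinsics(fx: k.fx * factor, fy: k.fy * factor,
               cx: k.cx * factor, cy: k.cy * factor)
}
```

With the shapes in the post, the principal point (2016, 1512) of the 4032x3024 frame maps to exactly (320, 240) in the 640x480 depth map, which is a quick sanity check that the factor is right.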
Posted Last updated
.
Post not yet marked as solved
0 Replies
110 Views
I've noticed that my Catalyst app uses around 65% CPU when capturing from the camera, even when everything else is idle. Disabling camera capture drops the CPU usage to 0 (middle of graph in screenshot). I profiled using Instruments, and found that the most heavy stack trace involved face tracking (see attached screenshot), even though no metadata output was added for the camera. Is this a bug in AVFoundation?
Posted
by jacobgorm.
Last updated
.
Post not yet marked as solved
0 Replies
164 Views
When I create an aggregate device with 2 hardware inputs and 1 output and I try to use it with AVAudioEngine, it fails to start. I get the error IsFormatSampleRateAndChannelCountValid(outputHWFormat) If I use an aggregate device with only 1 input/output, it works. The problem seems to stem from how aggregate devices handle channels. If I add a 2 channel device and a 1 channel device to the aggregate as inputs, I get an aggregate device with 3 channels. However, if I try and get the format of the input node, it only reports the format of the first device in the aggregate. So instead of saying the device has 3 channels, it will say it has 1 or 2 depending on which device is the main device. I've tried creating my own AVAudioFormat using channel layouts such as kAudioChannelLayoutTag_AAC_3_0, but this only works in very specific cases and is very unreliable. Can anybody help with this? It is driving me crazy. The main problem I am trying to solve is to combine/mix 2 hardware (or virtual hardware via HAL) audio devices in real-time for recording. An aggregate device alone doesn't work (see https://developer.apple.com/forums/thread/703258) Thanks for any help, you would save my day/week.
Posted
by nitro805.
Last updated
.
Post not yet marked as solved
1 Replies
911 Views
I've been wrestling with a problem related to AVAudioEngine. I've posted a couple of questions recently to other forums but haven't had much luck, so I'm guessing not many people are encountering this, the questions are unclear, or perhaps I'm not asking in the most appropriate subforums. So I thought I'd try here in the 'Concurrency' forum, as using concurrency would be one way to solve the problem.

The specific problem is that AVAudioPlayerNode.play() takes a long time to execute. The execution time seems to be proportional to the value of AVAudioSession.ioBufferDuration, and can range from a few milliseconds for low buffer durations to over 20 milliseconds at the default buffer duration. These execution times can be an issue in real-time applications such as games.

An obvious solution would be to move such operations to a background thread using GCD, and I've seen various posts and articles that do this. Here's a code example showing what I mean:

import AVFoundation
import UIKit

class ViewController: UIViewController {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let queue = DispatchQueue(label: "", qos: .userInitiated)

    override func viewDidLoad() {
        super.viewDidLoad()
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)
        try! engine.start()
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        queue.async {
            if self.player.isPlaying {
                self.player.stop()
            } else {
                self.player.play()
            }
        }
    }
}

In this scenario all AVAudioEngine-related operations would be serial and never concurrent/parallel; the audio system would never be accessed simultaneously from multiple threads, only serially. My concern is that I don't know whether it's safe to use AVAudioEngine in this way. More generally, I'm not sure what should be assumed about any API for which nothing specific is said about thread safety. In such cases, can it be assumed that access from multiple threads is safe as long as only one thread is active at any given time? (The 'Thread Programming Guide' touches on this, but doesn't appear to address audio frameworks specifically.)

The narrowest version of my question is whether it's safe to use GCD with AVAudioEngine, provided all access is serial. The broader question is what assumptions should or should not be made about APIs and thread safety when they're not specifically addressed in the documentation. Any input on either of these issues would be greatly appreciated.
Posted
by Jesse1.
Last updated
.