AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

432 Posts
Post not yet marked as solved
1 Reply
913 Views
I've been wrestling with a problem related to AVAudioEngine. I've posted a couple of questions recently to other forums, but haven't had much luck, so I'm guessing not many people are encountering this, or the questions are unclear, or perhaps I'm not asking in the most appropriate subforums. So, I thought I'd try here in the 'Concurrency' forum, as using concurrency would be one way to solve the problem.

The specific problem is that AVAudioPlayerNode.play() takes a long time to execute. The execution time seems to be proportional to the value of AVAudioSession.ioBufferDuration, and can be from a few milliseconds for low buffer durations to over 20 milliseconds at the default buffer duration. These execution times can be an issue in real-time applications such as games.

An obvious solution would be to move such operations to a background thread using GCD, and I've seen various posts and articles that do this. Here's a code example showing what I mean:

    import AVFoundation
    import UIKit

    class ViewController: UIViewController {
        private let engine = AVAudioEngine()
        private let player = AVAudioPlayerNode()
        private let queue = DispatchQueue(label: "", qos: .userInitiated)

        override func viewDidLoad() {
            super.viewDidLoad()
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: nil)
            try! engine.start()
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            queue.async {
                if self.player.isPlaying {
                    self.player.stop()
                } else {
                    self.player.play()
                }
            }
        }
    }

In this scenario all AVAudioEngine-related operations would be serial and never concurrent/parallel. The audio system would never be accessed simultaneously from multiple threads, only serially.

My concern is that I don't know whether it's safe to use AVAudioEngine in this way. More generally, I'm not sure what should be assumed about any API for which nothing specific is said about thread safety. In such cases, can it be assumed that access from multiple threads is safe as long as only one thread is active at any given time? (The 'Thread Programming Guide' touches on this, but doesn't appear to address audio frameworks specifically.)

The narrowest version of my question is whether it's safe to use GCD with AVAudioEngine, provided all access is serial. The broader question is what assumptions should or should not be made about APIs and thread safety when it's not specifically addressed in the documentation. Any input on either of these issues would be greatly appreciated.
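For what it's worth, the serial-confinement pattern the question describes can be sketched independently of AVAudioEngine: all access to a non-thread-safe resource goes through one serial queue, so calls are never concurrent even though they originate on many threads. The AudioBox type below is a hypothetical stand-in for the engine/player pair, not an Apple API:

```swift
import Dispatch

// Serial confinement: one serial queue owns the mutable state, so accesses
// are mutually exclusive regardless of which threads enqueue the work.
final class AudioBox {
    private let queue = DispatchQueue(label: "audio.serial", qos: .userInitiated)
    private var playing = false   // only ever touched on `queue`

    // Stand-in for the play()/stop() toggle from the touch handler above.
    func toggle(completion: ((Bool) -> Void)? = nil) {
        queue.async {
            self.playing.toggle()
            completion?(self.playing)
        }
    }

    // Synchronous read, still serialized through the same queue.
    func isPlaying() -> Bool {
        queue.sync { playing }
    }
}
```

Whether AVAudioEngine specifically tolerates being touched from whichever thread the queue happens to run on is exactly the open question; the sketch only shows that the calls themselves can never overlap.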
Posted by Jesse1. Last updated.
Post not yet marked as solved
0 Replies
186 Views
I am going to be using AVAudioPlayer to play sound effects and looping music in a game. I haven’t been able to find any recent discussion of what format to use (given I almost certainly need to compress it). https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW28 is deprecated (and talks about “AAC” but afconvert -hf shows at least 7 different formats that can be saved in a CAF). Has that guide been updated? Does hardware vs software playback still matter in iOS 9+? I’m not really worried about performance in terms of impacting frame rate.
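Absent an updated guide, the archived advice is usually summarized as: AAC for long or looping music (best compression, and hardware-assisted decode on older devices), and linear PCM or IMA4 in a .caf for short effects that may overlap (cheap to decode many at once). As a hedged summary only — the helper below is illustrative, not an Apple API or an official matrix:

```swift
// Conventional format guidance for game audio, encoded as a tiny helper.
enum GameSound {
    case shortEffect   // UI blips, impacts: several may play simultaneously
    case loopingMusic  // one long track at a time
}

func suggestedFormat(for sound: GameSound) -> String {
    switch sound {
    case .shortEffect:
        return "linear PCM or ima4 in .caf"  // trivial, cheap decode
    case .loopingMusic:
        return "aac in .m4a or .caf"         // best size; fine for one stream
    }
}
```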
Posted by ddunham. Last updated.
Post not yet marked as solved
1 Reply
227 Views
I am using AVFoundation for a live camera view. I can get my device from the current video input (of type AVCaptureDeviceInput) like:

    let device = videoInput.device

The device's active format has an isPortraitEffectSupported property. How can I set the Portrait Effect on and off in the live camera view? I set up the camera like this:

    private var videoInput: AVCaptureDeviceInput!
    private let session = AVCaptureSession()
    private(set) var isSessionRunning = false
    private var renderingEnabled = true
    private let videoDataOutput = AVCaptureVideoDataOutput()
    private let photoOutput = AVCapturePhotoOutput()
    private(set) var cameraPosition: AVCaptureDevice.Position = .front

    func configureSession() {
        sessionQueue.async { [weak self] in
            guard let strongSelf = self else { return }
            if strongSelf.setupResult != .success { return }
            let defaultVideoDevice: AVCaptureDevice? = strongSelf.videoDeviceDiscoverySession.devices.first(where: { $0.position == strongSelf.cameraPosition })
            guard let videoDevice = defaultVideoDevice else {
                print("Could not find any video device")
                strongSelf.setupResult = .configurationFailed
                return
            }
            do {
                strongSelf.videoInput = try AVCaptureDeviceInput(device: videoDevice)
            } catch {
                print("Could not create video device input: \(error)")
                strongSelf.setupResult = .configurationFailed
                return
            }
            strongSelf.session.beginConfiguration()
            strongSelf.session.sessionPreset = AVCaptureSession.Preset.photo

            // Add a video input.
            guard strongSelf.session.canAddInput(strongSelf.videoInput) else {
                print("Could not add video device input to the session")
                strongSelf.setupResult = .configurationFailed
                strongSelf.session.commitConfiguration()
                return
            }
            strongSelf.session.addInput(strongSelf.videoInput)

            // Add a video data output.
            if strongSelf.session.canAddOutput(strongSelf.videoDataOutput) {
                strongSelf.session.addOutput(strongSelf.videoDataOutput)
                strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
                strongSelf.videoDataOutput.setSampleBufferDelegate(self, queue: strongSelf.dataOutputQueue)
            } else {
                print("Could not add video data output to the session")
                strongSelf.setupResult = .configurationFailed
                strongSelf.session.commitConfiguration()
                return
            }

            // Add photo output.
            if strongSelf.session.canAddOutput(strongSelf.photoOutput) {
                strongSelf.session.addOutput(strongSelf.photoOutput)
                strongSelf.photoOutput.isHighResolutionCaptureEnabled = true
            } else {
                print("Could not add photo output to the session")
                strongSelf.setupResult = .configurationFailed
                strongSelf.session.commitConfiguration()
                return
            }
            strongSelf.session.commitConfiguration()
        }
    }

    func prepareSession(completion: @escaping (SessionSetupResult) -> Void) {
        sessionQueue.async { [weak self] in
            guard let strongSelf = self else { return }
            switch strongSelf.setupResult {
            case .success:
                strongSelf.addObservers()
                if strongSelf.photoOutput.isDepthDataDeliverySupported {
                    strongSelf.photoOutput.isDepthDataDeliveryEnabled = true
                }
                if let photoOrientation = AVCaptureVideoOrientation(interfaceOrientation: interfaceOrientation) {
                    if let unwrappedPhotoOutputConnection = strongSelf.photoOutput.connection(with: .video) {
                        unwrappedPhotoOutputConnection.videoOrientation = photoOrientation
                    }
                }
                strongSelf.dataOutputQueue.async {
                    strongSelf.renderingEnabled = true
                }
                strongSelf.session.startRunning()
                strongSelf.isSessionRunning = strongSelf.session.isRunning
                strongSelf.mainQueue.async {
                    strongSelf.previewView.videoPreviewLayer.session = strongSelf.session
                }
                completion(strongSelf.setupResult)
            default:
                completion(strongSelf.setupResult)
            }
        }
    }

Then I set isPortraitEffectsMatteDeliveryEnabled like this:

    func setPortraitAffectActive(_ state: Bool) {
        sessionQueue.async { [weak self] in
            guard let strongSelf = self else { return }
            if strongSelf.photoOutput.isPortraitEffectsMatteDeliverySupported {
                strongSelf.photoOutput.isPortraitEffectsMatteDeliveryEnabled = state
            }
        }
    }

However, I don't see any Portrait Effect in the live camera view! Any ideas why?
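For what it's worth, on iOS 15 the live Portrait effect is a user toggle rather than something apps switch on, and isPortraitEffectsMatteDeliveryEnabled only governs matte delivery for still photos. A hedged sketch of what appears to be available to an app (assuming the iOS 15 video-effects APIs):

```swift
import AVFoundation

// Hedged sketch (iOS 15+): an app can only (a) check support and current
// state of the Portrait video effect, and (b) send the user to the system
// toggle; it cannot flip the effect directly.
func inspectPortraitEffect(on device: AVCaptureDevice) {
    guard device.activeFormat.isPortraitEffectSupported else { return }

    // Class-level switch (the Control Center "Video Effects" toggle); KVO-observable.
    print("Enabled by user:", AVCaptureDevice.isPortraitEffectEnabled)
    // Whether the effect is actually being applied to this device's stream.
    print("Active on device:", device.isPortraitEffectActive)

    #if os(iOS)
    // The only programmatic hook: present the system UI where the user toggles it.
    AVCaptureDevice.showSystemUserInterface(.videoEffects)
    #endif
}
```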
Posted by Asteroid. Last updated.
Post not yet marked as solved
2 Replies
2.1k Views
Hi, I have submitted an application to the App Review Board. They have rejected the app with the following reason: "Your app declares support for audio in the UIBackgroundModes key in your Info.plist but did not include features that require persistent audio." My app uses audio recording in the background and asks the user for microphone permission when they first use the application. I have also appealed the decision, but they say that the way we are using audio in the background is not acceptable, and still rejected the app. I am happy to make any changes that may be needed, but I want clarification on what the problem is, because according to the guidelines for background execution, what I am doing is acceptable. Does anyone know how I can talk to an Apple engineer so that they can explain to me what needs to be done? I am stuck, and we have spent over a year developing this app. Best, Feras A.
Posted by ferasOS. Last updated.
Post not yet marked as solved
1 Reply
212 Views
I have an AVPlayerLayer and AVPlayer set up for playback on an external screen as follows:

    var player = AVPlayer()
    playerView.player = player
    player.usesExternalPlaybackWhileExternalScreenIsActive = true
    player.allowsExternalPlayback = true

playerView is just a UIView that has AVPlayerLayer as its main layer. This code works and automatically starts displaying and playing video on the external screen. The thing is, I want an option to invert the AVPlayerLayer on the external screen. I tried setting a transform on playerView, but that is ignored on the external screen. How do I gain more control over the external screen window? I also tried to manually add playerView to the external screen window and set player.usesExternalPlaybackWhileExternalScreenIsActive = true; I can also display AVPlayerLayer manually this way. But again, setting a transform on this screen has no effect on the external display. So it may also be a UIKit issue.
Posted. Last updated.
Post not yet marked as solved
0 Replies
182 Views
Hi everyone, I am having a problem with AVPlayer when I try to play some videos. The video starts for a few seconds, but immediately after I see a black screen, and in the console there is the following error:

    https://...manifest.m3u8
    -12642 "CoreMediaErrorDomain" "Impossibile completare l'operazione. (Errore CoreMediaErrorDomain -12642 - No matching mediaFile found from playlist)"
    -12880 "CoreMediaErrorDomain" "Can not proceed after removing variants"

The strange thing is that if I try to play the same video on multiple devices, it works on some and not on others. For example, it works on an iPhone 5SE, but not on an iPad Pro 11'' II gen. or an iPhone 11. I've tried searching around to figure out what may be causing the problem, but there doesn't seem to be a clear solution. Has anyone had a similar problem? Do you have any ideas about the reason for this problem?
Posted by LuxLux. Last updated.
Post not yet marked as solved
2 Replies
216 Views
I have a UISceneConfiguration for an external screen, which is triggered when an external display is connected to the iOS device:

    // MARK: UISceneSession Lifecycle
    @available(iOS 13.0, *)
    func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        // Called when a new scene session is being created.
        // Use this method to select a configuration to create the new scene with.
        switch connectingSceneSession.role {
        case .windowApplication:
            return UISceneConfiguration(name: "Default Configuration", sessionRole: connectingSceneSession.role)
        case .windowExternalDisplay:
            return UISceneConfiguration(name: "External Screen", sessionRole: connectingSceneSession.role)
        default:
            fatalError("Unknown Configuration \(connectingSceneSession.role.rawValue)")
        }
    }

I display a custom view on the external screen this way, in a new UIScene linked to the external display. But the problem now is that if I also have an AVPlayerViewController in the flow of the application, it no longer displays to the external screen. I suppose AVPlayerViewController does its own configuration for external display playback, but now that I have a custom view embedded on the external screen, it is unable to override it. What do I need to do so that AVPlayerViewController can display content to the external screen the way it normally does?
Posted. Last updated.
Post not yet marked as solved
2 Replies
1.6k Views
I am following Apple's documentation on caching HLS (.m3u8) video: https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/MediaPlaybackGuide/Contents/Resources/en.lproj/HTTPLiveStreaming/HTTPLiveStreaming.html Under "Playing Offline Assets" in the documentation, it is instructed to use the AVAssetDownloadTask's asset to simultaneously start playing:

    func downloadAndPlayAsset(_ asset: AVURLAsset) {
        let downloadTask = downloadSession.makeAssetDownloadTask(asset: asset, assetTitle: assetTitle, assetArtworkData: nil, options: nil)!
        downloadTask.resume()
        let playerItem = AVPlayerItem(asset: downloadTask.urlAsset)
        player = AVPlayer(playerItem: playerItem)
        player.play()
    }

The issue is that the same asset is downloaded twice. Right after AVPlayer is initialized, it starts to buffer the asset. Initially, I assumed that the data from the buffer must be used to create the cache, but AVAssetDownloadTask doesn't start to download the data for caching until AVPlayer finishes playing the asset. The buffered data is basically discarded. I used KVO on currentItem.loadedTimeRanges to check the state of the buffer:

    playerTimeRangesObserver = currentPlayer.observe(\.currentItem?.loadedTimeRanges, options: [.new, .old]) { (player, change) in
        let time = self.currentPlayer.currentItem?.loadedTimeRanges.first
        if let t = time {
            print(t.timeRangeValue.duration.seconds)
        }
    }

Below is the method I use to check the downloading status of the AVAssetDownloadTask:

    // Method to adopt to subscribe to progress updates of an AVAssetDownloadTask.
    func urlSession(_ session: URLSession, assetDownloadTask: AVAssetDownloadTask, didLoad timeRange: CMTimeRange, totalTimeRangesLoaded loadedTimeRanges: [NSValue], timeRangeExpectedToLoad: CMTimeRange) {
        // This delegate callback should be used to provide download progress for your AVAssetDownloadTask.
        guard let asset = activeDownloadsMap[assetDownloadTask] else { return }
        var percentComplete = 0.0
        for value in loadedTimeRanges {
            let loadedTimeRange: CMTimeRange = value.timeRangeValue
            percentComplete += loadedTimeRange.duration.seconds / timeRangeExpectedToLoad.duration.seconds
        }
        print("PercentComplete for \(asset.stream.name) = \(percentComplete)")
    }

Is this the right behaviour, or am I doing something wrong? I want to be able to use the video data that is being cached (while the AVAssetDownloadTask download is in progress) to play in AVPlayer.
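As a side note, the percent-complete arithmetic from that delegate callback can be pulled out and sanity-checked in isolation. A minimal sketch (it assumes the loaded ranges don't overlap, which is how the delegate reports them):

```swift
import CoreMedia

// Fraction of the expected time range covered by the loaded ranges —
// the same arithmetic as in the delegate callback above.
func fractionLoaded(_ loaded: [CMTimeRange], expected: CMTimeRange) -> Double {
    loaded.reduce(0) { $0 + $1.duration.seconds / expected.duration.seconds }
}
```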
Posted by dot17. Last updated.
Post not yet marked as solved
0 Replies
124 Views
I have an AVFoundation-based live camera view. There is a button by which I am calling AVCaptureDevice.showSystemUserInterface(.videoEffects) so that the user can activate the Portrait Effect. I have also opted in by setting "Camera — Opt in for Portrait Effect" to true in info.plist. However, upon tapping on the button I see this screen (The red crossed-off part is the app name): I am expecting to see something like this: Do you have any idea why that might be?
Posted by Asteroid. Last updated.
Post not yet marked as solved
9 Replies
2.9k Views
Hello, our app uses AVFoundation to add an overlay and merge several video tracks and audio. We are using:

    AVMutableComposition
    AVMutableVideoCompositionInstruction
    AVMutableCompositionTrack
    AVMutableAudioMix
    AudioCompositor
    AVAssetExportSession.exportAsynchronously

It worked well until iOS 14.5. From iOS 14.5 on, we get the error below very frequently, and sometimes the app crashes too:

    Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedRecoverySuggestion=Try again later., NSLocalizedDescription=Cannot Complete Action, NSUnderlyingError=0x283cf3d80 {Error Domain=NSOSStatusErrorDomain Code=-16978 "(null)"}}

From our first investigations, we suppose that iOS 14.5 is stricter about memory consumption, but we can't find any changelog about AVFoundation or memory restrictions in iOS 14.5. We are in trouble, because our app is completely unusable. Looking forward to your replies, thank you!
Posted. Last updated.
Post not yet marked as solved
2 Replies
345 Views
AVMIDIPlayer is not working properly. Since iOS 14, it runs for just a while, then begins to stutter and stammer, and then goes silent. Some very simple MIDI files play OK, but most of them stop, especially if they contain a crowd of tunes such as a glissando. There is no crash message; the app in which I'm using AVMIDIPlayer keeps running, but with no sound, as if the AVMIDIPlayer were constipated. This happens only on my devices, an iPhone XR and an iPad 5th gen.; in the Simulator there is no problem.
Posted. Last updated.
Post not yet marked as solved
0 Replies
82 Views
Hi, I put together an AVComposition, add in a few videos together, and set the frame duration to CMTime(value: 1, timescale: 30); however, when I export with AVAssetExportSession the resulting video has a frame rate of 1 FPS. How do I improve the quality of the resulting exported video? Thanks
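For reference, the exported frame rate is governed by the video composition's frameDuration, not by the times used when inserting tracks, and the CMTime argument order is easy to invert: CMTime(value: 1, timescale: 30) is 1/30 s per frame (30 fps), whereas a duration of one second per frame would export at exactly 1 fps. A hedged sketch, assuming the 1 FPS result comes from a frameDuration set somewhere:

```swift
import AVFoundation

// Build a video composition whose frameDuration yields 30 fps on export.
func makeVideoComposition(renderSize: CGSize) -> AVMutableVideoComposition {
    let composition = AVMutableVideoComposition()
    composition.renderSize = renderSize
    composition.frameDuration = CMTime(value: 1, timescale: 30) // 1/30 s -> 30 fps
    return composition
}

// Then, before exporting:
// exportSession.videoComposition = composition
```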
Posted. Last updated.
Post not yet marked as solved
0 Replies
134 Views
Hello! We are writing an app which analyzes real-world 3D data by using the TrueDepth camera on the front of an iPhone, and an AVCaptureSession configured to produce AVDepthData along with image data. This worked great on iPhone 12, but the same code on iPhone 13 produces an unwanted "smoothing" effect which makes the scene impossible to process and breaks our app. We are unable to find any information on this effect, from Apple or otherwise, much less how to avoid it, so we are asking you experts. At the bottom of this post (Figure 3) is our code which configures the capture session, using an AVCaptureDataOutputSynchronizer, to produce frames of 640x480 image and depth data. I boiled it down as much as possible, sorry it's so long. The main two parts are the configure function, which sets up our capture session, and the dataOutputSynchronizer function, near the bottom, which fires when a synced set of data is available. In the latter function I've included my code which extracts the information from the AVDepthData object, including looping through all 640x480 depth data points (in meters). I've excluded further processing for brevity (believe it or not :)). On an iPhone 12 device, the PNG data and the depth data merge nicely. The front view and side view of the merged point cloud are below (Figure 1). The angles visible in the side view are due to the application of the focal length, which "de-perspectives" the data and places the points in their proper position in xyz space. The same code on an iPhone 13 produces depth maps that result in the point cloud further below (Figure 2 -- straight-on view, angled view, and side view). There is no longer any clear distinction between objects and the background, because the depth data appears to be "smoothed" between the mannequin and the background -- i.e., there are seven or eight points between the subject and background that are not realistic and make it impossible to do any meaningful processing such as segmenting the scene.
Has anyone else encountered this issue, or have any insight into how we might change our code to avoid it? Any help or ideas are MUCH appreciated, since this is a definite showstopper (we can't tell people to only run our App on older phones :)). Thank you! Figure 1 -- Merged depth data and image into point cloud, from iPhone 12 Figure 2 -- Merged depth data and image into point cloud, from iPhone 13; unwanted smoothing effect visible Figure 3 -- Our configuration code and capture handler; edited to remove downstream processing of captured data (which was basically formatting it into an XML file and uploading to the cloud) (See Attachment) CameraService (edited).swift
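One knob worth double-checking, offered as a guess rather than a confirmed fix: AVFoundation applies its own depth smoothing (hole filling / temporal filtering), controlled separately on the photo and streaming paths, and it should be off when unmodified geometry matters. Whether iPhone 13 adds smoothing beyond these flags is exactly the open question here.

```swift
import AVFoundation

// Hedged sketch: request raw, unfiltered depth on the still-photo path.
func unfilteredDepthSettings(isDepthSupported: Bool) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = isDepthSupported
    settings.isDepthDataFiltered = false   // deliver raw, unsmoothed depth
    return settings
}

// For the streaming path used with AVCaptureDataOutputSynchronizer, the
// equivalent flag lives on the output:
// depthDataOutput.isFilteringEnabled = false
```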
Posted. Last updated.
Post marked as solved
2 Replies
176 Views
I've been following the Scrumdinger tutorial and had close to no trouble understanding the concepts, supplementing myself with the Swift guide. However, in the state and lifecycle lesson, exactly in the complete project, I've found confusing syntax that I cannot decipher with certainty. private var player: AVPlayer { AVPlayer.sharedDingPlayer } What does the closure after the AVPlayer type mean? sharedDingPlayer is a static property extending AVPlayer, so my guess is that it's either some kind of casting to this exact type or assigning the static prop to the player property when it's available. I would appreciate any help in clearing this out!
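That syntax is a read-only computed property, not a stored closure and not a cast: the braces are an implicit getter that runs each time player is read, returning the static instance. A self-contained sketch (Doorbell is a made-up stand-in for AVPlayer and its sharedDingPlayer extension):

```swift
// Stand-in for AVPlayer with a static shared instance, as in the tutorial.
struct Doorbell {
    static let shared = Doorbell()      // analogous to AVPlayer.sharedDingPlayer
    func ding() -> String { "ding" }
}

struct Scrum {
    // Read-only computed property; shorthand for `private var player: Doorbell { get { Doorbell.shared } }`.
    // Nothing is stored: the getter body runs on every access.
    private var player: Doorbell { Doorbell.shared }
    func ring() -> String { player.ding() }
}
```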
Posted. Last updated.
Post not yet marked as solved
2 Replies
173 Views
Goal: to obtain depth data & calibration data from the TrueDepth camera for a computer vision task. I am very confused, because for example Apple says:

    To use depth data for computer vision tasks, use the data in the cameraCalibrationData property to rectify the depth data.

which I tried, and got nil. Then, looking through Stack Overflow, I read:

    cameraCalibrationData is always nil in photo, you have to get it from photo.depthData. As long as you're requesting depth data, you'll get the calibration data.

and so when I tried print(photo.depthData) to obtain depth & calibration data, my output was:

    Optional(hdis 640x480 (high/abs) calibration: {intrinsicMatrix: [2735.35 0.00 2017.75 | 0.00 2735.35 1518.51 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2017.75,1518.51}, ref:{4032x3024}})

But where is the depth data?? Below is my entire code. Note: I'm new to Xcode and I'm used to coding in Python for computer vision tasks, so I apologize in advance for the messy code.

    import AVFoundation
    import UIKit
    import Photos

    class ViewController: UIViewController {
        var session: AVCaptureSession?
        let output = AVCapturePhotoOutput()
        var previewLayer = AVCaptureVideoPreviewLayer()

        // MARK: - Permission check
        private func checkCameraPermissions() {
            switch AVCaptureDevice.authorizationStatus(for: .video) {
            case .notDetermined:
                AVCaptureDevice.requestAccess(for: .video) { [weak self] granted in
                    guard granted else { return }
                    DispatchQueue.main.async { self?.setUpCamera() }
                }
            case .restricted:
                break
            case .denied:
                break
            case .authorized:
                setUpCamera()
            @unknown default:
                break
            }
        }

        // MARK: - Camera setup
        private func setUpCamera() {
            let session = AVCaptureSession()
            if let captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: AVMediaType.depthData, position: .unspecified) {
                do {
                    let input = try AVCaptureDeviceInput(device: captureDevice)
                    if session.canAddInput(input) {
                        session.beginConfiguration()
                        session.sessionPreset = .photo
                        session.addInput(input)
                        session.commitConfiguration()
                    }
                    if session.canAddOutput(output) {
                        session.beginConfiguration()
                        session.addOutput(output)
                        session.commitConfiguration()
                    }
                    output.isDepthDataDeliveryEnabled = true
                    previewLayer.videoGravity = .resizeAspectFill
                    previewLayer.session = session
                    session.startRunning()
                    self.session = session
                } catch {
                    print(error)
                }
            }
        }

        // MARK: - UI button
        private let shutterButton: UIButton = {
            let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
            button.layer.cornerRadius = 50
            button.layer.borderWidth = 10
            button.layer.borderColor = UIColor.white.cgColor
            return button
        }()

        // MARK: - Video preview setup
        override func viewDidLoad() {
            super.viewDidLoad()
            view.backgroundColor = .black
            view.layer.insertSublayer(previewLayer, at: 0)
            view.addSubview(shutterButton)
            checkCameraPermissions()
            shutterButton.addTarget(self, action: #selector(didTapTakePhoto), for: .touchUpInside)
        }

        override func viewDidLayoutSubviews() {
            super.viewDidLayoutSubviews()
            previewLayer.frame = view.bounds
            shutterButton.center = CGPoint(x: view.frame.size.width/2, y: view.frame.size.height - 100)
        }

        // MARK: - Running and stopping the session
        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            session!.startRunning()
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            session!.stopRunning()
        }

        // MARK: - Taking a photo
        @objc private func didTapTakePhoto() {
            let photoSettings = AVCapturePhotoSettings()
            photoSettings.isDepthDataDeliveryEnabled = true
            photoSettings.isDepthDataFiltered = true
            output.capturePhoto(with: photoSettings, delegate: self)
        }
    }

    extension ViewController: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            guard let data = photo.fileDataRepresentation() else { return }
            print(photo.depthData)
            let image = UIImage(data: data)
            session?.stopRunning()

            // Adding the image onto the UI
            let imageView = UIImageView(image: image)
            imageView.contentMode = .scaleAspectFill
            imageView.frame = view.bounds
            view.addSubview(imageView)

            // Saving the photo to the library
            PHPhotoLibrary.requestAuthorization { status in
                guard status == .authorized else { return }
                PHPhotoLibrary.shared().performChanges({
                    let creationRequest = PHAssetCreationRequest.forAsset()
                    creationRequest.addResource(with: .photo, data: photo.fileDataRepresentation()!, options: nil)
                }, completionHandler: { _, error in
                    if error != nil {
                        print("error")
                    }
                })
            }
        }
    }
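On the "where is the depth data" part: printing photo.depthData shows only the container's description (which is where the calibration matrices in the output come from); the per-pixel values live in its depthDataMap, a CVPixelBuffer. A hedged sketch of reading it, assuming a 32-bit float, single-component buffer and handling possible row padding explicitly:

```swift
import AVFoundation

// Read all 32-bit float values out of a depth/disparity pixel buffer.
// Assumes one Float32 component per pixel (e.g. kCVPixelFormatType_DepthFloat32).
func floatValues(from map: CVPixelBuffer) -> [Float32] {
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }
    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)   // rows may be padded
    var values: [Float32] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
        values.append(contentsOf: UnsafeBufferPointer(start: row, count: width))
    }
    return values
}

// In the capture delegate above, something like:
// if let depth = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) {
//     let perPixel = floatValues(from: depth.depthDataMap)
//     let calibration = depth.cameraCalibrationData  // the intrinsics printed earlier
// }
```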
Posted. Last updated.
Post not yet marked as solved
0 Replies
127 Views
On the current project, once again I use the camera. I use two streams, video and photo: video to detect a rectangle, and photo to capture a photo with the flash. After several checks I found the bug: on the 12 Pro & 13 Pro Max in a bright room, I obtain overexposed photos; if I do the same in a dark room, there are no overexposed photos. This behavior does not occur on older iPhones. I look forward to all your suggestions and comments. Environment: iOS 15.4.1; iPhone 12, 12 Pro, 12 Pro Max, 13, 13 Pro, 13 Pro Max. Additional info: I capture the photo in

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { ... }
Posted by paulmax. Last updated.
Post marked as solved
1 Reply
368 Views
Goal: When the user (patient) starts performing any test cycle in the watchOS app (a test can be anything the user can interact with in the watchOS app), I want to simultaneously record audio in the background on the watchOS app. As soon as the test cycle is completed, I want to stop the audio recording and then sync the audio data with the backend. I know we can do audio recording on a watchOS app using presentAudioRecorderController (i.e., Voice Memo style recording), but then the user is not allowed to perform the test cycle at the same time. Any help is appreciated. Thanks in advance
Posted by Natha. Last updated.
Post not yet marked as solved
0 Replies
184 Views
I am trying to use AVAudioEngine to listen to mic samples and play them back simultaneously via an external speaker or headphones (assuming they are attached to the iOS device). I tried the following using AVAudioPlayerNode and it works, but there is too much delay in the audio playback. Is there a way to hear the sound in real time, without delay? I wonder why the scheduleBuffer API has so much delay.

    var engine: AVAudioEngine!
    var playerNode: AVAudioPlayerNode!
    var mixer: AVAudioMixerNode!
    var audioEngineRunning = false

    public func setupAudioEngine() {
        self.engine = AVAudioEngine()
        let input = engine.inputNode
        let format = input.inputFormat(forBus: 0)
        playerNode = AVAudioPlayerNode()
        engine.attach(playerNode)
        self.mixer = engine.mainMixerNode
        engine.connect(self.playerNode, to: self.mixer, format: playerNode.outputFormat(forBus: 0))
        engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: { buffer, time in
            self.playerNode.scheduleBuffer(buffer, completionHandler: nil)
        })
        do {
            engine.prepare()
            try self.engine.start()
            audioEngineRunning = true
            self.playerNode.play()
        } catch {
            print("error couldn't start engine")
            audioEngineRunning = false
        }
    }
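One thing that may be worth trying, offered as a sketch rather than a confirmed fix: for plain monitoring, connect the input node straight to the mixer and let the engine pull samples in its own render thread, skipping the tap → scheduleBuffer round trip (which inherently adds at least a buffer's worth of latency — 4096 frames at 44.1 kHz is already ~93 ms). On iOS, a smaller AVAudioSession preferred IO buffer duration can shrink hardware latency further.

```swift
import AVFoundation

// Hedged sketch: direct input→mixer monitoring, no tap and no player node.
func wireDirectMonitoring(into engine: AVAudioEngine) {
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)
    // The engine's render thread pulls mic samples straight into the mixer,
    // so no application-level buffering is added.
    engine.connect(input, to: engine.mainMixerNode, format: format)
}

// Usage (on a device with a microphone):
// let engine = AVAudioEngine()
// wireDirectMonitoring(into: engine)
// engine.prepare()
// try engine.start()
```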
Posted. Last updated.