AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

432 Posts
Post not yet marked as solved
0 Replies
186 Views
I am going to be using AVAudioPlayer to play sound effects and looping music in a game, and I haven't been able to find any recent discussion of which format to use (given that I almost certainly need to compress the audio). The old guide at https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW28 is deprecated (and talks about "AAC", but afconvert -hf shows at least 7 different formats that can be saved in a CAF). Has that guide been updated? Does hardware vs. software playback still matter on iOS 9+? I'm not really worried about performance in terms of impacting frame rate.
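For anyone reading along, a minimal sketch of looping playback with AVAudioPlayer; the asset name music.m4a (AAC in an .m4a container, e.g. as produced by afconvert -f m4af -d aac) is an assumption on my part, not a recommendation from the deprecated guide:

```swift
import AVFoundation

// Sketch: looping background music with AVAudioPlayer.
// "music.m4a" is a hypothetical bundled asset name.
final class MusicPlayer {
    private var player: AVAudioPlayer?

    func startLoopingMusic() {
        guard let url = Bundle.main.url(forResource: "music", withExtension: "m4a") else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.numberOfLoops = -1  // -1 loops indefinitely
        player?.prepareToPlay()     // preload buffers to reduce start latency
        player?.play()
    }
}
```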
Posted
by
Post not yet marked as solved
1 Reply
212 Views
I have an AVPlayerLayer and AVPlayer set up for playback on an external screen as follows:

var player = AVPlayer()
playerView.player = player
player.usesExternalPlaybackWhileExternalScreenIsActive = true
player.allowsExternalPlayback = true

playerView is just a UIView that has an AVPlayerLayer as its main layer. This code works and automatically starts displaying and playing video on the external screen. The thing is, I want an option to invert the AVPlayerLayer on the external screen. I tried setting a transform on playerView, but that is ignored on the external screen. How do I gain more control over the external screen's window? I also tried manually adding playerView to the external screen's window and setting player.usesExternalPlaybackWhileExternalScreenIsActive = true; I can display the AVPlayerLayer manually this way as well. But again, setting a transform on this view has no effect on the external display, so it may also be a UIKit issue.
Posted
by
Post not yet marked as solved
0 Replies
182 Views
Hi everyone, I am having a problem with AVPlayer when I try to play some videos. The video starts for a few seconds, but immediately after I see a black screen, and the console shows the following errors:

https://...manifest.m3u8
-12642 "CoreMediaErrorDomain" "The operation could not be completed. (CoreMediaErrorDomain error -12642 - No matching mediaFile found from playlist)"
-12880 "CoreMediaErrorDomain" "Can not proceed after removing variants"

The strange thing is that if I try to play the same video on multiple devices, it works on some and not on others. For example, it works on an iPhone SE but fails on an iPad Pro 11'' (2nd gen.) and an iPhone 11. I've tried searching around to figure out what may be causing the problem, but there doesn't seem to be a clear solution. Has anyone had a similar problem? Do you have any ideas about the cause?
Posted
by
Post marked as solved
3 Replies
329 Views
I have an AVFoundation-based live camera view. There is a button by which I call AVCaptureDevice.showSystemUserInterface(.videoEffects) so that the user can activate the Portrait Effect. I have also opted in by setting "Camera — Opt in for Portrait Effect" to true in Info.plist. However, upon tapping the button I see this screen (the red crossed-out part is the app name): I am expecting to see something like this: Do you have any idea why that might be?
Posted
by
Post not yet marked as solved
1 Reply
227 Views
I am using AVFoundation for a live camera view. I can get my device from the current video input (of type AVCaptureDeviceInput) like this:

let device = videoInput.device

The device's active format has an isPortraitEffectSupported property. How can I turn the Portrait Effect on and off in the live camera view? I set up the camera like this:

private var videoInput: AVCaptureDeviceInput!
private let session = AVCaptureSession()
private(set) var isSessionRunning = false
private var renderingEnabled = true
private let videoDataOutput = AVCaptureVideoDataOutput()
private let photoOutput = AVCapturePhotoOutput()
private(set) var cameraPosition: AVCaptureDevice.Position = .front

func configureSession() {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.setupResult != .success { return }
        let defaultVideoDevice: AVCaptureDevice? = strongSelf.videoDeviceDiscoverySession.devices.first(where: { $0.position == strongSelf.cameraPosition })
        guard let videoDevice = defaultVideoDevice else {
            print("Could not find any video device")
            strongSelf.setupResult = .configurationFailed
            return
        }
        do {
            strongSelf.videoInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            print("Could not create video device input: \(error)")
            strongSelf.setupResult = .configurationFailed
            return
        }
        strongSelf.session.beginConfiguration()
        strongSelf.session.sessionPreset = AVCaptureSession.Preset.photo
        // Add a video input.
        guard strongSelf.session.canAddInput(strongSelf.videoInput) else {
            print("Could not add video device input to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        strongSelf.session.addInput(strongSelf.videoInput)
        // Add a video data output.
        if strongSelf.session.canAddOutput(strongSelf.videoDataOutput) {
            strongSelf.session.addOutput(strongSelf.videoDataOutput)
            strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
            strongSelf.videoDataOutput.setSampleBufferDelegate(self, queue: strongSelf.dataOutputQueue)
        } else {
            print("Could not add video data output to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        // Add photo output.
        if strongSelf.session.canAddOutput(strongSelf.photoOutput) {
            strongSelf.session.addOutput(strongSelf.photoOutput)
            strongSelf.photoOutput.isHighResolutionCaptureEnabled = true
        } else {
            print("Could not add photo output to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        strongSelf.session.commitConfiguration()
    }
}

func prepareSession(completion: @escaping (SessionSetupResult) -> Void) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        switch strongSelf.setupResult {
        case .success:
            strongSelf.addObservers()
            if strongSelf.photoOutput.isDepthDataDeliverySupported {
                strongSelf.photoOutput.isDepthDataDeliveryEnabled = true
            }
            if let photoOrientation = AVCaptureVideoOrientation(interfaceOrientation: interfaceOrientation) {
                if let unwrappedPhotoOutputConnection = strongSelf.photoOutput.connection(with: .video) {
                    unwrappedPhotoOutputConnection.videoOrientation = photoOrientation
                }
            }
            strongSelf.dataOutputQueue.async { strongSelf.renderingEnabled = true }
            strongSelf.session.startRunning()
            strongSelf.isSessionRunning = strongSelf.session.isRunning
            strongSelf.mainQueue.async {
                strongSelf.previewView.videoPreviewLayer.session = strongSelf.session
            }
            completion(strongSelf.setupResult)
        default:
            completion(strongSelf.setupResult)
        }
    }
}

Then I set isPortraitEffectsMatteDeliveryEnabled like this:

func setPortraitAffectActive(_ state: Bool) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.photoOutput.isPortraitEffectsMatteDeliverySupported {
            strongSelf.photoOutput.isPortraitEffectsMatteDeliveryEnabled = state
        }
    }
}

However, I don't see any Portrait Effect in the live camera view! Any ideas why?
Posted
by
Post not yet marked as solved
1 Reply
236 Views
For reasons too convoluted to go into, I need to 'tap' the final audio going to the output device (speaker, headphones, etc.) from within the app I'm working on (I just need to grab my own audio). Ideally this wouldn't require knowledge of, or ties into, the app's internal audio structures. It seems like AVAudioEngine / installTap(onBus:) is the way to go, but it's unclear what exactly the input should be. I have implemented all of the AVAudioBufferList -> CMSampleBufferRef code through to the expected output, and all I get is silence. It seemed like just attaching to the mainMixerNode should have done it, but now it seems like a "device" input would be necessary. So the questions are: Is AVAudioEngine the SDK for this task? If not, then what? If yes, then which AVAudioNode would serve as an input? Is there an AVAudioUnit that is just the 'current device output'? I'm not looking for a complete solution, just the last few lines that can complete this connection, or a pointer to the framework that could accomplish this.
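For what it's worth, a sketch of what a final-mix tap with AVAudioEngine usually looks like; note that a tap only sees audio rendered through this engine instance, so audio produced elsewhere in the process (AVPlayer, system sounds, other engines) never reaches it, which could explain the silence:

```swift
import AVFoundation

// Sketch: tap the engine's main mixer output. Assumes all audio of
// interest is routed through this engine; anything played outside it
// will not appear in the tap buffers.
let engine = AVAudioEngine()
let mixer = engine.mainMixerNode

func installMixTap() {
    let format = mixer.outputFormat(forBus: 0)
    mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, when in
        // buffer is an AVAudioPCMBuffer of the mixed output; convert it
        // to a CMSampleBuffer or write it out here.
        _ = buffer.frameLength
    }
}
```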
Posted
by
Post not yet marked as solved
0 Replies
82 Views
Hi, I put together an AVComposition, add a few videos together, and set the frame duration to CMTime(value: 1, timescale: 30); however, when I export with AVAssetExportSession the resulting video has a frame rate of 1 FPS. How do I improve the quality of the resulting exported video? Thanks
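In case it helps others: frameDuration lives on the video composition rather than the composition itself, and the value/timescale order matters. A sketch, under the assumption that the export attaches an AVMutableVideoComposition:

```swift
import AVFoundation

// Sketch: 30 fps means each frame lasts 1/30 second.
// CMTime(value: 1, timescale: 30) is 1/30 s; swapping the two numbers
// yields very long frame durations, which looks like the low-FPS
// symptom described above.
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
// exportSession.videoComposition = videoComposition  // attach before export
```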
Posted
by
Post not yet marked as solved
0 Replies
134 Views
Hello! We are writing an app which analyzes real-world 3D data by using the TrueDepth camera on the front of an iPhone, and an AVCaptureSession configured to produce AVDepthData along with image data. This worked great on iPhone 12, but the same code on iPhone 13 produces an unwanted "smoothing" effect which makes the scene impossible to process and breaks our app. We are unable to find any information on this effect, from Apple or otherwise, much less how to avoid it, so we are asking you experts. At the bottom of this post (Figure 3) is our code which configures the capture session, using an AVCaptureDataOutputSynchronizer, to produce frames of 640x480 image and depth data. I boiled it down as much as possible, sorry it's so long. The two main parts are the configure function, which sets up our capture session, and the dataOutputSynchronizer function, near the bottom, which fires when a synced set of data is available. In the latter function I've included my code which extracts the information from the AVDepthData object, including looping through all 640x480 depth data points (in meters). I've excluded further processing for brevity (believe it or not :)). On an iPhone 12 device, the PNG data and the depth data merge nicely. The front view and side view of the merged point cloud are below (Figure 1). The angles visible in the side view are due to the application of the focal length, which "de-perspectives" the data and places the points in their proper position in xyz space. The same code on an iPhone 13 produces depth maps that result in the point cloud further below (Figure 2: straight-on view, angled view, and side view). There is no longer any clear distinction between objects and the background, because the depth data appears to be "smoothed" between the mannequin and the background: there are seven or eight points between the subject and background that are not realistic and make it impossible to do any meaningful processing such as segmenting the scene.
Has anyone else encountered this issue, or have any insight into how we might change our code to avoid it? Any help or ideas are MUCH appreciated, since this is a definite showstopper (we can't tell people to only run our app on older phones :)). Thank you! Figure 1: Merged depth data and image into point cloud, from iPhone 12. Figure 2: Merged depth data and image into point cloud, from iPhone 13; unwanted smoothing effect visible. Figure 3: Our configuration code and capture handler, edited to remove downstream processing of captured data (which was basically formatting it into an XML file and uploading it to the cloud). (See attachment) CameraService (edited).swift
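Not a confirmed fix, but worth ruling out: depth smoothing/filtering is something the capture APIs let you opt out of explicitly. A sketch (iOS-only APIs, hence the platform guard; whether this disables the iPhone 13 behavior is an assumption to verify on a device):

```swift
import AVFoundation

// false = ask for raw depth, no hole-filling/temporal smoothing.
let wantFilteredDepth = false

#if os(iOS)
// Photo path: request unfiltered depth in the capture settings.
func depthPhotoSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = true
    settings.isDepthDataFiltered = wantFilteredDepth
    return settings
}

// Streaming path (the synchronizer setup above): the depth data output
// has the matching switch.
func configure(depthOutput: AVCaptureDepthDataOutput) {
    depthOutput.isFilteringEnabled = wantFilteredDepth
}
#endif
```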
Posted
by
Post marked as solved
2 Replies
176 Views
I've been following the Scrumdinger tutorial and had close to no trouble understanding the concepts, supplementing myself with the Swift guide. However, in the state and lifecycle lesson, specifically in the complete project, I've found syntax that I cannot decipher with certainty:

private var player: AVPlayer { AVPlayer.sharedDingPlayer }

What does the closure after the AVPlayer type mean? sharedDingPlayer is a static property extending AVPlayer, so my guess is that it's either some kind of casting to this exact type or assigning the static property to the player property when it's available. I would appreciate any help in clearing this up!
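That syntax is a read-only computed property: the braces are a getter body that is evaluated each time player is read, so there is no casting and no stored value. A self-contained illustration (Dice, sharedDie, and Game are made-up names):

```swift
// A computed property's body runs on every access; `{ expr }` is
// shorthand for `{ return expr }` in a single-expression getter.
struct Dice {
    static let sharedDie = Dice(sides: 6)  // type-level property, like AVPlayer.sharedDingPlayer
    let sides: Int
}

struct Game {
    // Same shape as `private var player: AVPlayer { AVPlayer.sharedDingPlayer }`.
    private var die: Dice { Dice.sharedDie }
    var sidesOnDie: Int { die.sides }
}
```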
Posted
by
Post not yet marked as solved
0 Replies
127 Views
On the current project I am once again using the camera. I use two streams, video and photo: video to detect a rectangle, and photo to capture a photo with the flash. After several checks I found the bug: on the 12 Pro & 13 Pro Max in a bright room I obtain overexposed photos; if I do the same in a dark room, there are no overexposed photos. This behavior does not occur on older iPhones. I look forward to all your suggestions and comments. Environment: iOS 15.4.1; iPhone 12, 12 Pro, 12 Pro Max, 13, 13 Pro, 13 Pro Max. Additional info: I capture the photo in func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { ... }
Posted
by
Post not yet marked as solved
2 Replies
173 Views
Goal: To obtain depth data & calibration data from the TrueDepth camera for a computer vision task.

I am very confused because, for example, Apple says:

To use depth data for computer vision tasks, use the data in the cameraCalibrationData property to rectify the depth data.

which I tried, and got nil. Then, looking through Stack Overflow, I read:

cameraCalibrationData is always nil in photo, you have to get it from photo.depthData. As long as you're requesting depth data, you'll get the calibration data.

and so when I tried print(photo.depthData) to obtain depth & calibration data, my output was:

Optional(hdis 640x480 (high/abs)
calibration: {intrinsicMatrix: [2735.35 0.00 2017.75 | 0.00 2735.35 1518.51 | 0.00 0.00 1.00],
extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00]
pixelSize: 0.001 mm,
distortionCenter: {2017.75, 1518.51},
ref: {4032x3024}})

But where is the depth data? Below is my entire code. Note: I'm new to Xcode and I'm used to coding in Python for computer vision tasks, so I apologize in advance for the messy code.

import AVFoundation
import UIKit
import Photos

class ViewController: UIViewController {
    var session: AVCaptureSession?
    let output = AVCapturePhotoOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    // MARK: - Permission check
    private func checkCameraPermissions() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: .video) { [weak self] granted in
                guard granted else { return }
                DispatchQueue.main.async { self?.setUpCamera() }
            }
        case .restricted:
            break
        case .denied:
            break
        case .authorized:
            setUpCamera()
        @unknown default:
            break
        }
    }

    // MARK: - Camera setup
    private func setUpCamera() {
        let session = AVCaptureSession()
        if let captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: AVMediaType.depthData, position: .unspecified) {
            do {
                let input = try AVCaptureDeviceInput(device: captureDevice)
                if session.canAddInput(input) {
                    session.beginConfiguration()
                    session.sessionPreset = .photo
                    session.addInput(input)
                    session.commitConfiguration()
                }
                if session.canAddOutput(output) {
                    session.beginConfiguration()
                    session.addOutput(output)
                    session.commitConfiguration()
                }
                output.isDepthDataDeliveryEnabled = true
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.session = session
                session.startRunning()
                self.session = session
            } catch {
                print(error)
            }
        }
    }

    // MARK: - UI button
    private let shutterButton: UIButton = {
        let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        button.layer.cornerRadius = 50
        button.layer.borderWidth = 10
        button.layer.borderColor = UIColor.white.cgColor
        return button
    }()

    // MARK: - Video preview setup
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black
        view.layer.insertSublayer(previewLayer, at: 0)
        view.addSubview(shutterButton)
        checkCameraPermissions()
        shutterButton.addTarget(self, action: #selector(didTapTakePhoto), for: .touchUpInside)
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer.frame = view.bounds
        shutterButton.center = CGPoint(x: view.frame.size.width / 2, y: view.frame.size.height - 100)
    }

    // MARK: - Running and stopping the session
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        session!.startRunning()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        session!.stopRunning()
    }

    // MARK: - Taking a photo
    @objc private func didTapTakePhoto() {
        let photoSettings = AVCapturePhotoSettings()
        photoSettings.isDepthDataDeliveryEnabled = true
        photoSettings.isDepthDataFiltered = true
        output.capturePhoto(with: photoSettings, delegate: self)
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation() else { return }
        print(photo.depthData)
        let image = UIImage(data: data)
        session?.stopRunning()
        // Adding the image onto the UI
        let imageView = UIImageView(image: image)
        imageView.contentMode = .scaleAspectFill
        imageView.frame = view.bounds
        view.addSubview(imageView)
        // Saving the photo to the library
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized else { return }
            PHPhotoLibrary.shared().performChanges({
                let creationRequest = PHAssetCreationRequest.forAsset()
                creationRequest.addResource(with: .photo, data: photo.fileDataRepresentation()!, options: nil)
            }, completionHandler: { _, error in
                if error != nil {
                    print("error")
                }
            })
        }
    }
}
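To answer the "where is the depth data?" part for future readers: the printed description is only a summary. The per-pixel samples live in AVDepthData.depthDataMap (a CVPixelBuffer), and the calibration is photo.depthData?.cameraCalibrationData. A sketch of reading the map, assuming conversion to 32-bit float depth:

```swift
import AVFoundation

// Sketch: extract per-pixel values from AVDepthData.depthDataMap.
func depthValues(from depthData: AVDepthData) -> [Float] {
    // Normalize to Float32 regardless of the native disparity/depth format.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = converted.depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowStride = CVPixelBufferGetBytesPerRow(map) / MemoryLayout<Float>.stride
    let base = CVPixelBufferGetBaseAddress(map)!.assumingMemoryBound(to: Float.self)

    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0 ..< height {
        for x in 0 ..< width {
            values.append(base[y * rowStride + x])  // respects row padding
        }
    }
    return values
}
```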
Posted
by
Post not yet marked as solved
0 Replies
184 Views
I am trying to use AVAudioEngine to listen to mic samples and play them simultaneously via external speaker or headphones (assuming they are attached to the iOS device). I tried the following using AVAudioPlayerNode and it works, but there is too much delay in the audio playback. Is there a way to hear the sound in realtime, without delay? I wonder why the scheduleBuffer API has so much delay.

var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var audioEngineRunning = false

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)
    playerNode = AVAudioPlayerNode()
    engine.attach(playerNode)
    self.mixer = engine.mainMixerNode
    engine.connect(self.playerNode, to: self.mixer, format: playerNode.outputFormat(forBus: 0))
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: { buffer, time in
        self.playerNode.scheduleBuffer(buffer, completionHandler: nil)
    })
    do {
        engine.prepare()
        try self.engine.start()
        audioEngineRunning = true
        self.playerNode.play()
    } catch {
        print("error couldn't start engine")
        audioEngineRunning = false
    }
}
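Part of the delay here is simple arithmetic in the tap itself: at a 48 kHz input format, a 4096-frame tap buffer holds about 85 ms of audio before the first scheduleBuffer call can even fire (and the system may clamp tap buffer sizes). A quick check of that contribution; end-to-end latency also includes I/O and mixer buffering, which this ignores:

```swift
import Foundation

// Latency contributed by the tap buffer alone, in seconds.
func tapBufferLatency(frames: Double, sampleRate: Double) -> Double {
    frames / sampleRate
}

let ms = tapBufferLatency(frames: 4096, sampleRate: 48_000) * 1000  // about 85 ms
```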
Posted
by
Post not yet marked as solved
0 Replies
194 Views
I am using AVAudioSession with the playAndRecord category as follows:

private func setupAudioSessionForRecording() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setPreferredSampleRate(Double(48000))
    } catch {
        NSLog("Unable to deactivate Audio session")
    }
    let options: AVAudioSession.CategoryOptions = [.allowAirPlay, .mixWithOthers]
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: options)
    } catch {
        NSLog("Could not set audio session category \(error)")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        NSLog("Unable to activate AudioSession")
    }
}

Next, I use AVAudioEngine to repeat what I say into the microphone through external speakers (on a TV connected to the iPhone with an HDMI cable):

// MARK: - AudioEngine
var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var audioEngineRunning = false

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    engine.connect(self.engine.inputNode, to: self.engine.outputNode, format: nil)
    do {
        engine.prepare()
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    audioEngineRunning = true
}

public func stopAudioEngine() {
    engine.stop()
    audioEngineRunning = false
}

The issue is that after I speak for a few seconds, I hear a reverb/humming noise that keeps getting amplified and repeated. If I use a RemoteIO unit instead, no such noise comes out of the speakers. I am not sure my setup of AVAudioEngine is correct; I have tried all kinds of AVAudioSession configurations but nothing changes. A sample recording with the background speaker noise is posted in this Stack Overflow question: https://stackoverflow.com/questions/72170548/echo-when-using-avaudioengine-over-hdmi#comment127514327_72170548
Posted
by
Post not yet marked as solved
2 Replies
216 Views
I have a UISceneConfiguration for the external screen which is triggered when an external display is connected to the iOS device:

// MARK: UISceneSession Lifecycle
@available(iOS 13.0, *)
func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
    // Called when a new scene session is being created.
    // Use this method to select a configuration to create the new scene with.
    switch connectingSceneSession.role {
    case .windowApplication:
        return UISceneConfiguration(name: "Default Configuration", sessionRole: connectingSceneSession.role)
    case .windowExternalDisplay:
        return UISceneConfiguration(name: "External Screen", sessionRole: connectingSceneSession.role)
    default:
        fatalError("Unknown Configuration \(connectingSceneSession.role.rawValue)")
    }
}

This way I display a custom view on the external screen, in a new UIScene linked to the external display. But the problem now is that if I also have an AVPlayerViewController in the flow of the application, it no longer displays to the external screen. I suppose AVPlayerViewController does its own configuration for external display playback, but now that I have a custom view embedded on the external screen it is unable to override it. What do I need to do so that AVPlayerViewController can display content to the external screen the way it normally does?
Posted
by
Post not yet marked as solved
0 Replies
103 Views
I see that AVPlayerViewController automatically hides the video preview on the iOS device the moment it detects an external display. I want to present the video preview on the external display and on the iOS device at the same time; the audio should be routed to the external screen via HDMI and not played back on the iOS device. Is it possible to do this using standard APIs? I could think of a few ways, listed below, and would like to hear from the AVFoundation team and others what is recommended or the way out:
1. Create two AVPlayerViewControllers and two AVPlayers. One is attached to the externalScreen window, and both display the same local video file. Audio would be muted for the player controller displaying video on the iOS device. Some play/pause/seek syncing code is required to keep both players in sync.
2. Use only one AVPlayerViewController that AirPlays to the external screen. Attach an AVPlayerItemVideoOutput to the playerItem, intercept video frames, and display them using Metal/CoreImage on the iOS device.
3. Use two AVPlayerLayers and add them to the external and device screens. Video frames will be replicated to both the external screen and the iOS device, but I am not sure if it is possible to route audio only to the external screen via HDMI while muting it locally.
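On option 3, one detail that helps: a single AVPlayer can drive multiple AVPlayerLayers at once, so mirrored video needs no syncing code. A sketch using bare CALayers as hosts (the HDMI-only audio routing remains the open question; this only covers the video side):

```swift
import AVFoundation
import QuartzCore

// Sketch: one AVPlayer feeding two AVPlayerLayers (device + external screen).
// The host layers are assumed to already be in each screen's layer tree.
func mirrorPlayback(url: URL, deviceHost: CALayer, externalHost: CALayer) -> AVPlayer {
    let player = AVPlayer(url: url)

    let localLayer = AVPlayerLayer(player: player)
    localLayer.frame = deviceHost.bounds
    deviceHost.addSublayer(localLayer)

    let mirrorLayer = AVPlayerLayer(player: player)  // same player, second layer
    mirrorLayer.frame = externalHost.bounds
    externalHost.addSublayer(mirrorLayer)

    player.allowsExternalPlayback = false  // keep rendering local so both layers draw
    return player
}
```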
Posted
by
Post not yet marked as solved
2 Replies
124 Views
I'm trying to play a sine sound for as long as the user wants it. My plan is to use two buffers: fill one buffer, play it, fill the second buffer while the first plays, and then keep switching between the two. But I always get audio jumps when switching buffers; I can't get a continuous sound. Could someone look at my code and tell me what I'm doing wrong? The audio data is correct: when I schedule a bunch of buffers in a row I get a smooth sound.

import Foundation
import AVFoundation

var engine = AVAudioEngine()
var player = AVAudioPlayerNode()
var mixer = engine.mainMixerNode
var dq = DispatchQueue(label: "Sine")

let sampleRate = mixer.outputFormat(forBus: 0).sampleRate
let samplesOfAudio = UInt32(10000)
var buffer1 = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: samplesOfAudio)!
var buffer2 = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: samplesOfAudio)!
var f = 440.0
var s = UInt32(0) // current sample
let s2t = 2.0 * Double.pi / sampleRate // sample number to time conversion factor

func fillBuffer(buffer: AVAudioPCMBuffer) {
    let leftChannel = buffer.floatChannelData![0]
    let rightChannel = buffer.floatChannelData![1]
    for i in 0 ..< samplesOfAudio {
        let v = sin(f * Double(s) * s2t) * 0.2
        leftChannel[Int(i)] = Float(v)
        rightChannel[Int(i)] = Float(v)
        s = s + 1
    }
}

func playSine() {
    engine.attach(player)
    engine.connect(player, to: mixer, format: player.outputFormat(forBus: 0))
    do {
        try engine.start()
    } catch {
    }
    buffer1.frameLength = samplesOfAudio
    buffer2.frameLength = samplesOfAudio
    fillBuffer(buffer: buffer1)
    var nextBuffer = buffer1
    let semaphore = DispatchSemaphore(value: 0)
    dq.async {
        while true {
            player.scheduleBuffer(nextBuffer) {
                semaphore.signal()
            }
            if nextBuffer == buffer1 {
                nextBuffer = buffer2
            } else {
                nextBuffer = buffer1
            }
            fillBuffer(buffer: nextBuffer)
            semaphore.wait()
        }
    }
    player.play()
}
Post not yet marked as solved
0 Replies
103 Views
Did anybody get a memory leak when using Apple's ScreenCaptureKit? We saw about 200 MB of memory leaked after running for one hour. The test hardware is a MacBook Pro (2018).
Posted
by