Hi Apple engineers,
Hoping that you can reply to this one.
We're developing a text-to-speech app. Everything went well until iOS was upgraded to 18.
AVSpeechSynthesisVoice(language: "zh-CN") runs well under iOS 16 and iOS 17: it speaks Mandarin correctly.
In iOS 18, we noticed that Siri's Language setting interferes with AVSpeechSynthesisVoice: it speaks Cantonese instead of Mandarin.
Siri Language settings that affect AVSpeechSynthesisVoice this way:
Chinese (Cantonese - China mainland)
Chinese (Cantonese - Hong Kong)
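A minimal sketch of a possible workaround (untested, and assuming the problem is in how the language-code initializer resolves the default voice): pick an explicit voice whose language is exactly "zh-CN" from speechVoices() instead of relying on AVSpeechSynthesisVoice(language:).

import AVFoundation

// Keep the synthesizer alive for the duration of playback.
let synthesizer = AVSpeechSynthesizer()

func speakMandarin(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    // Prefer a voice explicitly tagged "zh-CN" (Mandarin, mainland China).
    let mandarinVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language == "zh-CN" }
    utterance.voice = mandarinVoice ?? AVSpeechSynthesisVoice(language: "zh-CN")
    synthesizer.speak(utterance)
}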
We are having issues with ScreenCaptureKit. Our use case is the following:
We have multiple applications that each start a stream capture using ScreenCaptureKit. It works fine when just one application is streaming, but when multiple streams are started continuously, all streams stop or crash without ScreenCaptureKit reporting an error back.
Restarting replayd for the user allows us to start streaming again, provided the streaming applications are restarted too.
We have built a small test program that we have tested on different Macs running different versions of macOS, with identical results. The test program just calls the getShareableContentExcludingDesktopWindows function multiple times, since this was the simplest way to show the problem.
Our test setup is the following:
Mac Mini M2 macOS 13.6
MacBook Pro M3 macOS 14.4
MacBook Pro M3 macOS 15.1
Code main.m
#import <Foundation/Foundation.h>
#import <ScreenCaptureKit/ScreenCaptureKit.h>

@interface Runner : NSObject
@property(atomic, assign) BOOL keepRunning;
@property(atomic, retain) SCShareableContent* availableWindows;
- (void)updateAvailableWindows;
- (void)notif:(NSNotification*)aNotif;
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSRunLoop* loo = [NSRunLoop mainRunLoop];
        Runner* r = [[Runner alloc] init];
        [r updateAvailableWindows];
        while (r.keepRunning)
            [loo runUntilDate:[NSDate distantFuture]];
        NSLog(@"Program exit");
    }
    return 0;
}

@implementation Runner

- (instancetype)init
{
    self = [super init];
    if (self) {
        _keepRunning = YES;
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasNotFound" object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(notif:) name:@"windowWasFound" object:nil];
    }
    return self;
}

- (void)updateAvailableWindows
{
    @autoreleasepool {
        NSLog(@"begin");
        [SCShareableContent getShareableContentExcludingDesktopWindows:NO onScreenWindowsOnly:YES completionHandler:^(SCShareableContent* content, NSError* error) {
            NSLog(@"running");
            if (!error) {
                self.availableWindows = content;
                for (__unused SCWindow* aWin in self.availableWindows.windows) {
                    [NSThread sleepForTimeInterval:0.01];
                }
                [[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasFound" object:self.availableWindows];
            }
            else {
                [[NSNotificationCenter defaultCenter] postNotificationName:@"windowWasNotFound" object:nil];
            }
        }];
    }
}

- (void)notif:(NSNotification*)aNotif
{
    if (aNotif.object)
        [self performSelectorOnMainThread:@selector(updateAvailableWindows) withObject:nil waitUntilDone:NO];
    else {
        NSLog(@"Not Found");
        self.keepRunning = NO;
    }
}

@end
How to replicate:
Compile and run the program from multiple Terminal windows (Terminal must be granted screen recording permission) and notice that the output stops when replayd stops responding (our assumption). Restarting the application does nothing until replayd is also restarted.
Running the application in Xcode gives the following error:
[ERROR] -[RPDaemonProxy fetchShareableContentWithOption:windowID:currentProcess:withCompletionHandler:]_block_invoke:902 error: 4097
This error is not something we have been able to detect in our applications, and since the only workaround is restarting replayd and the applications, catching the error would help us.
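Since no error is delivered to the client, one possible mitigation is a watchdog around the query: if the completion handler has not fired within a few seconds, treat replayd as unresponsive and recover (the timeout value and the recovery path are assumptions, and the Swift spelling below is the renamed form of the same getShareableContent call).

import ScreenCaptureKit
import Foundation

// Sketch: watchdog around the shareable-content query.
// Assumption: when replayd hangs, the completion handler simply never fires,
// so a timeout is the only signal available on the client side.
func fetchShareableContent(timeout: TimeInterval = 5,
                           completion: @escaping (SCShareableContent?) -> Void) {
    var finished = false
    let lock = NSLock()

    SCShareableContent.getExcludingDesktopWindows(false, onScreenWindowsOnly: true) { content, error in
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return }
        finished = true
        completion(error == nil ? content : nil)
    }

    DispatchQueue.global().asyncAfter(deadline: .now() + timeout) {
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return }
        finished = true
        completion(nil) // treat as "replayd unresponsive" and trigger recovery
    }
}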
After investing more than a week into getting a bunch of audio unit projects converted into app + appex + framework, they are all now correctly loaded in-process in the demo host app that is part of Xcode's template.
However, Logic Pro adamantly refuses to load them in-process.
Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic?
The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
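For reference, this is roughly how a host opts into in-process loading; whether Logic Pro ever passes this option for third-party v3 Audio Units is exactly the open question here. A minimal sketch (the component description values are placeholders, not a real plugin):

import AVFoundation

// Sketch: how a host requests in-process instantiation of an Audio Unit.
// The fourCharCode values below are placeholders for a real component.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0x64656d6f,        // placeholder
    componentManufacturer: 0x64656d6f,   // placeholder
    componentFlags: 0,
    componentFlagsMask: 0
)

AVAudioUnit.instantiate(with: desc, options: [.loadInProcess]) { avAudioUnit, error in
    if let error {
        print("Instantiation failed:", error)
    } else if let avAudioUnit {
        print("Loaded:", avAudioUnit.auAudioUnit.componentDescription)
    }
}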
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to cause the app to no longer be able to instantiate the audio unit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type...").
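For context, the lookup the template performs amounts to a component-manager query like the following (a rough reconstruction, not the template's exact code, with placeholder description values); when registration of the embedded .appex breaks, this query comes back empty:

import AVFoundation

// Rough reconstruction of the template's component lookup.
let description = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0x64656d6f,       // placeholder
    componentManufacturer: 0x64656d6f,  // placeholder
    componentFlags: 0,
    componentFlagsMask: 0
)

let matches = AVAudioUnitComponentManager.shared().components(matching: description)
if matches.isEmpty {
    // This is the situation that ends in the template's fatalError.
    print("Failed to find component with type \(description.componentType)")
}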
In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails.
If I "Archive" the app+plugin, there is no audio unit extension in the bundle.
If I switch to the audio unit extension and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there.
Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here. Right now what I have has code I don't want to share.
I have spent a long time refactoring lots of older Swift code to compile without error in Swift 6.
The app is a v3 audio unit host and audio unit.
Having installed Sonoma and Xcode 16, I compile the code using Swift 6 and it compiles and runs without any warnings or errors.
My host will load my AU no problem.
LOGIC PRO is still the ONLY audio unit host that will load native Mac V3 audio units and so I like to test my code using Logic.
In Sonoma with Xcode 16...
My AU passes the most stringent auval tests, both in Terminal and in Logic Pro.
If I compile the AU source in Swift 5 Logic will see the AU, load it and run it without problems.
But when I compile the AU in Swift 6, Logic sees the AU, will scan it and verify that it passes the tests, but will not load the AU. In Xcode I see a log message that a "helper application failed to run", but the debugger never connects to the AU and I don't think Logic even gets as far as instantiating the AU.
So... what is causing this? I'm stumped.
Developing AUv3 is a brain-aching maze of undocumented hurdles and I'm hoping someone might have found a solution for this one. Meanwhile I guess my only option is to continue using the Swift 5 compiler.
(Appending a little note just to mention that all the DSP code is written in C/C++; Swift is used mainly for the user interface and also does some offline threaded work.)
I recently got some plugins from Universal Audio and have licensed them properly through both UA and iLok License Manager. Whenever I try to load the plugins (specifically from UA) in GarageBand, it first says that
“NSCreateObjectFileImageFromMemory-p47UEwps” cannot be opened because the developer cannot be verified.
After clicking either 'Show in Finder' or 'OK', it opens the plugin in a form without its GUI, showing that it is not licensed (even though it is). It also displays error code 100001. I have tried some basic troubleshooting, like restarting the DAW/my computer and reinstalling/relicensing the software. I don't know if the macOS version has anything to do with it, but for some reason I just can't get it to work.
How can I use my RGB Curve points:
let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]
in the colorCurves filter, which I found in the Apple docs:
import CoreImage
import CoreImage.CIFilterBuiltins

func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    // Three RGB triples (9 Float32 values = 36 bytes) spread across the domain.
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]), count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}
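One possible way to combine the two (an untested sketch, assuming curvesData is interpreted as uniformly spaced RGB triples across curvesDomain): sample each channel's control points at N evenly spaced positions with linear interpolation, then interleave the samples as Float32 R, G, B triples.

import CoreImage
import Foundation

// Sketch: turn per-channel control points into curvesData for CIFilter.colorCurves().
func curvesData(red: [CIVector], green: [CIVector], blue: [CIVector], sampleCount: Int = 32) -> Data {
    // Linearly interpolate (x, y) control points at position x in [0, 1].
    func value(at x: CGFloat, points: [CIVector]) -> Float32 {
        let sorted = points.sorted { $0.x < $1.x }
        guard let first = sorted.first, let last = sorted.last else { return Float32(x) }
        if x <= first.x { return Float32(first.y) }
        if x >= last.x { return Float32(last.y) }
        for i in 1..<sorted.count {
            let p0 = sorted[i - 1], p1 = sorted[i]
            if x <= p1.x {
                let dx = p1.x - p0.x
                let t = dx > 0 ? (x - p0.x) / dx : 0
                return Float32(p0.y + t * (p1.y - p0.y))
            }
        }
        return Float32(last.y)
    }

    var table: [Float32] = []
    for i in 0..<sampleCount {
        let x = CGFloat(i) / CGFloat(sampleCount - 1)
        table.append(value(at: x, points: red))
        table.append(value(at: x, points: green))
        table.append(value(at: x, points: blue))
    }
    return table.withUnsafeBufferPointer { Data(buffer: $0) }
}

// Usage with the points above:
// colorCurvesEffect.curvesData = curvesData(red: redCurve, green: greenCurve, blue: blueCurve)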
Hi! I'm attempting to add a purpose string for requesting camera access, but for some reason it doesn't seem to be working. Below I've included the alert and the plist.
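For what it's worth, the camera purpose string has to live in the app target's Info.plist under NSCameraUsageDescription ("Privacy - Camera Usage Description"), and the system only shows it the first time access is requested. A minimal sketch of the request side:

import AVFoundation

// Assumes the target's Info.plist contains NSCameraUsageDescription with the purpose string.
func requestCameraAccess() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .notDetermined:
        // The system alert shown here is the one that displays NSCameraUsageDescription.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            print("Camera access granted:", granted)
        }
    default:
        // Once determined, the alert is not shown again unless the app is
        // reinstalled or privacy permissions are reset.
        print("Camera authorization already determined")
    }
}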
I use the iTunes Library framework in one of my apps. Starting with macOS Sequoia 15.1, I can't create the ITLibrary object anymore; it fails with the following errors:
Connection to amplibraryd was interrupted. clientName:iTunesLibrary(ITLibraryLoader)
Error connecting to the server via proxy object Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.amp.library.framework" UserInfo={NSDebugDescription=connection to service named com.apple.amp.library.framework}
configure failed: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.amp.library.framework" UserInfo={NSDebugDescription=connection to service named com.apple.amp.library.framework}
I created a new sandboxed macOS app, added the Music folder read permission, and reproduced the error:
import SwiftUI
import iTunesLibrary

@main
struct ITLibraryLoaderApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    var body: some View {
        Button("Load ITLibrary") {
            loadLibrary()
        }
    }

    func loadLibrary() {
        do {
            let _ = try ITLibrary(apiVersion: "1.0", options: .none)
        }
        catch {
            print(error)
        }
    }
}
I restarted my developer machine and the Music app, with no luck.
I'm trying to implement Ambisonic B-Format audio playback on Vision Pro with head tracking. So far audio plays, head tracking works, and the sound appears to be stereo. The problem is that it is not proper binaural playback compared to playing back the audio file in a DAW. Has anyone successfully implemented B-Format playback on Vision Pro? Any suggestions on my current implementation:
func playAmbiAudioForum() async {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        // Audio file loading/preparation
        guard let testFileURL = Bundle.main.url(forResource: "audiofile", withExtension: "wav") else {
            print("Test file not found")
            return
        }
        let audioFile = try AVAudioFile(forReading: testFileURL)
        let audioFileFormat = audioFile.fileFormat

        // Create an AVAudioFormat with the Ambisonic B-Format channel layout
        guard let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format) else {
            print("layout failed")
            return
        }
        let format = AVAudioFormat(
            commonFormat: audioFile.processingFormat.commonFormat,
            sampleRate: audioFile.fileFormat.sampleRate,
            interleaved: false,
            channelLayout: layout
        )

        // Read the audio file into a buffer
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(audioFile.length)) else {
            print("buffer failed")
            return
        }
        try audioFile.read(into: buffer)

        playerNode.renderingAlgorithm = .HRTF

        // Connect the nodes
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: format)
        audioEngine.prepare()

        playerNode.scheduleBuffer(buffer, at: nil) {
            print("File finished playing")
        }
        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Setup error:", error)
    }
}
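Not a fix, but a quick sanity check that might help (a sketch): confirm the file really carries four channels and that the ambisonic layout survives into the negotiated connection formats, since a format mismatch here can silently fall back to a plain stereo mix. Also, if I'm reading the AVAudioMixing docs right, renderingAlgorithm generally only takes effect for sources feeding an AVAudioEnvironmentNode, which this graph doesn't have.

import AVFoundation

// Diagnostic sketch: verify the B-Format path before assuming spatialization itself failed.
func logAmbisonicDiagnostics(audioFile: AVAudioFile, format: AVAudioFormat,
                             engine: AVAudioEngine, player: AVAudioPlayerNode) {
    // 1. The source file should carry 4 channels (W, X, Y, Z).
    print("File channels:", audioFile.fileFormat.channelCount)
    print("Processing format:", audioFile.processingFormat)

    // 2. The format we built should report the ambisonic layout tag.
    print("Requested layout tag:", format.channelLayout?.layoutTag ?? 0,
          "expected:", kAudioChannelLayoutTag_Ambisonic_B_Format)

    // 3. What the engine actually negotiated around the player node.
    print("Player output format:", player.outputFormat(forBus: 0))
    print("Engine output input format:", engine.outputNode.inputFormat(forBus: 0))
}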
In my project, I want to add an emoji to my video, but the emoji image becomes dark when added to an HDR video. I've tried converting the emoji image to an HDR format, or creating it in an HDR format, but neither seems to work. I hope someone with experience in this area can help me.
I am developing an iOS app with video call functionality and implementing Picture in Picture (PiP) mode for video calls. The issue I am facing is that the camera stops capturing video when the app goes to the background, even though the PiP view is still visible.
I have noticed that some apps, like Telegram, manage to keep the camera working in PiP mode while the app is in the background. How can I achieve this in my app?
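If it helps, background camera capture during PiP is usually gated by the capture session's multitasking camera access (iOS 16+, supported devices only, and the app has to qualify, e.g. as a video-call app; there is also an entitlement for cases that don't qualify automatically). A hedged sketch of the opt-in:

import AVFoundation

// Sketch: opt the capture session into multitasking camera access, which is what
// keeps the camera running while the app is backgrounded with the PiP window visible.
func enableBackgroundCamera(for session: AVCaptureSession) {
    if #available(iOS 16.0, *), session.isMultitaskingCameraAccessSupported {
        session.beginConfiguration()
        session.isMultitaskingCameraAccessEnabled = true
        session.commitConfiguration()
    } else {
        print("Multitasking camera access not supported for this device/app")
    }
}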
I have a CarPlay implementation and I want to show previous/next track buttons on the player UI:
MPRemoteCommandCenter.shared().seekForwardCommand.isEnabled = false
MPRemoteCommandCenter.shared().seekBackwardCommand.isEnabled = false
MPRemoteCommandCenter.shared().previousTrackCommand.isEnabled = true
MPRemoteCommandCenter.shared().nextTrackCommand.isEnabled = true
It works correctly in the CarPlay Simulator, but in some cars only the seek buttons are shown.
I have to assume it's a problem on the car side, but I'd like your opinion; maybe there's a piece I'm missing.
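One thing worth double-checking (just a guess on my part): some head units only surface previous/next when the commands actually have handlers registered, not merely isEnabled = true. A sketch:

import MediaPlayer

// Sketch: register handlers in addition to enabling the commands.
// The skip logic calls are placeholders for the app's own playback code.
let center = MPRemoteCommandCenter.shared()

center.seekForwardCommand.isEnabled = false
center.seekBackwardCommand.isEnabled = false

center.previousTrackCommand.isEnabled = true
_ = center.previousTrackCommand.addTarget { _ in
    // placeholder: skip to the previous track here
    return .success
}

center.nextTrackCommand.isEnabled = true
_ = center.nextTrackCommand.addTarget { _ in
    // placeholder: skip to the next track here
    return .success
}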
Hello!
Just curious whether AVAssetWriter append() randomly fails for anyone but me? It seems to happen when live streaming (encoding with VTCompressionSession) and recording (with AVAssetWriter) at the same time.
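For comparison, the pattern I'd expect around append (a generic sketch, not your code): only append while the writer is in the .writing state and the input reports isReadyForMoreMediaData, since append can fail otherwise, and the underlying reason then shows up in assetWriter.error.

import AVFoundation

// Generic sketch of a guarded append.
func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput, writer: AVAssetWriter) {
    guard writer.status == .writing else {
        // If an earlier append failed, the real reason is in writer.error.
        print("Writer not writing, status:", writer.status.rawValue, "error:", writer.error ?? "none")
        return
    }
    guard input.isReadyForMoreMediaData else {
        // Dropping (or queueing) samples is expected under load, e.g. while a
        // VTCompressionSession is competing for the hardware encoder.
        return
    }
    if !input.append(sampleBuffer) {
        print("append failed:", writer.error ?? "unknown")
    }
}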
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.'
*** First throw call stack:
(0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39)
libc++abi: terminating due to uncaught exception of type NSException
My previous code worked fine, but it's crashing with Swift 6.
Does anyone know a solution for this?
## Previous code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) {
    let asset = AVAsset(url: videoURL)

    // Create an AVAssetExportSession to export the audio track
    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
        return
    }

    // Set the output file type and path
    guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
    let outputURL = VideoUtils.getTempAudioExportUrl(filename)
    VideoUtils.deleteFileIfExists(outputURL.path)
    exportSession.outputFileType = .m4a
    exportSession.outputURL = outputURL

    let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
    if let exportHandler = exportHandler {
        exportHandler(exportSession, audioExportProgressPublisher)
    }

    // Periodically check the progress of the export session
    let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        audioExportProgressPublisher.send(exportSession.progress)
    }

    // Export the audio track asynchronously
    exportSession.exportAsynchronously {
        switch exportSession.status {
        case .completed:
            completion(.success(outputURL))
        case .failed:
            completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
        case .cancelled:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
        default:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
        }
        // Invalidate the timer when the export session completes or is cancelled
        timer.invalidate()
    }
}
## New code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) async {
    let asset = AVAsset(url: videoURL)

    // Create an AVAssetExportSession to export the audio track
    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
        return
    }

    // Set the output file type and path
    guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
    let outputURL = VideoUtils.getTempAudioExportUrl(filename)
    VideoUtils.deleteFileIfExists(outputURL.path)

    let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
    if let exportHandler {
        exportHandler(exportSession, audioExportProgressPublisher)
    }

    if #available(iOS 18.0, *) {
        do {
            try await exportSession.export(to: outputURL, as: .m4a)
            let states = exportSession.states(updateInterval: 0.1)
            for await state in states {
                switch state {
                case .pending, .waiting:
                    break
                case .exporting(progress: let progress):
                    print("Exporting: \(progress.fractionCompleted)")
                    if progress.isFinished {
                        completion(.success(outputURL))
                    } else if progress.isCancelled {
                        completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
                    } else {
                        audioExportProgressPublisher.send(Float(progress.fractionCompleted))
                    }
                }
            }
        } catch let error {
            print(error.localizedDescription)
        }
    } else {
        // Periodically check the progress of the export session
        let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common)
            .autoconnect()
            .sink { [weak exportSession] _ in
                guard let exportSession else { return }
                audioExportProgressPublisher.send(exportSession.progress)
            }

        exportSession.outputFileType = .m4a
        exportSession.outputURL = outputURL
        await exportSession.export()

        switch exportSession.status {
        case .completed:
            completion(.success(outputURL))
        case .failed:
            completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
        case .cancelled:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
        default:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
        }
        // Invalidate the timer when the export session completes or is cancelled
        publishTimer.cancel()
    }
}
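In case it helps, one thing that looks off in the iOS 18 branch: states(updateInterval:) is only iterated after export(to:as:) has already returned, so progress is never observed while exporting and completion may never fire. A hedged restructuring sketch (same APIs, just monitored concurrently):

if #available(iOS 18.0, *) {
    // Observe progress concurrently, since export(to:as:) suspends until the export finishes.
    let progressTask = Task {
        for await state in exportSession.states(updateInterval: 0.1) {
            if case .exporting(let progress) = state {
                audioExportProgressPublisher.send(Float(progress.fractionCompleted))
            }
        }
    }
    do {
        try await exportSession.export(to: outputURL, as: .m4a)
        // If export(to:as:) returns without throwing, the export finished.
        completion(.success(outputURL))
    } catch {
        completion(.failure(error))
    }
    progressTask.cancel()
}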
Hello there,
I need to move through a video loaded in an AVPlayer one frame at a time, back or forth. For that I tried to use AVPlayerItem's method step(byCount:), and it works just fine.
However, I need to know when the stepping has happened, and as far as I've observed it is not immediate. If I check currentTime() just after calling the method it's the same, and if I check slightly later (depending on the video itself) it shows the correct "jumped" time.
To achieve my goal I tried subclassing AVPlayerItem and implementing my own async method using NotificationCenter and timeJumpedNotification, assuming it would be delivered as the time actually jumps, but that's not the case.
Here is my "stripped" and simplified version of the custom player item:
import AVFoundation

final class PlayerItem: AVPlayerItem {
    private var jumpCompletion: ((CMTime) -> ())?

    override init(asset: AVAsset, automaticallyLoadedAssetKeys: [String]?) {
        super.init(asset: asset, automaticallyLoadedAssetKeys: automaticallyLoadedAssetKeys)
        NotificationCenter.default.addObserver(self, selector: #selector(timeDidChange(_:)), name: AVPlayerItem.timeJumpedNotification, object: self)
    }

    deinit {
        NotificationCenter.default.removeObserver(self, name: AVPlayerItem.timeJumpedNotification, object: self)
        jumpCompletion = nil
    }

    @discardableResult func step(by count: Int) async -> CMTime {
        await withCheckedContinuation { continuation in
            step(by: count) { time in
                continuation.resume(returning: time)
            }
        }
    }

    func step(by count: Int, completion: @escaping ((CMTime) -> ())) {
        guard jumpCompletion == nil else {
            completion(currentTime())
            return
        }
        jumpCompletion = completion
        step(byCount: count)
    }

    @objc private func timeDidChange(_ notification: Notification) {
        switch notification.name {
        case AVPlayerItem.timeJumpedNotification where notification.object as? AVPlayerItem == self:
            jumpCompletion?(currentTime())
            jumpCompletion = nil
        default: return
        }
    }
}
In short, the notification never gets posted, so the above is not working.
I guess the key is what the docs say about timeJumpedNotification:
"A notification the system posts when a player item’s time changes discontinuously."
so step(byCount:) is not considered a discontinuous operation and doesn't trigger it.
It would be really helpful if somebody can help, as I don't want to use seek(to:toleranceBefore:toleranceAfter:), mainly because it's not accurate in terms of the exact next/previous frame: the video might have VFR, which sometimes causes repeated frames or even skips one.
Thanks a lot
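A possible interim workaround (a polling sketch, admittedly not pretty): remember currentTime() before stepping, then poll until it changes or a short timeout expires.

import AVFoundation

extension AVPlayerItem {
    /// Polling sketch: resolves once currentTime() differs from its value before step(byCount:),
    /// or after `timeout` seconds. A stopgap, not frame-exact machinery.
    func stepAndWait(byCount count: Int, timeout: TimeInterval = 0.5) async -> CMTime {
        let before = currentTime()
        step(byCount: count)
        let deadline = Date().addingTimeInterval(timeout)
        while currentTime() == before && Date() < deadline {
            try? await Task.sleep(nanoseconds: 5_000_000) // 5 ms
        }
        return currentTime()
    }
}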
We moved to another streaming service and need to deliver an ASK, a .pem file and key, and a CRT to enable DRM. The issue is that we don't have that information anymore.
The most logical step would be to revoke the current certificate and create a new one. Unfortunately, for FairPlay Streaming certificates there is no revoke button.
We asked Developer Support, who aren't able to help. We then made a request to revoke as described in article 2.7 of the Apple Developer Program License Agreement; they can only do this when the certificate is compromised.
So now we are stuck. Anyone out there who had the same issue and found a solution?
Your help is much appreciated.
iPadOS 18.3 beta 3 (22D5055b) fixed the issue for me and my 7th generation iPad.
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:
// source is a CGImageSource created from the HEIC file's data
guard let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else { return }
let gainDict = NSDictionary(dictionary: gainmap)
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice.
Thanks!
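One direction that might work (an untested sketch): go through the photo's encoded file data and reuse the same CGImageSource auxiliary-data lookup, rather than looking for a dedicated property on AVCapturePhoto. This assumes the photo was captured in a format that actually embeds a gain map (e.g. HEIF).

import AVFoundation
import ImageIO

// Untested sketch: pull the HDR gain map out of an AVCapturePhoto via its file data,
// using the same auxiliary-data path as for a HEIC on disk.
func hdrGainMapInfo(from photo: AVCapturePhoto) -> NSDictionary? {
    guard let data = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(data as CFData, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else {
        return nil
    }
    return NSDictionary(dictionary: info)
}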
Hey - I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine, except for 4K 120 fps on the new iPhone 16 Pro. That one just shows black in the preview. I tried recording even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120 fps to work? My camera setup code is attached. Is it possible this is a bug in Apple's code, since it works with every other combination (1080p up to 240 fps and 4K up to 60 fps)?
Thanks so much for the help.
class CameraManager: NSObject {

    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p:
                return .hd1920x1080
            case .uhd4K:
                return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p:
                return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K:
                return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide:
                return .builtInWideAngleCamera
            case .ultraWide:
                return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)

        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        captureSession.sessionPreset = resolution.preset

        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }

        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }

        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
            throw Errors.noCaptureDevice
        }

        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }

        self.videoCaptureDevice = videoCaptureDevice

        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }
            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }

        configureStabilization(enabled: stabilizationEnabled)
    }
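Not an answer, but a diagnostic sketch that might narrow it down: dump every format on the device that is 3840x2160 and supports at least 120 fps, including its pixel format and binning, since first(where:) can pick a format whose other properties differ from what the preview and movie output expect.

// Diagnostic sketch: list candidate 4K formats with their max frame rates and pixel formats.
func logCandidate4K120Formats(for device: AVCaptureDevice) {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dims.width == 3840, dims.height == 2160 else { continue }
        let maxRate = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        guard maxRate >= 120 else { continue }
        let pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription)
        print("format:", format,
              "maxRate:", maxRate,
              "pixelFormat:", pixelFormat,
              "binned:", format.isVideoBinned)
    }
}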