Post not yet marked as solved
I am using LiDAR to measure the distance between a target point and the iPhone Pro. I get an accurate distance only when I am more than 70 cm away from the target point; I need the value to be accurate at distances below 70 cm as well.
Is this a coding issue on my end, or a limitation of the LiDAR sensor?
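For context, this is roughly how I read the distance (a simplified sketch; it assumes an ARSession running with .sceneDepth frame semantics and samples the center of the depth map):
import ARKit

// Simplified sketch: read the LiDAR depth (in meters) at the center of the
// frame. Assumes .sceneDepth frame semantics are enabled on the session.
func depthAtCenter(of frame: ARFrame) -> Float? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    // The depth map is kCVPixelFormatType_DepthFloat32: one Float32 per pixel.
    let row = base.advanced(by: (height / 2) * bytesPerRow)
    return row.assumingMemoryBound(to: Float32.self)[width / 2]
}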
Post not yet marked as solved
We are facing a weird behaviour when implementing the AirPlay functionality of our iOS app.
When we test our app on Apple TV devices, everything works fine. On some smart TVs with a specific AirPlay receiver version (more details below), the stream gets stuck in a buffering state immediately after switching to AirPlay mode. On other smart TVs, with a different AirPlay receiver version, everything works as expected.
The interesting part is that other free or DRM-protected streams work fine on all devices.
Smart TVs on which AirPlay works fine:
AirPlay Version -> 25.06 (19.9.9)
Smart TVs on which AirPlay gets stuck in a buffering state:
AirPlayReceiverSDKVersion -> 3.3.0.54
AirPlayReceiverAppVersion -> 53.122.0
You can reproduce this issue using the following stream URL:
https://tr.vod.cdn.cosmotetvott.gr/v1/310/668/1674288197219/1674288197219.ism/.m3u8?qual=a&ios=1&hdnts=st=1713194669~exp=1713237899~acl=*/310/668/1674288197219/1674288197219.ism/*~id=cab757e3-9922-48a5-988b-3a4f5da368b6~data=de9bbd0100a8926c0311b7dbe5389f7d91e94a199d73b6dc75ea46a4579769d7~hmac=77b648539b8f3a823a7d398d69e5dc7060632c29
If this link expires, notify me and I will send a new one for testing.
Could you please give us any specific suggestions as to what causes this issue with these particular streams?
Post not yet marked as solved
I just upgraded to iOS 17 and it looks like AVSpeechSynthesizer is now broken.
I noticed that when feeding certain strings to AVSpeechUtterance, it just flat-out stops after speaking only a portion of the string.
For example, I fed it a string of approx. 1,200 words and it speaks up to around 300 words or so and then just stops. The synthesizer delegate method -speechSynthesizer:didFinishSpeechUtterance: is called when this happens, as if this were the end, even though it is not even close to being finished.
Was working fine on iOS 16.
FWIW I create the AVSpeechUtterance with -initWithString:
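A minimal repro sketch (in Swift for brevity; the repeated sentence stands in for my ~1,200-word string):
import AVFoundation

// Minimal repro sketch. The synthesizer must be kept alive (e.g. as a
// property) for speech to run to completion.
let longText = String(repeating: "The quick brown fox jumps over the lazy dog. ", count: 150)
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: longText)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
synthesizer.speak(utterance)
// On iOS 17, speech stops after roughly 300 words and the delegate's
// didFinish callback fires as if the utterance had completed.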
Post not yet marked as solved
I'm trying to get an experience similar to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video.
I am able to use the same AVPlayer in a window and projected onto a VideoMaterial, but I can't figure out how to present just the controls while displaying the video only on the 3D entity, without having a 2D projection in any view.
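For reference, the VideoMaterial half that already works looks roughly like this (a sketch; the URL and plane size are placeholders), and it renders the video on the entity but provides no controls:
import RealityKit
import AVFoundation

// Sketch of projecting the player onto a 3D entity; the URL is a placeholder.
let videoURL = URL(string: "https://example.com/video.mp4")!
let player = AVPlayer(url: videoURL)
let material = VideoMaterial(avPlayer: player)
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9))
screen.model?.materials = [material]
player.play()
// This displays the video on the entity, but offers no playback controls.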
Is this even possible?
Post not yet marked as solved
On iOS, working with a video feed, I am getting a yellow warning that:
"'AVCaptureVideoOrientation' was deprecated in iOS 17.0: Use AVCaptureDeviceRotationCoordinator instead".
But I haven't been able to figure out how to get AVCaptureDevice.RotationCoordinator to work, and I haven't found any example of its usage in the Developer Forums or on the wider internet (the one mention of it in a WWDC session doesn't illustrate its use). Can anyone offer a working example in Swift?
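From the headers, usage appears to be roughly the following. This is a sketch I have not gotten working end to end; the device, preview layer, and photo output are assumed to exist already, and the coordinator and KVO observation must both be kept alive (e.g. stored as properties):
import AVFoundation

// Sketch (iOS 17+): create a coordinator, apply its angles, and observe them.
func makeRotationCoordinator(for device: AVCaptureDevice,
                             previewLayer: AVCaptureVideoPreviewLayer,
                             photoOutput: AVCapturePhotoOutput)
    -> (AVCaptureDevice.RotationCoordinator, NSKeyValueObservation) {
    let coordinator = AVCaptureDevice.RotationCoordinator(device: device,
                                                          previewLayer: previewLayer)
    // Apply the current angles once:
    previewLayer.connection?.videoRotationAngle =
        coordinator.videoRotationAngleForHorizonLevelPreview
    photoOutput.connection(with: .video)?.videoRotationAngle =
        coordinator.videoRotationAngleForHorizonLevelCapture
    // The angle properties are KVO-observable; follow device rotation:
    let observation = coordinator.observe(\.videoRotationAngleForHorizonLevelPreview,
                                          options: [.new]) { _, change in
        guard let angle = change.newValue else { return }
        previewLayer.connection?.videoRotationAngle = angle
    }
    return (coordinator, observation)
}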
Post not yet marked as solved
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone.
I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro.
I want to capture sound data from the microphone and forward it to a live streaming API.
To my mind, from what I've read, I need AVAudioEngine to do this but I'm having trouble determining from the documentation just how to go about it on macOS. It seems that there is a lot more information for iOS or iPadOS but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now.
Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do with whatever is available in AVFoundation?
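For what it's worth, the closest approach I've found so far is to skip AVCaptureDevice and point AVAudioEngine's input unit at the Core Audio device directly. This is an untested sketch; usbMicID is the microphone's AudioDeviceID, which should be matchable to AVCaptureDevice.uniqueID via the Core Audio device UID:
import AVFoundation
import CoreAudio

// Untested sketch: select a specific input device for AVAudioEngine on macOS,
// then (1) render it to the default output and (2) tap it for streaming.
func startEngine(with usbMicID: AudioDeviceID) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    var deviceID = usbMicID
    if let unit = engine.inputNode.audioUnit {
        AudioUnitSetProperty(unit,
                             kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global,
                             0,
                             &deviceID,
                             UInt32(MemoryLayout<AudioDeviceID>.size))
    }
    let format = engine.inputNode.inputFormat(forBus: 0)
    // 1. Render the microphone to the default output (the MacBook speakers):
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
    // 2. Capture buffers to forward to a live-streaming API:
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // forward `buffer` to the streaming API
    }
    try engine.start()
    return engine
}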
Post not yet marked as solved
I'm doing random-access sampling from an AVAsset of a local H.264 video file:
let track = asset.tracks(withMediaType: .video)[0]
let assetReader = try! AVAssetReader(asset: asset)
let trackOutput = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
trackOutput.supportsRandomAccess = true
assetReader.add(trackOutput)
assetReader.startReading()
...
let targetFrameDTS = CMTime(value: 56, timescale: 30)
let timeRange = CMTimeRange(
    start: CMTimeAdd(targetFrameDTS, CMTime(value: -1, timescale: 30)),
    duration: CMTime(value: 2, timescale: 30)
)
// reset output to be near the target frame's decode time
trackOutput.reset(forReadingTimeRanges: [NSValue(timeRange: timeRange)])
while assetReader.status == .reading {
    guard let sample = trackOutput.copyNextSampleBuffer() else { break }
    let dts = CMSampleBufferGetDecodeTimeStamp(sample)
    print("\(dts.value)/\(dts.timescale)")
}
For some reason, with some values of targetFrameDTS, copyNextSampleBuffer skips samples.
In my particular case, the output is:
...
47/30
48/30
50/30
51/30
54/30
55/30
57/30
Why is that?
Post not yet marked as solved
As the title describes, I hit a crash when calling [AVCaptureSession stopRunning] on macOS 14.4.1. The crash stack is below:
Process: Nebula for Mac [31922]
Path: /Applications/Nebula for Mac.app/Contents/MacOS/Nebula for Mac
Identifier: ai.nreal.nebula.mac
Version: 0.8.0.1098 (0.8.0)
Code Type: ARM-64 (Native)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2024-04-11 14:12:34.6474 +0800
OS Version: macOS 14.4.1 (23E224)
Report Version: 12
Anonymous UUID: C438684A-95E7-7DA1-D063-81E1A5FBF5DC
Sleep/Wake UUID: 3EB85031-82AC-4BDB-8F28-FAF4CBD28CA1
Time Awake Since Boot: 110000 seconds
Time Since Wake: 1108 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.avfoundation.proprietarydefaults.singleton.source_queue.0x202f8b460
Exception Type: EXC_CRASH (SIGTRAP)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 5 Trace/BPT trap: 5
Terminating Process: Nebula for Mac [31922]
Thread 0 Crashed:: Dispatch queue: com.apple.avfoundation.proprietarydefaults.singleton.source_queue.0x202f8b460
0 libsystem_kernel.dylib 0x19c1a61f4 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x19c1b8b24 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x19c1aee34 mach_msg_overwrite + 476
3 libsystem_kernel.dylib 0x19c1a6578 mach_msg + 24
4 libdispatch.dylib 0x19c0513b0 _dispatch_mach_send_and_wait_for_reply + 544
5 libdispatch.dylib 0x19c051740 dispatch_mach_send_with_result_and_wait_for_reply + 60
6 libxpc.dylib 0x19bef2af0 xpc_connection_send_message_with_reply_sync + 288
7 AVFCapture 0x1b9e5565c -[CMIOProprietaryDefaultsSource setObject:forKey:] + 140
8 AVFCapture 0x1b9e57044 __58-[AVCaptureProprietaryDefaultsSingleton setObject:forKey:]_block_invoke + 36
9 libdispatch.dylib 0x19c0363e8 _dispatch_client_callout + 20
10 libdispatch.dylib 0x19c0458d8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 AVFCapture 0x1b9e597e0 -[AVCaptureProprietaryDefaultsSingleton _runBlockOnProprietaryDefaultsSourceQueueSync:] + 136
12 AVFCapture 0x1b9e56fbc -[AVCaptureProprietaryDefaultsSingleton setObject:forKey:] + 180
13 AVFCapture 0x1b9e776a0 -[AVCaptureDALDevice _refreshCenterStageUnavailableReasons] + 400
14 AVFCapture 0x1b9e7d0fc -[AVCaptureDALDevice updateActivelyProvidingInputCountForActiveUseState:] + 488
15 AVFCapture 0x1b9e33474 -[AVCaptureSession_Tundra _updateNewActiveUseState:forConnection:] + 196
16 AVFCapture 0x1b9e32e4c -[AVCaptureSession_Tundra _setRunning:] + 428
17 AVFCapture 0x1b9e32a28 -[AVCaptureSession_Tundra stopRunning] + 432
18 libnr_api.dylib 0x1446d7514 0x144478000 + 2487572
19 libnr_api.dylib 0x14468a690 0x144478000 + 2172560
20 libnr_api.dylib 0x14468bcb0 0x144478000 + 2178224
21 libnr_api.dylib 0x1444d0268 0x144478000 + 361064
22 libnr_api.dylib 0x1444ecb00 0x144478000 + 477952
23 libnr_api.dylib 0x1444ec724 0x144478000 + 476964
24 libnr_api.dylib 0x144541bcc 0x144478000 + 826316
25 libnr_api.dylib 0x144543e00 0x144478000 + 835072
26 libnr_api.dylib 0x144543f88 0x144478000 + 835464
27 libnr_api.dylib 0x144542ca8 0x144478000 + 830632
28 GameAssembly.dylib 0x12117d4c4 0x120000000 + 18339012
29 GameAssembly.dylib 0x1211894e0 0x120000000 + 18388192
30 GameAssembly.dylib 0x121165fe4 0x120000000 + 18243556
31 GameAssembly.dylib 0x1202e4248 0x120000000 + 3031624
32 GameAssembly.dylib 0x12116931c 0x120000000 + 18256668
33 GameAssembly.dylib 0x1201dcdf0 0x120000000 + 1953264
34 GameAssembly.dylib 0x1201dcd2c 0x120000000 + 1953068
35 UnityPlayer.dylib 0x10428dc60 0x103c38000 + 6642784
36 UnityPlayer.dylib 0x104295170 0x103c38000 + 6672752
37 UnityPlayer.dylib 0x1042b1620 0x103c38000 + 6788640
38 UnityPlayer.dylib 0x103f788d0 0x103c38000 + 3410128
39 UnityPlayer.dylib 0x1040d8c4c 0x103c38000 + 4852812
40 UnityPlayer.dylib 0x1040d8c98 0x103c38000 + 4852888
41 UnityPlayer.dylib 0x1040d8f2c 0x103c38000 + 4853548
42 UnityPlayer.dylib 0x104b104b8 0x103c38000 + 15566008
43 UnityPlayer.dylib 0x104b10304 0x103c38000 + 15565572
44 Foundation 0x19d430224 __NSFireTimer + 104
45 CoreFoundation 0x19c2e1f90 CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION + 32
46 CoreFoundation 0x19c2e1c34 __CFRunLoopDoTimer + 972
47 CoreFoundation 0x19c2e176c __CFRunLoopDoTimers + 356
48 CoreFoundation 0x19c2c4ba4 __CFRunLoopRun + 1856
49 CoreFoundation 0x19c2c3e0c CFRunLoopRunSpecific + 608
50 HIToolbox 0x1a6a5f000 RunCurrentEventLoopInMode + 292
51 HIToolbox 0x1a6a5ec90 ReceiveNextEventCommon + 220
52 HIToolbox 0x1a6a5eb94 _BlockUntilNextEventMatchingListInModeWithFilter + 76
53 AppKit 0x19fb1c970 _DPSNextEvent + 660
54 AppKit 0x1a030edec -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 700
55 AppKit 0x19fb0fcb8 -[NSApplication run] + 476
56 AppKit 0x19fae6f54 NSApplicationMain + 880
57 UnityPlayer.dylib 0x104b0ffe4 PlayerMain(int, char const**) + 944
58 dyld 0x19be5e0e0 start + 2360
Post not yet marked as solved
Hi guys,
I'm designing a customized camera based on AVFoundation. I can output Live Photos from AVCaptureDeviceInput for now. I expect to take still and Live Photos with different aspect ratios, just like Apple's Camera app does (1:1, 4:3, 16:9).
I didn't find any useful info in the docs. Any suggestions?
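For discussion: my working assumption is that the stock Camera app captures at the sensor's native 4:3 and crops to the selected ratio. A sketch of cropping the still to a target ratio (the Live Photo's paired movie would need separate handling):
import AVFoundation
import CoreGraphics

// Sketch (an assumption about the approach, not a confirmed technique):
// capture at native 4:3, then center-crop the still to the selected ratio.
func crop(_ photo: AVCapturePhoto, toAspectRatio ratio: CGFloat) -> CGImage? {
    guard let image = photo.cgImageRepresentation() else { return nil }
    let w = CGFloat(image.width), h = CGFloat(image.height)
    var rect = CGRect(x: 0, y: 0, width: w, height: h)
    if w / h > ratio {        // too wide: trim the sides
        rect.size.width = h * ratio
        rect.origin.x = (w - rect.width) / 2
    } else {                  // too tall: trim top and bottom
        rect.size.height = w / ratio
        rect.origin.y = (h - rect.height) / 2
    }
    return image.cropping(to: rect)
}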
Post not yet marked as solved
On a Vision Pro I load an HDR video served over HLS using AVPlayer.
Per FFMPEG the video has:
pixel format: yuv420p10le
color space / ycbcr matrix: bt2020nc
color primaries: bt2020
transfer function: smpte2084
I wanted to try out letting AVFoundation do all of the color conversion instead of making my own YUV -> RGB shader.
To display a 10-bit texture in a drawable queue, the destination Metal texture format must be MTLPixelFormat.rgba16Float (no other formats above 8 bits are supported), so the pixel format I am capturing in is kCVPixelFormatType_64RGBAHalf, since it's pretty close.
It's worth noting that the AVAsset shows no track information; presumably that's because it's HLS?
I am using AVPlayerItemVideoOutput to get pixel buffers:
AVPlayerItemVideoOutput(outputSettings: [
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_2020,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_SMPTE_ST_2084_PQ,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_2020
    ],
    kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_64RGBAHalf),
    kCVPixelBufferMetalCompatibilityKey as String: true
])
I can change these settings in real time and see they are having an effect on my drawable queue.
The BT.2020 primaries do not look correct to me; the result is very bright and washed out. When I switch to BT.709 it looks closer to the output of the AVPlayer. The AVPlayer by itself doesn't look terrible, just a little dark maybe.
When I leave out the outputSettings and let the AVPlayerItemVideoOutput choose its own color settings, it appears to choose BT.2020 also.
Is it enough to pass in these outputSettings and expect an RGB pixel buffer that perfectly matches them, or do I have to capture in YUV and do all of the conversion manually?
Am I misunderstanding something related to color settings here? I am definitely not an expert.
Thanks
Post not yet marked as solved
https://developer.apple.com/videos/play/wwdc2023/10235/ - In this WWDC session, at 3:19, Apple introduced the **Other audio ducking** feature.
In iOS 17, we can control the amount of "other audio" ducking through AVAudioEngine. Is this also possible with AVAudioSession?
We are using an AVAudioSession for a VoIP call while concurrently attempting to play a video through an AVPlayer, but the volume of the AVPlayer is considerably low.
Does anyone have any ideas on how to achieve the level of control that AVAudioEngine offers?
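For reference, the AVAudioEngine-level control from the session looks roughly like this (a sketch; it requires voice processing to be enabled on the engine's input node), and we are looking for an AVAudioSession equivalent:
import AVFoundation

// Sketch of the iOS 17 AVAudioEngine-level ducking control, for comparison.
func configureDucking(on engine: AVAudioEngine) throws {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration =
        AVAudioVoiceProcessingOtherAudioDuckingConfiguration(enableAdvancedDucking: true,
                                                             duckingLevel: .min)
}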
Post not yet marked as solved
I added a VideoPlayer view to my project, but I noticed that during loading, or when the aspect ratios differ, the default color of this view is black.
I would like to change it to match my app's background.
Unfortunately, modifiers such as .background or .foregroundColor don't seem to change it.
Is there a way to customize this color?
struct PlayerLooperView: View {
    private let queuePlayer: AVQueuePlayer
    private let playerLooper: AVPlayerLooper

    init(url: URL) {
        let playerItem = AVPlayerItem(url: url)
        self.queuePlayer = AVQueuePlayer(items: [playerItem])
        self.queuePlayer.isMuted = true
        self.playerLooper = AVPlayerLooper(player: queuePlayer,
                                           templateItem: playerItem)
        self.queuePlayer.play()
    }

    var body: some View {
        VideoPlayer(player: queuePlayer)
            .disabled(true)
            .scaledToFit()
    }
}
Post not yet marked as solved
I have built a camera application that uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled = true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request.
Ideally, our app would keep the timestamp difference between the photos in a single capture request as short as possible, but we don't have a good idea of the theoretical or practical limits of this difference.
In my testing on an iPhone 12 Pro, with a frame rate of 33 Hz and the preset set to hd1920x1080, I get a timestamp difference between photos of approx. 0.3 ms, which seems smaller than I would expect, unless the frames are being synchronized incredibly well under the hood.
This leaves the following questions unanswered:
1. What sort of ranges of values should we expect these timestamp differences between photos to take?
2. What factors influence this?
3. Is there any way to control these values to ensure they are as small as possible? (This will likely be answered by (2).)
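For context, a simplified sketch of the relevant setup (session and input wiring omitted; names are illustrative):
import AVFoundation

// Simplified sketch of constituent photo delivery on a virtual device.
final class DualCaptureController: NSObject, AVCapturePhotoCaptureDelegate {
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    func capturePair() {
        guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                                   for: .video, position: .back) else { return }
        // ...device input and photoOutput are assumed to be added to `session`...
        photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
        // Per request, ask each constituent camera to deliver its own AVCapturePhoto:
        let settings = AVCapturePhotoSettings()
        settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // We compare photo.timestamp across the photos delivered for one request.
    }
}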
Post not yet marked as solved
After my iPad 6 upgraded from iPadOS 17.3 to 17.4, the AVCaptureMetadataOutput delegate is no longer called. I found the same problem reported in a Stack Overflow post:
https://stackoverflow.com/questions/78128010/ipados-17-4-avcapturemetadataoutput-delegate-not-called-qrscanner
An Apple support page says the "QR code scanning" issue is fixed in iPadOS 17.4.1:
If your iPad is unable to scan QR codes after updating to iPadOS 17.4 - Apple Support - https://support.apple.com/en-lamr/118614
That's true; I can confirm it on my iPad 6.
But, unfortunately, iPadOS 17.4.1 fixes ONLY QR code scanning! It doesn't fix scanning of other barcode types, like PDF417.
Happening on:
iPad (7th Generation)
iPad (6th Generation)
iPad Pro 12.9-inch (2nd Generation)
iPad Pro 10.5-inch
Post not yet marked as solved
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback; for example, if the audio's frame position is >= some point and less than some other point, trigger some code.
So I'm looking at - (void)installTapOnBus:(AVAudioNodeBus)bus bufferSize:(AVAudioFrameCount)bufferSize format:(AVAudioFormat * __nullable)format block:(AVAudioNodeTapBlock)tapBlock;
I already have the frame positions calculated (predetermined before the audio is scheduled; I've made all the necessary computations), so I just need to fire code at certain points during playback:
[playerNode installTapOnBus:bus
                 bufferSize:bufferSize
                     format:format
                      block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullBuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];
The problem I'm having is figuring out where in the full buffer I am within the tap block. The tap block passes chunks, not the full audio buffer. I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but I have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not the entire audio buffer I scheduled).
Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid a timer if possible and use sample time.
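One direction I'm exploring (a sketch in Swift; startFrame and endFrame stand for my precomputed frame positions) is converting the tap's node time into the player node's own timeline with playerTime(forNodeTime:), which counts frames from when playback started:
import AVFoundation

// Sketch: translate the tap's node time into the player's playback timeline.
func installPositionTap(on playerNode: AVAudioPlayerNode,
                        startFrame: AVAudioFramePosition,
                        endFrame: AVAudioFramePosition) {
    playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { [weak playerNode] _, when in
        guard let playerNode,
              let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
        let position = playerTime.sampleTime // frames since playback began
        if position >= startFrame && position < endFrame {
            // fire the event for this range
        }
    }
}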
Post not yet marked as solved
Hey!
I'm trying to open the front camera in my demo app, and from what I read in the Apple docs and forums, if you have configured your Persona you will get that image.
But I'm having some issues with it, this is my code:
struct ContentView: View {
    @Environment(\.presentationMode) var presentationMode

    var body: some View {
        ZStack {
            VStack {
                Image("logo")
                    .resizable()
                    .frame(width: 337, height: 211)
                    .accessibilityHidden(true)
                Text("My first Vision Pro app.")
                    .multilineTextAlignment(.center)
                    .font(.headline)
                    .frame(width: 340)
                    .padding(.bottom, 10)
                Button {
                    // Add camera functionality here
                } label: {
                    Text("Open Camera")
                        .frame(maxWidth: .infinity)
                }
                .onAppear {
                    requestCameraAccess()
                }
                .onTapGesture {
                    // Check if camera permission is granted
                    if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
                        openFrontCamera()
                    } else {
                        requestCameraAccess()
                    }
                }
            }
        }
    }

    func requestCameraAccess() {
        AVCaptureDevice.requestAccess(for: .video) { authorized in
            DispatchQueue.main.async {
                if authorized {
                    // Permission granted, open camera if needed
                    openFrontCamera()
                } else {
                    // Handle permission denied case (optional)
                }
            }
        }
    }

    func openFrontCamera() {
    }
}
In the openFrontCamera() function I tried using .devices(), .default(), and other methods like you would use on other Apple devices, but these don't work on Vision Pro, and I can't find anything that tells me how to open the camera.
Has anyone been able to work this out?
Post not yet marked as solved
For our application, we are aiming to have full control over setting and locking the camera exposure settings when taking a video. We're working with Apple's AVFoundation framework on a range of devices, but most of the development is focused on the iPad 8 front camera. The manual settings are specific to our use, so we aim to use the custom exposure mode with e.g. ISO = 100, exposureDuration = 1/60 s, and a fixed white balance. The duration, ISO, and white balance are all set in advance of recording, but when we begin, we can see that something is still adjusting and compensating for lighting changes.
We then also tried locking the exposure mode after setting the custom values, but there appears to be a delay before this lock takes effect. While tracking the ISO during a recording, we see that the ISO values change in the first second of the recording, leading to oversaturated images, despite our efforts to keep it locked.
This is our attempt to lock the settings using custom mode (which we don't adjust ourselves during the recording), but it does not work as expected:
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    do {
        try cameraDevice?.lockForConfiguration()
        cameraDevice?.automaticallyAdjustsVideoHDREnabled = false
        cameraDevice?.setExposureModeCustom(duration: newValueDuration, iso: newValueISO, completionHandler: { [self] _ in
            cameraDevice?.setWhiteBalanceModeLocked(with: cameraDevice!.deviceWhiteBalanceGains)
            if cameraDevice!.isFocusModeSupported(.locked) {
                cameraDevice!.focusMode = .locked
                debugPrint("Focus mode set to locked.")
            }
            cameraDevice?.unlockForConfiguration()
        })
    } catch {
        debugPrint("Error adjusting the exposure")
        cameraDevice?.unlockForConfiguration()
    }
}
We then tried locking the exposure mode after setting the custom values, but it still changes during the first second of the recording. We also explicitly tried setting exposureTargetBias to 0, but this made no difference.
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    guard let camera = cameraDevice else { return }
    if camera.isExposureModeSupported(.custom) {
        do {
            try camera.lockForConfiguration()
            let customExposureBias: Float = 0
            // camera.setExposureTargetBias(customExposureBias, completionHandler: nil)
            camera.setExposureModeCustom(duration: newValueDuration, iso: newValueISO) { [weak camera] _ in
                guard let camera = camera else { return }
                if camera.isExposureModeSupported(.locked) {
                    camera.exposureMode = .locked
                }
            }
            camera.unlockForConfiguration()
            print("Exposure settings locked with custom values.")
        } catch {
            print("Failed to lock configuration for capture device: \(error.localizedDescription)")
            camera.unlockForConfiguration()
        }
    } else {
        print("Custom exposure mode is not supported.")
    }
}
We would very much appreciate input on how to keep the manually selected camera settings fixed throughout the video recording.
Post not yet marked as solved
Hey all!
I'm building a camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I do some processing in between.)
When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that, compared to the stock iOS Camera app, they are mono audio, not stereo.
I wonder how recording in stereo works. Are there any guides or documentation available for that?
Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?
This is my Audio Session code:
func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")
    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}
This is how I activate the audio session just before I start recording:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers,
                                          .allowBluetoothA2DP,
                                          .defaultToSpeaker,
                                          .allowAirPlay])
if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()
And this is how I set up the AVAssetWriter:
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
The rest is trivial: I receive the audio CMSampleBuffers in my delegate's callback and write them to the audioWriter, and the audio ends up in the .mov file, but it is not stereo; it's mono.
Is there anything I'm missing here?
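One stereo-related configuration I've found so far, for reference (a sketch based on Apple's "Capturing Stereo Audio from Built-In Microphones" sample; I haven't confirmed it's the missing piece):
import AVFoundation

// Sketch (iOS 14+): stereo from the built-in mic is opt-in via the audio
// session's data-source polar pattern, not via the asset writer.
func configureStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let dataSource = builtInMic.dataSources?.first(where: {
              $0.supportedPolarPatterns?.contains(.stereo) == true
          }) else { return }
    try session.setPreferredInput(builtInMic)
    try dataSource.setPreferredPolarPattern(.stereo)
    try session.setPreferredInputOrientation(.portrait)
    // A stereo frame then arrives as one interleaved two-channel CMSampleBuffer.
}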
Post not yet marked as solved
Hi folks,
iOS 17.4 - iPhone 14 Pro - Real device
I'm facing an error when I try to implement a simple VideoPlayer view:
Main thread blocked by synchronous property query on not-yet-loaded property (PreferredTransform) for HTTP(S) asset. This could have been a problem if this asset were being read from a slow network.
This is the code I'm using to show the video player:
struct ContentView: View {
    let player = AVPlayer(url: URL(string: "https://video.twimg.com/amplify_video/1760315643142750208/vid/avc1/640x360/-1etorSK7w2g9Nlc.mp4?tag=16")!)

    var body: some View {
        VideoPlayer(player: player)
    }
}
I noticed that the bundle init method of AVPlayer doesn't trigger this error:
AVPlayer(url: Bundle.main.url(forResource: "video", withExtension: "mp4")!)
I also tried loading the asset's metadata with:
try await AVAsset.load(_:)
before giving it to AVPlayer via an AVPlayerItem, with no success. :/
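For reference, that attempt looked roughly like this (a sketch; it must run in an async context):
import AVFoundation

// Sketch: preload the property asynchronously before handing the asset over.
func makeItem(for url: URL) async throws -> AVPlayerItem {
    let asset = AVURLAsset(url: url)
    _ = try await asset.load(.preferredTransform)
    return AVPlayerItem(asset: asset)
}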
Is anyone else facing this error too, or is it a bug in the SDK?
Post not yet marked as solved
I'm creating an app that uses an AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes.
After updating to iPadOS 17.4, an issue has appeared where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices.
The following devices are experiencing this issue.
iPad (7th Gen)
iPad (6th Gen)
iPad Pro (10.5)
iPad Pro (12.9 2nd Gen)
This issue does not occur on any other devices I have.
This may only occur on devices with model number "iPad7,x".
I tried running the AVFoundation sample code from the Apple Developer site on the devices above; the same problem occurs.
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required as of iPadOS 17.4?
Or is there some problem on the OS side?