AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

360 Posts
Post not yet marked as solved
0 Replies
47 Views
In my app I play HLS streams via AVPlayer, and it works well. However, when I try to download those same HLS URLs via makeAssetDownloadTask, I regularly come across this error:

Download error for identifier 21222: Error Domain=CoreMediaErrorDomain Code=-12938 "HTTP 404: File Not Found" UserInfo={NSDescription=HTTP 404: File Not Found, _NSURLErrorRelatedURLSessionTaskErrorKey=( "BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>" ), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>}

I have a feeling that AVPlayer has a way to resolve this that makeAssetDownloadTask lacks. Has anyone come across this or have any insight? Thank you! For reference, this is with Xcode 15.3 (15E204a), developing for visionOS 1.0.1.
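For context, a minimal sketch of how such a download task is typically created with AVAssetDownloadURLSession (the session identifier, asset title, hlsURL, and delegate below are placeholders, not values from the post):

import AVFoundation

let configuration = URLSessionConfiguration.background(withIdentifier: "hls-downloads")
let downloadSession = AVAssetDownloadURLSession(configuration: configuration,
                                                assetDownloadDelegate: delegate,   // an AVAssetDownloadDelegate
                                                delegateQueue: .main)
let asset = AVURLAsset(url: hlsURL)
// makeAssetDownloadTask returns nil if the task cannot be created.
let task = downloadSession.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "Example stream",
                                                 assetArtworkData: nil,
                                                 options: nil)
task?.resume()

One possible difference worth checking: during playback AVPlayer only fetches the renditions it actually selects, while a download task may request additional media playlists (alternate audio, subtitles), so a 404 on one of those could surface only in the download path. That is a guess, not a confirmed cause.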
Posted Last updated
.
Post not yet marked as solved
0 Replies
100 Views
Dear Apple Developer Forum, we have customers complaining that they can no longer play live streams (HLS with FairPlay) in our application since upgrading their phones to iOS 17.4.1. We can't reproduce the problem in-house, but the error code sent to our analytics platform is CoreMediaErrorDomain error -12852. Would it be possible to get more information on this error, especially its potential cause, and, if the app is not responsible, how we can help our customers? Kind regards, Cédric
Posted Last updated
.
Post not yet marked as solved
2 Replies
254 Views
Hello, I tried to build the AVCam sample application for iOS 17 and run it on a MacBook ("Designed for iPad") with macOS 14.3 (Sonoma):
https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc
When building and testing with Xcode 15.2, the AVCam application crashes systematically when the target "My Mac (Designed for iPad)" is chosen. A SIGABRT signal is received in a thread dealing with the portrait effect: Thread 19 Queue: com.apple.portrait.effect_init (serial). Is this a known bug, and is there a workaround?
Also, an external webcam is detected by AVCam, but preview and capture are systematically upside down (the built-in FaceTime HD camera may behave the same). Is this a known bug, and is there a workaround? Best regards
Posted
by ftristani.
Last updated
.
Post not yet marked as solved
0 Replies
74 Views
Is it possible to find an IDR frame (CMSampleBuffer) in an AVAsset of an H.264 video file?
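A minimal sketch of one way to check this while reading samples with AVAssetReader (assumed setup, not from the post): a sample buffer whose attachments lack kCMSampleAttachmentKey_NotSync is a sync (key) frame, which for H.264 content normally corresponds to an IDR frame.

import CoreMedia

func isSyncFrame(_ sample: CMSampleBuffer) -> Bool {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: false) as? [[CFString: Any]],
          let first = attachments.first else {
        // No attachments generally means the sample does not depend on other samples.
        return true
    }
    // kCMSampleAttachmentKey_NotSync is true for non-keyframes.
    return !(first[kCMSampleAttachmentKey_NotSync] as? Bool ?? false)
}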
Posted
by tien6b0.
Last updated
.
Post not yet marked as solved
0 Replies
76 Views
I have a camera application that aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is set up with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled, since some features we use are not supported in that mode. At the moment we are manually trying to synchronize frames between the two cameras, but we would like to use AVCaptureDataOutputSynchronizer to improve our results. Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs that I've found show synchronization of video with depth, metadata, or audio, but not two video outputs. From my testing, the dataOutputSynchronizer fires with either the wide video output or the ultra-wide video output, but never both (at least one is always nil), suggesting that they are not being synchronized.

self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)
...
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedWideData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }
    // Either syncedWideData or syncedUltraData is always nil, so the guard condition never passes.
}
Posted
by nanders.
Last updated
.
Post not yet marked as solved
1 Reply
187 Views
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback; for example, if the audio's frame position is >= some point and < another point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

I already have the frame positions calculated (predetermined before the audio is scheduled), so I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus
                 bufferSize:bufferSize
                     format:format
                      block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullBuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // Some code is here, not important to this question.
}];

The problem I'm having is figuring out where in the full buffer I am within the tap block. The tap block passes chunks, not the full audio buffer. I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but I have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block, not to the entire audio buffer I scheduled. Not installing a tap and just using a timer before scheduling my full buffer has given me good results, but I'd rather avoid a timer if possible and use sample time.
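One possibility worth trying, sketched in Swift under the assumption that playerNode is the AVAudioPlayerNode feeding the tap and that triggerStart/triggerEnd are the precomputed frame positions: AVAudioPlayerNode can translate the tap's node time into player time, whose sampleTime counts frames since the player started playing rather than frames within the current chunk.

playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, when in
    // playerTime(forNodeTime:) returns nil when the player is not playing.
    guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
    let framePosition = playerTime.sampleTime   // frames relative to the start of playback
    if framePosition >= triggerStart && framePosition < triggerEnd {
        // Fire the event associated with this range.
    }
}

Note that the tap only fires once per delivered chunk, so the timing granularity is bounded by the tap's buffer size.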
Posted Last updated
.
Post not yet marked as solved
0 Replies
96 Views
We found that crashes occur on some specific devices, but we don't know the root cause. It only appears on the user side and cannot be reproduced on our local devices. From the stack, the crash occurs inside AVCapture after calling discoverySessionWithDeviceTypes:

NSArray<AVCaptureDevice *> *GetVideoCaptureDevices() {
    NSArray *captureDeviceType = @[
        AVCaptureDeviceTypeBuiltInWideAngleCamera,
        AVCaptureDeviceTypeExternalUnknown
    ];
    AVCaptureDeviceDiscoverySession *deviceDiscoverySession =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:captureDeviceType
                                                               mediaType:AVMediaTypeVideo
                                                                position:AVCaptureDevicePositionUnspecified];
    return deviceDiscoverySession.devices;
}

The following is the crash call stack:

OS Version: macOS 13.5 (22G74)
Report Version: 104
Crashed Thread: 10301
Application Specific Information:
Fatal Error: EXC_BAD_INSTRUCTION / EXC_I386_INVOP / 0x7ff8194b3522
Thread 10301 Crashed:
0  AppKit              0x7ff8194b3522 -[NSApplication _crashOnException:]
1  AppKit              0x7ff8194b32b3 -[NSApplication reportException:]
2  AppKit              0x7ff819569efa NSApplicationUncaughtExceptionHandler
3  CoreFoundation      0x7ff8161c010a <unknown>
4  libobjc.A.dylib     0x7ff815c597c8 <unknown>
5  libc++abi.dylib     0x7ff815f926da std::__terminate
6  libc++abi.dylib     0x7ff815f92695 std::terminate
7  libobjc.A.dylib     0x7ff815c65929 <unknown>
8  libdispatch.dylib   0x7ff815e38046 _dispatch_client_callout
9  libdispatch.dylib   0x7ff815e39266 _dispatch_once_callout
10 AVFCapture          0x7ff8328cafb6 +[AVCaptureDALDevice devices]
11 AVFCapture          0x7ff832996410 +[AVCaptureDevice_Tundra _devicesWithAllowIOSMacEnvironment:]
12 AVFCapture          0x7ff83299652b +[AVCaptureDevice_Tundra _devicesWithDeviceTypes:mediaType:position:allowIOSMacEnvironment:]
13 AVFCapture          0x7ff83299e8c0 -[AVCaptureDeviceDiscoverySession_Tundra _initWithDeviceTypes:mediaType:position:allowIOSMacEnvironment:prefersUnsuspendedAndAllowsAnyPosition:]
14 AVFCapture          0x7ff83299e7a4 +[AVCaptureDeviceDiscoverySession_Tundra discoverySessionWithDeviceTypes:mediaType:position:]
15 Electron Framework  0x119453784 media::GetVideoCaptureDevices (video_capture_device_avfoundation_helpers.mm:22)

I want to know the root cause of this crash. How should I reproduce and fix it? Any suggestions would be highly appreciated. Thank you.
Posted
by Colin1994.
Last updated
.
Post not yet marked as solved
0 Replies
85 Views
For whoever needs to hear this... Say you have an AVURLAsset:

let asset = AVURLAsset(url: URL(string: "https://www.example.com/playlist.m3u8")!)

Then say you load that asset into an AVPlayerItem and would like it to automatically load certain asset keys you're interested in ahead of time:

let playerItem = AVPlayerItem(
    asset: asset,
    automaticallyLoadedAssetKeys: [
        "metadata",
        "commonMetadata",
        "availableMetadataFormats",
        "allMediaSelections",
        "hasProtectedContent",
        "overallDurationHint"
    ])

Among those keys, do not use "tracks", even though it's one of the available options. That will break AirPlay across all platforms (the user chooses an AirPlay destination and the AVPlayerItem's status instantly switches to failed). It took me far too long to track this down; I just wanted to get it out there to save anybody else some time if they ever run into it.
Posted
by Suges.
Last updated
.
Post not yet marked as solved
1 Reply
151 Views
I am using LiDAR to measure the distance between a target point and an iPhone Pro. I get the correct distance only when I am more than 70 cm away from the target point, but I need the value to be accurate for distances below 70 cm as well. Is this a coding-level issue on my side, or is it a limitation of the LiDAR sensor?
Posted
by Ramneet.
Last updated
.
Post not yet marked as solved
1 Reply
228 Views
We are facing some weird behaviour when implementing the AirPlay functionality of our iOS app. When we test the app on Apple TV devices, everything works fine. On some smart TVs with a specific AirPlay receiver version (more details below), the stream gets stuck in the buffering state immediately after switching to AirPlay mode. On other smart TVs, with a different AirPlay receiver version, everything works as expected. The interesting part is that other free or DRM-protected streams work fine on all devices.

Smart TVs on which AirPlay works fine:
AirPlay Version -> 25.06 (19.9.9)

Smart TVs on which AirPlay gets stuck in the buffering state:
AirPlayReceiverSDKVersion -> 3.3.0.54
AirPlayReceiverAppVersion -> 53.122.0

You can reproduce this issue using the following stream URL:
https://tr.vod.cdn.cosmotetvott.gr/v1/310/668/1674288197219/1674288197219.ism/.m3u8?qual=a&ios=1&hdnts=st=1713194669\~exp=1713237899\~acl=\*/310/668/1674288197219/1674288197219.ism/\*\~id=cab757e3-9922-48a5-988b-3a4f5da368b6\~data=de9bbd0100a8926c0311b7dbe5389f7d91e94a199d73b6dc75ea46a4579769d7~hmac=77b648539b8f3a823a7d398d69e5dc7060632c29
If this link expires, let me know and I will send a new one for testing. Could you please give us any specific suggestion as to what causes this issue on these specific streams?
Posted
by gcharita.
Last updated
.
Post not yet marked as solved
12 Replies
2.6k Views
I just upgraded to iOS 17 and it looks like AVSpeechSynthesizer is now broken. I noticed that when feeding certain strings to AVSpeechUtterance, it flat out stops after speaking only a portion of the string. For example, I fed it a string of approximately 1,200 words and it speaks around 300 words or so and then just stops. The synthesizer delegate method -speechSynthesizer:didFinishSpeechUtterance: is called when this happens, as if this were supposed to be the end, even though it is not even close to finished. This was working fine on iOS 16. FWIW, I create the AVSpeechUtterance with -initWithString:.
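For reference, a minimal sketch of the setup being described (longText and coordinator are placeholders, not from the post):

import AVFoundation

let synthesizer = AVSpeechSynthesizer()        // keep a strong reference while speaking
let utterance = AVSpeechUtterance(string: longText)
synthesizer.delegate = coordinator             // an object conforming to AVSpeechSynthesizerDelegate
synthesizer.speak(utterance)
// Reported behavior: on iOS 17, speechSynthesizer(_:didFinish:) fires after only part of the
// text has been spoken, whereas iOS 16 spoke the full string.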
Posted Last updated
.
Post not yet marked as solved
0 Replies
163 Views
I'm trying to get an experience similar to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video. I can use the same AVPlayer in a window and projected onto a VideoMaterial, but I can't figure out how to present just the controls while displaying the video only on the 3D entity, without a 2D projection in any view. Is this even possible?
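For reference, the half of the setup that reportedly works, sketched with RealityKit (videoURL and the plane size are placeholders):

import AVFoundation
import RealityKit

let player = AVPlayer(url: videoURL)
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                         materials: [VideoMaterial(avPlayer: player)])
// Add `screen` to the scene content and call player.play().
// The open question above is how to present AVPlayerViewController's playback controls
// for this same player without also rendering the video in a 2D view.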
Posted Last updated
.
Post not yet marked as solved
2 Replies
182 Views
On iOS, working with a video feed, I am getting a yellow warning: "'AVCaptureVideoOrientation' was deprecated in iOS 17.0: Use AVCaptureDeviceRotationCoordinator instead". I haven't been able to figure out how to get AVCaptureDevice.RotationCoordinator to work, and I haven't found any example of its usage in the Developer Forums or on the wider internet (the one mention of it in a WWDC session doesn't illustrate its use). Can anyone offer a working example in Swift?
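A minimal sketch of the usual pattern, assuming device, previewLayer, and photoOutput already exist in the session setup; the coordinator must be stored somewhere (it stops updating once deallocated), and its angle properties can be observed with KVO to follow device rotation over time:

import AVFoundation

let rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)

// Keep the preview upright.
previewLayer.connection?.videoRotationAngle = rotationCoordinator.videoRotationAngleForHorizonLevelPreview

// Apply the capture angle to an output connection before capturing.
let captureAngle = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
if let connection = photoOutput.connection(with: .video),
   connection.isVideoRotationAngleSupported(captureAngle) {
    connection.videoRotationAngle = captureAngle
}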
Posted
by grb2007.
Last updated
.
Post not yet marked as solved
1 Reply
485 Views
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug the device in, so I know it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice, but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone: I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro, and I want to capture sound data from the microphone and forward it to a live streaming API. From what I've read, I need AVAudioEngine to do this, but I'm having trouble determining from the documentation how to go about it on macOS. There seems to be much more information for iOS and iPadOS, but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now. Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do with whatever is available in AVFoundation?
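One possible direction on macOS, sketched under the assumption that the device's Core Audio AudioDeviceID has already been resolved (for example by matching the AVCaptureDevice's uniqueID against each audio device's UID); this is a sketch, not the only way to do it:

import AVFoundation
import AudioToolbox
import CoreAudio

func attachInput(deviceID: AudioDeviceID, to engine: AVAudioEngine) {
    var device = deviceID
    // On macOS, the engine's input node wraps an I/O audio unit whose current device can be set.
    if let audioUnit = engine.inputNode.audioUnit {
        AudioUnitSetProperty(audioUnit,
                             kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global,
                             0,
                             &device,
                             UInt32(MemoryLayout<AudioDeviceID>.size))
    }
    let format = engine.inputNode.outputFormat(forBus: 0)
    // Tap the input to forward buffers to a streaming API...
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // hand `buffer` to the live-streaming API here
    }
    // ...and/or route it to the speakers, then start the engine.
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
}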
Posted Last updated
.
Post not yet marked as solved
0 Replies
134 Views
I'm doing random access sampling from an AVAsset of a local H.264 video file:

let track = asset.tracks(withMediaType: .video)[0]
let assetReader = try! AVAssetReader(asset: asset)
let trackOutput = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
trackOutput.supportsRandomAccess = true
assetReader.add(trackOutput)
assetReader.startReading()
...
let targetFrameDTS = CMTime(value: 56, timescale: 30)
let timeRange = CMTimeRange(
    start: CMTimeAdd(targetFrameDTS, CMTime(value: -1, timescale: 30)),
    duration: CMTime(value: 2, timescale: 30)
)

// Reset the output to read near the target frame's decode time.
trackOutput.reset(forReadingTimeRanges: [NSValue(timeRange: timeRange)])

while assetReader.status == .reading {
    guard let sample = trackOutput.copyNextSampleBuffer() else { break }
    let dts = CMSampleBufferGetDecodeTimeStamp(sample)
    print("\(dts.value)/\(dts.timescale)")
}

For some reason, with some values of targetFrameDTS, the asset reader's copyNextSampleBuffer skips samples. In my particular case the output is:

...
47/30
48/30
50/30
51/30
54/30
55/30
57/30

Why is that?
Posted
by tien6b0.
Last updated
.
Post not yet marked as solved
2 Replies
138 Views
As the title describes, I hit a crash when calling [AVCaptureSession stopRunning] on macOS 14.4.1. The crash stack is below:

Process: Nebula for Mac [31922]
Path: /Applications/Nebula for Mac.app/Contents/MacOS/Nebula for Mac
Identifier: ai.nreal.nebula.mac
Version: 0.8.0.1098 (0.8.0)
Code Type: ARM-64 (Native)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2024-04-11 14:12:34.6474 +0800
OS Version: macOS 14.4.1 (23E224)
Report Version: 12
Anonymous UUID: C438684A-95E7-7DA1-D063-81E1A5FBF5DC
Sleep/Wake UUID: 3EB85031-82AC-4BDB-8F28-FAF4CBD28CA1
Time Awake Since Boot: 110000 seconds
Time Since Wake: 1108 seconds
System Integrity Protection: enabled
Crashed Thread: 0  Dispatch queue: com.apple.avfoundation.proprietarydefaults.singleton.source_queue.0x202f8b460
Exception Type: EXC_CRASH (SIGTRAP)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 5 Trace/BPT trap: 5
Terminating Process: Nebula for Mac [31922]

Thread 0 Crashed:: Dispatch queue: com.apple.avfoundation.proprietarydefaults.singleton.source_queue.0x202f8b460
0  libsystem_kernel.dylib 0x19c1a61f4 mach_msg2_trap + 8
1  libsystem_kernel.dylib 0x19c1b8b24 mach_msg2_internal + 80
2  libsystem_kernel.dylib 0x19c1aee34 mach_msg_overwrite + 476
3  libsystem_kernel.dylib 0x19c1a6578 mach_msg + 24
4  libdispatch.dylib 0x19c0513b0 _dispatch_mach_send_and_wait_for_reply + 544
5  libdispatch.dylib 0x19c051740 dispatch_mach_send_with_result_and_wait_for_reply + 60
6  libxpc.dylib 0x19bef2af0 xpc_connection_send_message_with_reply_sync + 288
7  AVFCapture 0x1b9e5565c -[CMIOProprietaryDefaultsSource setObject:forKey:] + 140
8  AVFCapture 0x1b9e57044 __58-[AVCaptureProprietaryDefaultsSingleton setObject:forKey:]_block_invoke + 36
9  libdispatch.dylib 0x19c0363e8 _dispatch_client_callout + 20
10 libdispatch.dylib 0x19c0458d8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 AVFCapture 0x1b9e597e0 -[AVCaptureProprietaryDefaultsSingleton _runBlockOnProprietaryDefaultsSourceQueueSync:] + 136
12 AVFCapture 0x1b9e56fbc -[AVCaptureProprietaryDefaultsSingleton setObject:forKey:] + 180
13 AVFCapture 0x1b9e776a0 -[AVCaptureDALDevice _refreshCenterStageUnavailableReasons] + 400
14 AVFCapture 0x1b9e7d0fc -[AVCaptureDALDevice updateActivelyProvidingInputCountForActiveUseState:] + 488
15 AVFCapture 0x1b9e33474 -[AVCaptureSession_Tundra _updateNewActiveUseState:forConnection:] + 196
16 AVFCapture 0x1b9e32e4c -[AVCaptureSession_Tundra _setRunning:] + 428
17 AVFCapture 0x1b9e32a28 -[AVCaptureSession_Tundra stopRunning] + 432
18 libnr_api.dylib 0x1446d7514 0x144478000 + 2487572
19 libnr_api.dylib 0x14468a690 0x144478000 + 2172560
20 libnr_api.dylib 0x14468bcb0 0x144478000 + 2178224
21 libnr_api.dylib 0x1444d0268 0x144478000 + 361064
22 libnr_api.dylib 0x1444ecb00 0x144478000 + 477952
23 libnr_api.dylib 0x1444ec724 0x144478000 + 476964
24 libnr_api.dylib 0x144541bcc 0x144478000 + 826316
25 libnr_api.dylib 0x144543e00 0x144478000 + 835072
26 libnr_api.dylib 0x144543f88 0x144478000 + 835464
27 libnr_api.dylib 0x144542ca8 0x144478000 + 830632
28 GameAssembly.dylib 0x12117d4c4 0x120000000 + 18339012
29 GameAssembly.dylib 0x1211894e0 0x120000000 + 18388192
30 GameAssembly.dylib 0x121165fe4 0x120000000 + 18243556
31 GameAssembly.dylib 0x1202e4248 0x120000000 + 3031624
32 GameAssembly.dylib 0x12116931c 0x120000000 + 18256668
33 GameAssembly.dylib 0x1201dcdf0 0x120000000 + 1953264
34 GameAssembly.dylib 0x1201dcd2c 0x120000000 + 1953068
35 UnityPlayer.dylib 0x10428dc60 0x103c38000 + 6642784
36 UnityPlayer.dylib 0x104295170 0x103c38000 + 6672752
37 UnityPlayer.dylib 0x1042b1620 0x103c38000 + 6788640
38 UnityPlayer.dylib 0x103f788d0 0x103c38000 + 3410128
39 UnityPlayer.dylib 0x1040d8c4c 0x103c38000 + 4852812
40 UnityPlayer.dylib 0x1040d8c98 0x103c38000 + 4852888
41 UnityPlayer.dylib 0x1040d8f2c 0x103c38000 + 4853548
42 UnityPlayer.dylib 0x104b104b8 0x103c38000 + 15566008
43 UnityPlayer.dylib 0x104b10304 0x103c38000 + 15565572
44 Foundation 0x19d430224 __NSFireTimer + 104
45 CoreFoundation 0x19c2e1f90 CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION + 32
46 CoreFoundation 0x19c2e1c34 __CFRunLoopDoTimer + 972
47 CoreFoundation 0x19c2e176c __CFRunLoopDoTimers + 356
48 CoreFoundation 0x19c2c4ba4 __CFRunLoopRun + 1856
49 CoreFoundation 0x19c2c3e0c CFRunLoopRunSpecific + 608
50 HIToolbox 0x1a6a5f000 RunCurrentEventLoopInMode + 292
51 HIToolbox 0x1a6a5ec90 ReceiveNextEventCommon + 220
52 HIToolbox 0x1a6a5eb94 _BlockUntilNextEventMatchingListInModeWithFilter + 76
53 AppKit 0x19fb1c970 _DPSNextEvent + 660
54 AppKit 0x1a030edec -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 700
55 AppKit 0x19fb0fcb8 -[NSApplication run] + 476
56 AppKit 0x19fae6f54 NSApplicationMain + 880
57 UnityPlayer.dylib 0x104b0ffe4 PlayerMain(int, char const**) + 944
58 dyld 0x19be5e0e0 start + 2360
Posted
by pxhero.
Last updated
.
Post not yet marked as solved
0 Replies
130 Views
Hi guys, I'm designing a customized camera based on AVFoundation. I can output Live Photos from AVCaptureDeviceInput for now. I'd like to take still and Live Photos with different aspect ratios, just like Apple's Camera app does (1:1, 4:3, 16:9). I didn't find any useful info in the docs; any suggestions?
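One common approach, sketched here as an assumption rather than a confirmed recipe: capture at the sensor's native aspect ratio and crop the delivered still to the ratio the user picked (the paired Live Photo movie would need the same treatment separately); the helper below is hypothetical.

import AVFoundation
import CoreGraphics

// Hypothetical helper: center-crop a captured photo's CGImage to a target aspect ratio (e.g. 1.0 or 16.0/9.0).
// Note: cgImageRepresentation() is not rotated for device orientation; handle orientation separately.
func crop(_ photo: AVCapturePhoto, toAspectRatio ratio: CGFloat) -> CGImage? {
    guard let image = photo.cgImageRepresentation() else { return nil }
    let width = CGFloat(image.width), height = CGFloat(image.height)
    var cropRect = CGRect(x: 0, y: 0, width: width, height: height)
    if width / height > ratio {
        cropRect.size.width = height * ratio                    // too wide: trim the sides
        cropRect.origin.x = (width - cropRect.size.width) / 2
    } else {
        cropRect.size.height = width / ratio                    // too tall: trim top and bottom
        cropRect.origin.y = (height - cropRect.size.height) / 2
    }
    return image.cropping(to: cropRect)
}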
Posted
by ayumizll.
Last updated
.
Post not yet marked as solved
0 Replies
122 Views
On a Vision Pro I load an HDR video served over HLS using AVPlayer. Per FFmpeg, the video has:

pixel format: yuv420p10le
color space / YCbCr matrix: bt2020nc
color primaries: bt2020
transfer function: smpte2084

I wanted to try letting AVFoundation do all of the color conversion instead of writing my own YUV -> RGB shader. To display a 10-bit texture in a drawable queue, the destination Metal texture format must be MTLPixelFormat.rgba16Float (no other formats above 8 bits are supported), so the pixel format I am capturing in is kCVPixelFormatType_64RGBAHalf since it's pretty close. It's worth noting that the AVAsset shows no track information; that must be because it's HLS. I am using AVPlayerItemVideoOutput to get pixel buffers:

AVPlayerItemVideoOutput(outputSettings: [
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_2020,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_SMPTE_ST_2084_PQ,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_2020
    ],
    kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_64RGBAHalf),
    kCVPixelBufferMetalCompatibilityKey as String: true
])

I can change these settings in real time and see that they have an effect on my drawable queue. The BT.2020 primaries do not look correct to me; the image is very bright and washed out. When I switch to BT.709 it looks closer to the output of the AVPlayer. The AVPlayer by itself doesn't look terrible, just a little dark maybe. When I leave out the outputSettings and let the AVPlayerItemVideoOutput choose its own color settings, it appears to choose BT.2020 as well. Is it enough to specify these outputSettings and expect an RGB pixel buffer that perfectly matches them? Or do I have to capture in YUV and do all of the conversion manually? Am I misunderstanding something related to color settings here? I am definitely not an expert. Thanks
Posted Last updated
.
Post not yet marked as solved
0 Replies
149 Views
https://developer.apple.com/videos/play/wwdc2023/10235/ - In this WWDC session, at 3:19, Apple introduced the "other audio ducking" feature. In iOS 17 we can control the amount of 'other audio' ducking through AVAudioEngine. Is this also possible with AVAudioSession? We are using an AVAudioSession for a VoIP call while concurrently attempting to play a video through an AVPlayer; however, the volume of the AVPlayer is considerably low. Does anyone have any ideas on how to achieve the level of control that AVAudioEngine offers?
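For reference, the AVAudioEngine-side control referred to above looks roughly like this (a sketch of the iOS 17 API on the voice-processing input node; whether AVAudioSession exposes an equivalent level control is exactly the open question):

import AVFoundation

let engine = AVAudioEngine()
// The ducking configuration is only honored once voice processing is enabled on the input node.
try? engine.inputNode.setVoiceProcessingEnabled(true)

var ducking = AVAudioVoiceProcessingOtherAudioDuckingConfiguration()
ducking.enableAdvancedDucking = true
ducking.duckingLevel = .min        // duck other audio (e.g. an AVPlayer) as little as possible
engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = ducking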
Posted
by dhilipr.
Last updated
.