Opening this question after discussing the issue in the AVCapture lab, in the hope that we can track it down here.
We've been noticing crashes reported in App Store Connect caused by layoutSublayers being called on a background thread.
After debugging the issue a bit, we found that all calls which modified the AVCaptureSession or the preview layer were indeed made on the main thread. It would be useful to know what results in AVCaptureVideoPreviewLayer.updateFormatDescription being called.
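For reference, a minimal sketch (the names are ours, not code from the crashing app) of the pattern we follow: session configuration happens on a dedicated queue, while the preview layer, being a CALayer, is only created and mutated on the main thread.
import AVFoundation
import UIKit

let session = AVCaptureSession()
let sessionQueue = DispatchQueue(label: "camera.session.queue")

func attachPreview(to view: UIView) {
    // Layer work must stay on the main thread.
    dispatchPrecondition(condition: .onQueue(.main))
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = view.bounds
    view.layer.addSublayer(previewLayer)
}

func startCamera() {
    sessionQueue.async {
        // Session configuration and startRunning() are safe off the main thread.
        session.beginConfiguration()
        // ... add inputs and outputs here ...
        session.commitConfiguration()
        session.startRunning()
    }
}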
I've attached the crashlog below.
Crash log.ips - https://developer.apple.com/forums/content/attachment/800b0dba-3477-4c5a-b56c-f4cc393b384f
AVFoundation
Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.
Posts under the AVFoundation tag
Title says it all.
I am calling AVSampleBufferDisplayLayer.flush from a background queue but this seems to occasionally crash the app. I am calling it from the same thread I pass to - (void)requestMediaDataWhenReadyOnQueue:(dispatch_queue_t)queue usingBlock:(void (^)(void))block;.
My question is: is this API thread-safe, or do I need to call flush from the main thread? Or is there another issue that I am not considering? It seems strange to me that this API would trigger an Auto Layout pass.
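A workaround I'm considering (just a sketch, not a documented requirement): since the backtrace below shows flush ending up in a CATransaction commit, funnel the call through the main thread.
import AVFoundation

func safeFlush(_ displayLayer: AVSampleBufferDisplayLayer) {
    if Thread.isMainThread {
        displayLayer.flush()
    } else {
        // Hop to the main thread because the flush path can commit a CATransaction,
        // which triggers layout and asserts when run off the main thread.
        DispatchQueue.main.async {
            displayLayer.flush()
        }
    }
}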
0 CoreFoundation 0x00000001bb384e38 __exceptionPreprocess + 164
1 libobjc.A.dylib 0x00000001b451b8d8 objc_exception_throw + 59
2 CoreAutoLayout 0x00000001d7e09e84 _AssertAutoLayoutOnAllowedThreadsOnly + 327
3 CoreAutoLayout 0x00000001d7e00e60 -[NSISEngine withBehaviors:performModifications:] + 35
4 UIKitCore 0x00000001be58fd40 -[UIView _postMovedFromSuperview:] + 671
5 UIKitCore 0x00000001bd56dfec -[UIView(Internal) _addSubview:positioned:relativeTo:] + 1903
6 UIKitCore 0x00000001bda57ccc -[_UITextLayoutCanvasView textViewportLayoutController:configureRenderingSurfaceForTextLayoutFragment:] + 455
7 UIFoundation 0x00000001c588bc9c __48-[NSTextViewportLayoutController layoutViewport]_block_invoke_4 + 151
8 UIFoundation 0x00000001c5836b50 __80-[NSTextLayoutManager enumerateViewportElementsFromLocation:options:usingBlock:]_block_invoke + 43
9 UIFoundation 0x00000001c580e158 __83-[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:]_block_invoke_2 + 535
10 CoreFoundation 0x00000001bb385350 __NSARRAY_IS_CALLING_OUT_TO_A_BLOCK__ + 23
11 CoreFoundation 0x00000001bb3b24dc -[__NSSingleObjectArrayI enumerateObjectsWithOptions:usingBlock:] + 91
12 UIFoundation 0x00000001c580de28 __83-[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:]_block_invoke + 775
13 UIFoundation 0x00000001c57f7504 -[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:] + 659
14 UIFoundation 0x00000001c57f7264 -[NSTextLayoutManager enumerateViewportElementsFromLocation:options:usingBlock:] + 99
15 UIFoundation 0x00000001c57f6d7c -[NSTextViewportLayoutController layoutViewport] + 1299
16 UIKitCore 0x00000001bd580a3c +[UIView(Animation) performWithoutAnimation:] + 75
17 UIKitCore 0x00000001bd5582d0 -[_UITextLayoutCanvasView layoutSubviews] + 139
18 UIKitCore 0x00000001bd5544c8 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1979
19 QuartzCore 0x00000001bca277fc CA::Layer::layout_if_needed(CA::Transaction*) + 499
20 QuartzCore 0x00000001bca3aeb0 CA::Layer::layout_and_display_if_needed(CA::Transaction*) + 147
21 QuartzCore 0x00000001bca4c234 CA::Context::commit_transaction(CA::Transaction*, double, double*) + 443
22 QuartzCore 0x00000001bca81630 CA::Transaction::commit() + 651
23 MediaToolbox 0x00000001ca8d0da0 videoQueueRemote_SetProperty + 367
24 AVFCore 0x00000001cad191b4 __63-[AVSampleBufferVideoRenderer _setContentLayerOnFigVideoQueue:]_block_invoke + 179
25 libdispatch.dylib 0x00000001c299cf88 _dispatch_client_callout + 19
26 libdispatch.dylib 0x00000001c29ac574 _dispatch_lane_barrier_sync_invoke_and_complete + 55
27 AVFCore 0x00000001cad190d0 -[AVSampleBufferVideoRenderer _setContentLayerOnFigVideoQueue:] + 167
28 AVFCore 0x00000001cad14674 -[AVSampleBufferVideoRenderer _createVideoQueue:errorStep:] + 195
29 AVFCore 0x00000001cad14ac8 -[AVSampleBufferVideoRenderer createVideoQueue:] + 55
30 AVFCore 0x00000001cad179cc -[AVSampleBufferVideoRenderer flushWithRemovalOfDisplayedImage:completionHandler:] + 439
31 App 0x0000000102370214 -[AVSampleBufferDisplayLayer flush] + 51
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronizing two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
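In code, the setup is essentially the following sketch (the names are illustrative, not my exact code):
import AVFoundation

let videoRenderer = AVSampleBufferDisplayLayer()
let audioRenderer = AVSampleBufferAudioRenderer()
let synchronizer = AVSampleBufferRenderSynchronizer()
synchronizer.addRenderer(videoRenderer)
synchronizer.addRenderer(audioRenderer)

// Called once the first video sample (in presentation order) has been decoded and enqueued.
func startPlayback(at firstPresentationTime: CMTime) {
    synchronizer.setRate(1.0, time: firstPresentationTime)
}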
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However on some network streams, on iOS, the audio and video aren't synchronized, with a time difference that seems to increase with time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read, about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
So, any help / hints on this sync problem will be greatly appreciated! :)
We activate our camera extension from the host application and wait for the user to allow access to it in System Settings. Once our host application receives the notification that the camera extension is ready to be used, we want to communicate with the extension.
When we enumerate AVCaptureDevices, or try to find the newly added device using CMIOObjectGetPropertyData for the property kCMIOHardwarePropertyDevices, our camera extension is not shown. Once we stop and restart the host application, the camera extension is shown as expected; the issue only happens right after activating the extension.
It looks like capture devices are not refreshed for the host application after the camera extension is activated and approved. Is there a way to force the system to refresh the camera list? Or any other idea to make the extension immediately visible to the host application without relaunching it?
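One thing we are considering (a sketch of an idea, not a confirmed fix): instead of enumerating once, observe device-connection notifications and re-run a discovery session when the newly approved extension's device shows up.
import AVFoundation

// Re-enumerate whenever the system reports a newly connected capture device.
let connectionObserver = NotificationCenter.default.addObserver(
    forName: .AVCaptureDeviceWasConnected,
    object: nil,
    queue: .main
) { note in
    if let device = note.object as? AVCaptureDevice {
        print("New capture device connected: \(device.localizedName)")
    }
}

func currentExternalCameras() -> [AVCaptureDevice] {
    // Camera extensions typically surface as external devices on macOS
    // (.externalUnknown; newer SDKs use .external).
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.externalUnknown],
        mediaType: .video,
        position: .unspecified
    )
    return discovery.devices
}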
I'm developing an iPad app that is mostly dedicated to a certain external camera, for visually impaired people.
The Linux UVC API (e.g. using guvcview) allows enabling automatic exposure for the camera. The iOS API isExposureModeSupported unfortunately returns false for every exposure mode.
Is it a bug? Or perhaps AVFoundation doesn't support UVC exposure yet?
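For reference, this is roughly how I probe the device (a sketch; the device reference comes from my capture setup):
import AVFoundation

func logExposureSupport(for device: AVCaptureDevice) {
    let modes: [(String, AVCaptureDevice.ExposureMode)] = [
        ("locked", .locked),
        ("autoExpose", .autoExpose),
        ("continuousAutoExposure", .continuousAutoExposure),
        ("custom", .custom)
    ]
    for (name, mode) in modes {
        print("\(name): \(device.isExposureModeSupported(mode))")
    }
    print("exposure point of interest supported: \(device.isExposurePointOfInterestSupported)")
}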
Hi,
I am creating an app that can include videos or images in its data. While
@Attribute(.externalStorage)
helps with images, for AVAssets I would actually like access to the URL behind that data (it would be wasteful to load and then save the data again just to obtain a URL).
One key component is to keep all of this clean enough so that I can use (private) CloudKit syncing with the resulting model.
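A sketch of one direction I'm considering (my own assumption, not an established pattern): keep large videos as files in the app's container and store only a relative file name in the model, so an AVAsset can be created from a URL without round-tripping the Data, while images keep using .externalStorage. The trade-off is that the video file itself would then need to be synced separately (e.g. as a CKAsset) rather than riding along with the SwiftData record.
import Foundation
import SwiftData

@Model
final class MediaItem {
    @Attribute(.externalStorage) var imageData: Data?
    var videoFileName: String?   // relative to Application Support; the file is written there once

    init(imageData: Data? = nil, videoFileName: String? = nil) {
        self.imageData = imageData
        self.videoFileName = videoFileName
    }

    var videoURL: URL? {
        guard let videoFileName else { return nil }
        return URL.applicationSupportDirectory.appendingPathComponent(videoFileName)
    }
}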
All the best
Christoph
Are there any plans to give developers access to the iPhone 15 series' 24MP photo capture? I wonder whether apps other than the built-in Camera can support it.
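If third-party 24MP capture is (or becomes) supported, I assume it would go through the existing maximum-photo-dimensions API; a sketch of that mechanism (whether a 24MP size actually shows up depends on the device and format):
import AVFoundation

func configureForLargestPhoto(device: AVCaptureDevice,
                              photoOutput: AVCapturePhotoOutput,
                              delegate: AVCapturePhotoCaptureDelegate) {
    // Pick the largest photo dimensions the active format advertises.
    guard let largest = device.activeFormat.supportedMaxPhotoDimensions
        .max(by: { $0.width * $0.height < $1.width * $1.height }) else { return }

    photoOutput.maxPhotoDimensions = largest

    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = largest
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}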
Hello,
Faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then they'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I'm wondering what I could be doing wrong?
Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen). iOS version 17.0.3 (21A360). Encounter this issue on iPhone 13 Pro as well. iOS version is 17.0 (21A329)
private func setupLiDARCaptureInput() throws {
// Look up the LiDAR camera.
guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
throw ConfigurationError.lidarDeviceUnavailable
}
guard let format = (device.formats.last { format in
format.formatDescription.dimensions.width == preferredWidthResolution &&
format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
format.videoSupportedFrameRateRanges.first(where: {$0.maxFrameRate >= 60}) != nil &&
!format.isVideoBinned &&
!format.supportedDepthDataFormats.isEmpty
}) else {
throw ConfigurationError.requiredFormatUnavailable
}
guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
}) else {
throw ConfigurationError.requiredFormatUnavailable
}
// Begin the device configuration.
try device.lockForConfiguration()
// Configure the device and depth formats.
device.activeFormat = format
device.activeDepthDataFormat = depthFormat
let desc = format.formatDescription
dimensions = CMVideoFormatDescriptionGetDimensions(desc)
let duration = CMTime(value:1, timescale:CMTimeScale(60))
device.activeVideoMinFrameDuration = duration
device.activeVideoMaxFrameDuration = duration
// Finish the device configuration.
device.unlockForConfiguration()
self.device = device
print("Selected video format: \(device.activeFormat)")
print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
// Add a device input to the capture session.
let deviceInput = try AVCaptureDeviceInput(device: device)
captureSession.addInput(deviceInput)
guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
return
}
// Configure audio input - always configure audio even if isAudioEnabled is false
audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
captureSession.addInput(audioDeviceInput)
deviceSystemPressureStateObservation = device.observe(
\.systemPressureState,
options: .new
) { _, change in
guard let systemPressureState = change.newValue else { return }
print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
}
}
Here's how I'm setting up the output:
private func setupLiDARCaptureOutputs() {
// Create an object to output video sample buffers.
videoDataOutput = AVCaptureVideoDataOutput()
captureSession.addOutput(videoDataOutput)
// Create an object to output depth data.
depthDataOutput = AVCaptureDepthDataOutput()
depthDataOutput.isFilteringEnabled = false
captureSession.addOutput(depthDataOutput)
audioDeviceOutput = AVCaptureAudioDataOutput()
audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
captureSession.addOutput(audioDeviceOutput)
// Create an object to synchronize the delivery of depth and video data.
outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
outputVideoSync.setDelegate(self, queue: videoQueue)
// Enable camera intrinsics matrix delivery.
guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
}
}
The top part of my delegate implementation is as follows:
func dataOutputSynchronizer(
_ synchronizer: AVCaptureDataOutputSynchronizer,
didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
// Retrieve the synchronized depth and sample buffer container objects.
guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
print("no depth data at time \(mach_absolute_time())")
}
if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
print("no video data at time \(mach_absolute_time())")
}
return
}
print("received depth data \(mach_absolute_time())")
}
As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame.
Console output is posted as a follow-up comment (because of the character limit). I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some other runs it works perfectly, and in some other runs I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime mirroring to see the device screen and what the app is displaying (so I'm not sure whether that's causing any interference; that said, I don't see any system pressure changes either).
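One diagnostic I'm adding next (my own sketch, not part of Apple's sample): observing the session's error and interruption notifications, to see whether the session itself stops when the callbacks stop arriving.
import AVFoundation

func observeSessionProblems(for session: AVCaptureSession) -> [NSObjectProtocol] {
    let center = NotificationCenter.default
    var tokens: [NSObjectProtocol] = []
    tokens.append(center.addObserver(forName: .AVCaptureSessionRuntimeError, object: session, queue: .main) { note in
        let error = note.userInfo?[AVCaptureSessionErrorKey] as? NSError
        print("Capture session runtime error: \(String(describing: error))")
    })
    tokens.append(center.addObserver(forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main) { note in
        print("Capture session interrupted: \(String(describing: note.userInfo))")
    })
    tokens.append(center.addObserver(forName: .AVCaptureSessionInterruptionEnded, object: session, queue: .main) { _ in
        print("Capture session interruption ended")
    })
    return tokens   // keep these alive for as long as you want to observe
}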
Any help is most appreciated! Thanks.
So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30fps video feed on a MacBook Pro M2, and everything is great, until Sonoma comes out and I find myself consuming 40% CPU for the exact same workload.
So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps; the CPU consumption stays at 40%.
"Reactions" is nothing but a useless toy to please some WWDC keynote fanboys, I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery.
Now, how do I make it go away, for ever?
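For what it's worth, the only programmatic hooks I've found are read-only. A diagnostic sketch (assuming I'm reading the macOS 14 SDK right) that at least logs whether the system considers the effects enabled; the user-facing switch lives in the green camera menu bar item, not in anything an app can set:
import AVFoundation

func logReactionEffectState() {
    // Both are read-only class properties reflecting the user's Video Effects setting.
    print("reactionEffectsEnabled: \(AVCaptureDevice.reactionEffectsEnabled)")
    print("reactionEffectGesturesEnabled: \(AVCaptureDevice.reactionEffectGesturesEnabled)")
}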
Best regards
Jacob
First of all, I tried MobileVLCKit, but there is too much delay.
Then I wrote a UDPManager class; my code is below. I would be very happy if anyone has information and can point me in the right direction.
Broadcast code
ffmpeg -f avfoundation -video_size 1280x720 -framerate 30 -i "0" -c:v libx264 -preset medium -tune zerolatency -f mpegts "udp://127.0.0.1:6000?pkt_size=1316"
Live View Code (almost 0 delay)
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 1 -strict experimental -framedrop -f mpegts -vf setpts=0 udp://127.0.0.1:6000
OR
mpv udp://127.0.0.1:6000 --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1
UDPManager
import Foundation
import AVFoundation
import CoreMedia
import VideoDecoder
import SwiftUI
import Network
import Combine
import CocoaAsyncSocket
import VideoToolbox
class UDPManager: NSObject, ObservableObject, GCDAsyncUdpSocketDelegate {
private let host: String
private let port: UInt16
private var socket: GCDAsyncUdpSocket?
@Published var videoOutput: CMSampleBuffer?
init(host: String, port: UInt16) {
self.host = host
self.port = port
}
func connectUDP() {
do {
socket = GCDAsyncUdpSocket(delegate: self, delegateQueue: .global())
//try socket?.connect(toHost: host, onPort: port)
try socket?.bind(toPort: port)
try socket?.enableBroadcast(true)
try socket?.enableReusePort(true)
try socket?.beginReceiving()
} catch {
print("UDP soketi oluşturma hatası: \(error)")
}
}
func closeUDP() {
socket?.close()
}
func udpSocket(_ sock: GCDAsyncUdpSocket, didConnectToAddress address: Data) {
print("UDP Bağlandı.")
}
func udpSocket(_ sock: GCDAsyncUdpSocket, didNotConnect error: Error?) {
print("UDP soketi bağlantı hatası: \(error?.localizedDescription ?? "Bilinmeyen hata")")
}
func udpSocket(_ sock: GCDAsyncUdpSocket, didReceive data: Data, fromAddress address: Data, withFilterContext filterContext: Any?) {
if !data.isEmpty {
DispatchQueue.main.async {
self.videoOutput = self.createSampleBuffer(from: data)
}
}
}
func createSampleBuffer(from data: Data) -> CMSampleBuffer? {
var blockBuffer: CMBlockBuffer?
var status = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: UnsafeMutableRawPointer(mutating: (data as NSData).bytes),
blockLength: data.count,
blockAllocator: kCFAllocatorNull,
customBlockSource: nil,
offsetToData: 0,
dataLength: data.count,
flags: 0,
blockBufferOut: &blockBuffer)
if status != noErr {
return nil
}
var sampleBuffer: CMSampleBuffer?
let sampleSizeArray = [data.count]
status = CMSampleBufferCreateReady(
allocator: kCFAllocatorDefault,
dataBuffer: blockBuffer,
formatDescription: nil,
sampleCount: 1,
sampleTimingEntryCount: 0,
sampleTimingArray: nil,
sampleSizeEntryCount: 1,
sampleSizeArray: sampleSizeArray,
sampleBufferOut: &sampleBuffer)
if status != noErr {
return nil
}
return sampleBuffer
}
}
I didn't know how to convert the Data object to video, so I searched and found the createSampleBuffer code shown above in the UDPManager class and wanted to try it.
Then I tried to display the CMSampleBuffer with a player view, but it just shows a white screen and doesn't work.
struct SampleBufferPlayerView: UIViewRepresentable {
typealias UIViewType = UIView
var sampleBuffer: CMSampleBuffer
func makeUIView(context: Context) -> UIView {
let view = UIView(frame: .zero)
let displayLayer = AVSampleBufferDisplayLayer()
displayLayer.videoGravity = .resizeAspectFill
view.layer.addSublayer(displayLayer)
context.coordinator.displayLayer = displayLayer
return view
}
func updateUIView(_ uiView: UIView, context: Context) {
context.coordinator.sampleBuffer = sampleBuffer
context.coordinator.updateSampleBuffer()
}
func makeCoordinator() -> Coordinator {
Coordinator()
}
class Coordinator {
var displayLayer: AVSampleBufferDisplayLayer?
var sampleBuffer: CMSampleBuffer?
func updateSampleBuffer() {
guard let displayLayer = displayLayer, let sampleBuffer = sampleBuffer else { return }
if displayLayer.isReadyForMoreMediaData {
displayLayer.enqueue(sampleBuffer)
} else {
displayLayer.requestMediaDataWhenReady(on: .main) {
if displayLayer.isReadyForMoreMediaData {
displayLayer.enqueue(sampleBuffer)
print("isReadyForMoreMediaData")
}
}
}
}
}
}
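One likely contributor to the white screen (an assumption of mine, not a confirmed fix): the AVSampleBufferDisplayLayer is added with a zero frame and never resized, so even a valid sample buffer would be drawn into a zero-sized layer; and the buffers created above carry no video format description, so the layer has nothing it can actually decode and display. A sketch of a revised updateUIView(_:context:) for the SampleBufferPlayerView above that at least keeps the layer sized to its host view:
func updateUIView(_ uiView: UIView, context: Context) {
    // Keep the display layer matched to the hosting view's bounds; it was added with frame == .zero.
    context.coordinator.displayLayer?.frame = uiView.bounds

    context.coordinator.sampleBuffer = sampleBuffer
    context.coordinator.updateSampleBuffer()
}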
I then tried to use the view as follows, but I couldn't get it working. Can anyone help me?
struct ContentView: View {
// udp://@127.0.0.1:6000
@ObservedObject var udpManager = UDPManager(host: "127.0.0.1", port: 6000)
var body: some View {
VStack {
if let buffer = udpManager.videoOutput{
SampleBufferPlayerView(sampleBuffer: buffer)
.frame(width: 300, height: 200)
}
}
.onAppear(perform: {
udpManager.connectUDP()
})
}
}
Hi everyone,
I'm currently facing an issue with AVAudioPlayer in my SwiftUI project. Despite ensuring that the sound file "buttonsound.mp3" is properly added to the project's resources (I dragged and dropped it into Xcode), the application is still unable to locate the file when attempting to play it.
Here's the simplified version of the code I'm using:
import SwiftUI
import AVFoundation
struct ContentView: View {
var body: some View {
VStack {
Button("Play sound") {
playSound(named: "buttonsound", ofType: "mp3")
}
}
}
}
func playSound(named name: String, ofType type: String) {
guard let soundURL = Bundle.main.url(forResource: name, withExtension: type) else {
print("Sound file not found")
return
}
do {
let audioPlayer = try AVAudioPlayer(contentsOf: soundURL)
audioPlayer.prepareToPlay()
audioPlayer.play()
} catch let error {
print("Error playing sound: \(error.localizedDescription)")
}
}
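A sketch of a likely secondary issue (an assumption based on the code above): the AVAudioPlayer is a local variable, so it is deallocated as soon as playSound returns and playback stops immediately. Keeping a long-lived reference avoids that; the "file not found" message itself usually means the mp3 didn't make it into the app bundle, so the file's Target Membership in Xcode is also worth checking.
import AVFoundation

final class SoundPlayer {
    private var player: AVAudioPlayer?

    func play(named name: String, ofType type: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: type) else {
            print("Sound file not found")
            return
        }
        do {
            player = try AVAudioPlayer(contentsOf: url)   // retained by the class, so playback can finish
            player?.prepareToPlay()
            player?.play()
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }
}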
FeedbackID: FB13636921
I'm using /usr/sbin/screencapture -v -x -C -k -R 0,0,500,500 /path/to/a/movfile/in/a/folder/in/my/apps/sandbox/Group/Container in my app to allow users to capture screenshots and recordings.
Screenshots keep working fine on macOS Sonoma 14.4 beta (23E5196e), but video recordings no longer work.
I'm guessing the following log output has something to do with it:
default 15:01:53.151819+0100 screencapture sampleBuffer: start recording time: 3123.604833 target: 3123.474266, overshot: 0.130568
error 15:01:53.185179+0100 screencapture <private>:246:<private> Not writable url (null).!folderIsWritable == true
error 15:01:53.185236+0100 screencapture <private>:50:<private> We could not create a byte stream!
error 15:01:53.185252+0100 screencapture <private>:87:<private> NULL byte stream.
error 15:01:53.185298+0100 screencapture <private>:3479:<private> ### Err -45,
error 15:01:53.185312+0100 screencapture <private>:3814:<private> ### Err -45,
error 15:01:53.185334+0100 screencapture <<<< AVCaptureMovieFileOutput >>>> Fig assert: "status == 0 " at (AVCaptureMovieFileOutput.m:2522) - CMIOFileWritingControlTokenStartWriting (err=-45)
error 15:01:53.185374+0100 screencapture <private>:1885:<private> ### Err -67452,
error 15:01:53.185388+0100 screencapture <private>:303:<private> FigMovieFormatFileWriter::PostProcessMovie: WriteMovie() errored!!! -67452
error 15:01:53.185476+0100 screencapture <private>:4687:<private> consolidate movie fragments err : -17913
default 15:01:53.185610+0100 screencapture <<<< AVError >>>> AVLocalizedErrorWithUnderlyingOSStatus: Returning error (AVFoundationErrorDomain / -11800) status (-45)
default 15:01:53.186201+0100 screencapture didFinishRecording: No trim finish. duration: 0.000000s size: 0, error: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={AVErrorRecordingSuccessfullyFinishedKey=false, NSLocalizedDescription=The operation could not be completed, NSLocalizedFailureReason=An unknown error occurred (-45), NSURL=file:///Users/eternalstorms/Library/Group%20Containers/group.com.apple.screencapture/ScreenRecordings/3ED15EE7-A814-47A7-A398-29D5A6AD03C1.mov, NSUnderlyingError=0x6000031d80c0 {Error Domain=NSOSStatusErrorDomain Code=-45 "fLckdErr: file is locked"}}
error 15:01:53.186290+0100 screencapture recording failed. The operation could not be completed
Are there new entitlements we need for this in our apps starting with macOS Sonoma 14.4? Or is it a bug? Calling it directly from Terminal works.
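In case it helps, the workaround I'm testing (a sketch, under the assumption that the group-container path is what screencapture refuses to write to; names are illustrative): record to a temporary directory first and move the file into the container afterwards.
import Foundation

func captureRegionRecording(to destinationURL: URL) throws {
    // Record to a temp location first; the sandboxed group container appears to be rejected for movies.
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("mov")

    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/sbin/screencapture")
    process.arguments = ["-v", "-x", "-C", "-k", "-R", "0,0,500,500", tempURL.path]
    try process.run()
    process.waitUntilExit()

    // Move the finished recording into the app's group container.
    try FileManager.default.moveItem(at: tempURL, to: destinationURL)
}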
Thank you,
Matthias
I am running a modified RoomPlan app in my test environment, and I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit, because it is related to the ARSCNView. Who controls that session, and what gets processed through it? I notice that I get a lot of Session Interruptions from Sensor Failure when I am doing World Tracking, and the first one happens almost immediately.
When the room-capture delegates fire up, I start getting images via a second session that is collecting images. How do I tell, on the fly, which session is the SceneKit session and which one is the RoomCapture session when data comes through the delegate? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession buffer being different is only okay if you get your timing right. It wasn't clear from any of the documentation that there would be TWO or more ARSessions delivering data through the delegates. The books on the use of ARKit are not much help in determining the partition of responsibilities between the origins, and the buffer arrivals at the delegate functions have no clear delineation of which data is delivered through which delegate, at least none discernible from the highly fragmented documentation in the developer library. Can someone give me some guidance here? Are there sources of CLEAR documentation of what is delivered via which delegate for the various interfaces?
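One way to tell the sessions apart, assuming references to both views/sessions are held somewhere (the property names here are mine, and RoomCaptureSession only exposes its underlying arSession in recent SDKs), is identity comparison in the delegate:
import ARKit
import RoomPlan

final class SessionRouter: NSObject, ARSessionDelegate {
    weak var sceneView: ARSCNView?
    weak var roomCaptureSession: RoomCaptureSession?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if session === sceneView?.session {
            // Frames coming from the ARSCNView's own session.
        } else if session === roomCaptureSession?.arSession {
            // Frames coming from RoomPlan's capture session.
        }
    }
}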
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines.
First, after the AVAssetWriterInput is created:
frameInput.performsMultiPassEncodingIfSupported = true
Second, after the call to multiviewWriter.startWriting():
print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")
Which prints true.
This leads me to believe that the first encoding pass should proceed as normal (even though I haven't yet handled the logic for the completion of the first pass, etc.).
However, I receive this error when the code attempts to appendTaggedBuffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor:
Fatal error: Failed to append tagged buffers to multiview output
Am I missing a step? Or is the multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
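For context, this is the multi-pass bookkeeping I haven't wired up yet, sketched against the standard AVAssetWriterInput multi-pass API (it may simply be that tagged buffers don't support multi-pass at all, which would explain the failure):
import AVFoundation

func driveMultiPass(frameInput: AVAssetWriterInput,
                    queue: DispatchQueue,
                    appendBuffers: @escaping (AVAssetWriterInputPassDescription) -> Void) {
    frameInput.respondToEachPassDescription(on: queue) {
        if let pass = frameInput.currentPassDescription {
            // Append (or re-append) buffers covering pass.sourceTimeRanges,
            // then call frameInput.markCurrentPassAsFinished() once the pass is done.
            appendBuffers(pass)
        } else {
            // A nil pass description means the writer needs no further passes.
            frameInput.markAsFinished()
        }
    }
}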
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point and < another point, trigger some code.
So I'm looking at - (void)installTapOnBus:(AVAudioNodeBus)bus bufferSize:(AVAudioFrameCount)bufferSize format:(AVAudioFormat * __nullable)format block:(AVAudioNodeTapBlock)tapBlock;
Now I have frame positions calculated (predetermined before audio is scheduled I already made all necessary computations) . So I just need to fire code at certain points during playback:
[playerNode installTapOnBus:bus
bufferSize:bufferSize
format:format
block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
//Inspect current audio here and fire...
}];
[playerNode scheduleBuffer:fullbuffer
atTime:startTime
options:0
completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType)
{
// some code is here, not important to this question.
}];
The problem I'm having is figuring out at what point in the full buffer I am within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not my entire audio buffer that I scheduled).
Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results but I'd rather avoid using a timer if possible and use sample time.
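A sketch of the direction I'm currently trying, using playerTime(forNodeTime:) to convert the tap's node time into the player's own timeline (triggerFrame is a hypothetical precomputed frame position, not something from my code above):
import AVFoundation

func installEventTap(on playerNode: AVAudioPlayerNode, triggerFrame: AVAudioFramePosition) {
    playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, when in
        // Convert node time to player time: sample time relative to the start of playback.
        guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
        let startFrame = playerTime.sampleTime
        let endFrame = startFrame + AVAudioFramePosition(buffer.frameLength)
        if startFrame <= triggerFrame && triggerFrame < endFrame {
            // Fire the event associated with triggerFrame here.
        }
    }
}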
Topic: Media Technologies · SubTopic: Audio · Tags: AVAudioNode, AVAudioSession, AVAudioEngine, AVFoundation
Hi,
I just generated an HDR10 MV-HEVC file; the mediainfo output is below:
Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Codec configuration box : hvcC+lhvC
Then I generated the segment files with the command below:
mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov
Then I uploaded the segment files and prog_index.m3u8 to a web server.
I found that I cannot play the HLS stream in Safari.
The URL is http://ip/vod/prog_index.m3u8
I also checked that if I remove the Transfer characteristics : PQ tag when generating the MV-HEVC file, run the same mediafilesegmenter command, and upload the files to the web server, then the new version of the HLS stream plays fine in Safari.
Is there any way to play HLS PQ video in Safari? Thanks.
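One thing I plan to check (a guess on my part, not a confirmed fix): whether Safari needs the stream to be referenced from a multivariant (master) playlist that declares the HDR transfer function, rather than loading the media playlist directly. A sketch of such an entry, with placeholder BANDWIDTH/RESOLUTION/CODECS values rather than ones taken from my stream:
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-STREAM-INF:BANDWIDTH=25000000,RESOLUTION=1920x1080,FRAME-RATE=30.000,VIDEO-RANGE=PQ,CODECS="hvc1.2.4.L153.B0"
prog_index.m3u8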
Hi everyone!
I'm getting random crashes when I'm using the Speech Recognizer functionality in my app.
This is an old bug (reported for 8 years on the Apple Forums), and I would really appreciate it if anyone from Apple could find a fix for these crashes.
Can anyone also please help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (whether there is another native library available, or a CocoaPods library)?
Here is my code and also the crash log for it.
Code:
func startRecording() {
startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
commentTextView.textColor = .white
} else {
commentTextView.textColor = .black
}
commentTextView.isUserInteractionEnabled = false
recordingLabel.text = Constants.recording
if recognitionTask != nil {
recognitionTask?.cancel()
recognitionTask = nil
}
let audioSession = AVAudioSession.sharedInstance()
do {
try audioSession.setCategory(AVAudioSession.Category.record)
try audioSession.setMode(AVAudioSession.Mode.measurement)
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
showAlertWithTitle(message: Constants.error)
}
recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
let inputNode = audioEngine.inputNode
guard let recognitionRequest = recognitionRequest else {
fatalError(Constants.error)
}
recognitionRequest.shouldReportPartialResults = true
recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
var isFinal = false
if result != nil {
self.commentTextView.text = result?.bestTranscription.formattedString
isFinal = (result?.isFinal)!
}
if error != nil || isFinal {
self.audioEngine.stop()
inputNode.removeTap(onBus: 0)
self.recognitionRequest = nil
self.recognitionTask = nil
self.startStopRecordBtn.isEnabled = true
}
})
let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) {[weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in // CRASH HERE
self?.recognitionRequest?.append(buffer)
}
audioEngine.prepare()
do {
try audioEngine.start()
} catch {
showAlertWithTitle(message: Constants.error)
}
}
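A sketch of a guard I'm thinking of adding (an assumption about this crash, since the log is attached rather than shown inline): the most common installTap(onBus:) crashes I know of come from installing a second tap on a bus that already has one, or installing a tap while the input node's format is invalid (0 Hz / 0 channels, e.g. during a route change).
import AVFoundation
import Speech

func installRecognitionTap(on engine: AVAudioEngine,
                           request: SFSpeechAudioBufferRecognitionRequest) -> Bool {
    let inputNode = engine.inputNode
    let format = inputNode.outputFormat(forBus: 0)

    // Never stack taps: remove any previous tap before installing a new one.
    inputNode.removeTap(onBus: 0)

    guard format.sampleRate > 0, format.channelCount > 0 else {
        print("Input node has no valid format; not installing tap")
        return false
    }

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    return true
}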
Here is the crash log:
Thank you very much for reading this!
After watching the session video "Build a great Lock Screen camera capture experience", I'm still unclear about the UI.
So do developers need to provide a whole new UI in the extension? The main UI cannot be repurposed?
Hello,
We have a music app that reads MPMediaItems.
We fetch items using MPMediaQuery, but we realized that some tracks downloaded from Apple Music were fetched too. Not all downloaded tracks, only those that were played recently.
Of course, since these tracks are protected with DRM, we can't play them in our player.
It's weird to get them in our query, because we added predicates specifically so that we don't fetch protected assets or iCloud items:
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyHasProtectedAsset)
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyIsCloudItem)
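For reference, a minimal sketch of how those predicates are attached to the query (matching the filtering described above):
import MediaPlayer

func fetchPlayableLibraryItems() -> [MPMediaItem] {
    let query = MPMediaQuery.songs()
    query.addFilterPredicate(MPMediaPropertyPredicate(value: false,
                                                      forProperty: MPMediaItemPropertyHasProtectedAsset))
    query.addFilterPredicate(MPMediaPropertyPredicate(value: false,
                                                      forProperty: MPMediaItemPropertyIsCloudItem))
    return query.items ?? []
}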
To be sure, we made a second check on each item we've fetched
extension MPMediaItem {
public func isValid() -> Bool {
return self.assetURL != nil && !self.isCloudItem && !self.hasProtectedAsset
}
}
But we still get these items, and their hasProtectedAsset attribute always returns false.
I don't know if it's a bug, but since we can't detect these items as Apple Music downloaded tracks, we can't either:
filter them out so they are not added to our application library
OR
switch to an MPMusicPlayerController.applicationMusicPlayer to allow the user to play them
Topic: Media Technologies · SubTopic: General · Tags: Apple Music API, Media Player, Media Library, AVFoundation