I am currently working on a macOS app that creates very large video files with up to an hour of content. However, generating the images and adding them to the AVAssetWriter leads to VTDecoderXPCService using 16+ GB of memory and kernel_task using 40+ GB (the most I saw was 105 GB).
It seems the generated video is not streamed to disk, but rather kept in memory until it is written out all at once. How can I force it to stream the data to disk while the encoding is happening?
By the way, my app itself consistently uses around 300 MB of memory, so I don't think I have a memory leak here.
Here is the relevant code:
func analyse()
{
    self.videoWritter = try! AVAssetWriter(outputURL: outputVideo, fileType: AVFileType.mp4)
    let writeSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoSize.width,
        AVVideoHeightKey: videoSize.height,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 10_000_000,
        ]
    ]
    self.videoWritter!.movieFragmentInterval = CMTimeMake(value: 60, timescale: 1)
    self.frameInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: writeSettings)
    self.frameInput?.expectsMediaDataInRealTime = true
    self.videoWritter!.add(self.frameInput!)

    if self.videoWritter!.startWriting() == false {
        print("Could not write file: \(self.videoWritter!.error!)")
        return
    }
}

func writeFrame(frame: Frame)
{
    /* some more code to determine the correct frame presentation time stamp */
    let newSampleBuffer = self.setTimeStamp(frame.sampleBuffer, newTime: self.nextFrameStartTime!)
    self.frameInput!.append(newSampleBuffer)
    /* some more code */
}

func setTimeStamp(_ sample: CMSampleBuffer, newTime: CMTime) -> CMSampleBuffer {
    var timing: CMSampleTimingInfo = CMSampleTimingInfo(
        duration: CMTime.zero,
        presentationTimeStamp: newTime,
        decodeTimeStamp: CMTime.zero
    )
    var newSampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(
        allocator: kCFAllocatorDefault,
        sampleBuffer: sample,
        sampleTimingEntryCount: 1,
        sampleTimingArray: &timing,
        sampleBufferOut: &newSampleBuffer
    )
    return newSampleBuffer!
}
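For reference, the pattern generally suggested for sources that produce frames faster than real time (not something specific to this project) is to leave expectsMediaDataInRealTime off and let the input pull frames via requestMediaDataWhenReady(on:using:), appending only while isReadyForMoreMediaData is true so encoded data can drain to disk. A rough sketch, where nextSampleBuffer() is a hypothetical stand-in for the frame-producing code above:
// Sketch: pull-style appends with back-pressure (nextSampleBuffer() is hypothetical).
let writerQueue = DispatchQueue(label: "video.writer")
self.frameInput!.requestMediaDataWhenReady(on: writerQueue) {
    while self.frameInput!.isReadyForMoreMediaData {
        guard let sampleBuffer = nextSampleBuffer() else {
            // No more frames: close the input and finalize the file.
            self.frameInput!.markAsFinished()
            self.videoWritter!.finishWriting { /* handle completion / error */ }
            return
        }
        self.frameInput!.append(sampleBuffer)
    }
}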
My specs:
MacBook Pro 2018
32GB Memory
macOS Big Sur 11.1
Hi,
for the implementation of an audio player with signed URLs, I need to be able to set an authorization header on the requests made for an AVURLAsset.
This works, but not over AirPlay when trying to stream multiple songs in a queue.
For each item I do:
let headerFields: [String: String] = ["Authorization": getIdToken()!]
super.init(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headerFields])
But only the first two songs in the queue actually get this authorization header sent along; somehow it is removed for subsequent songs.
Any ideas on how I can fix this?
thanks,
Thomas
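For reference, the per-item setup described above, with the header option applied to a fresh asset for every queued song, looks roughly like this (a minimal sketch using the same option key from the post; songURLs and getIdToken() are assumed from the surrounding code):
// Sketch: build each queue item from its own asset carrying the authorization header.
func makeItem(for url: URL) -> AVPlayerItem {
    let headerFields: [String: String] = ["Authorization": getIdToken()!]
    let asset = AVURLAsset(url: url, options: ["AVURLAssetHTTPHeaderFieldsKey": headerFields])
    return AVPlayerItem(asset: asset)
}

let player = AVQueuePlayer(items: songURLs.map(makeItem(for:)))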
I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below).
The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload."
I've added a short snippet from Console logs around the time of the audio skip/drop out. The more complete logs are at this gist:
https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105
I've also opened a Feedback Assistant ticket (FB9037528):
https://feedbackassistant.apple.com/feedback/9037528
Does anyone know what could be causing this issue?
Thanks for any help.
Cheers,
Flux aka Andy.
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)
System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled
USB interface: Denon DJ DS1
Snippet of Console logs
error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
HostApplicationDisplayID = "com.apple.Music";
cause = Unknown;
deadline = 2615019;
"input_device_source_list" = Unknown;
"input_device_transport_list" = USB;
"input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
"io_buffer_size" = 512;
"io_cycle" = 1;
"is_prewarming" = 0;
"is_recovering" = 0;
"issue_type" = overload;
lateness = "-535";
"output_device_source_list" = Unknown;
"output_device_transport_list" = USB;
"output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup.
To distribute the audio stream to other parts of the program, I’m sending the buffers to a Combine publisher similar to the following:
private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()
I’m starting to suspect I have some kind of concurrency or memory management issue with the buffers, because when consuming the buffers elsewhere I’m getting a range of crashes that suggest some internal pointer in a buffer is NULL (specifically, I’m seeing crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer).
These crashes are in production and fairly rare — I can’t reproduce them locally.
I never modify the audio buffers, only read them for analysis.
My question is: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
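For reference, a defensive variant that deep-copies each buffer before publishing it, on the assumption that the engine may reuse the tap's buffer after the callback returns (an assumption, not confirmed behaviour), would look roughly like this:
// Sketch: copy the tap's buffer before handing it to the Combine pipeline.
// Assumes a Float32 PCM tap format (floatChannelData is non-nil only for float formats).
import AVFoundation
import Combine

let engine = AVAudioEngine()
let subject = PassthroughSubject<AVAudioPCMBuffer, Never>()

engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    guard let copy = AVAudioPCMBuffer(pcmFormat: buffer.format, frameCapacity: buffer.frameLength) else { return }
    copy.frameLength = buffer.frameLength
    if let src = buffer.floatChannelData, let dst = copy.floatChannelData {
        for channel in 0..<Int(buffer.format.channelCount) {
            dst[channel].update(from: src[channel], count: Int(buffer.frameLength))
        }
    }
    subject.send(copy)
}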
I know that if you want background audio from AVPlayer, you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly.
I have that all squared away and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController.
If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the home screen, you can't perform the detachment operation; otherwise the PiP display fails.
On an iPad if PiP is active and you lock your device you continue to get background audio playback. However on an iPhone if PiP is active and you lock the device the audio pauses.
However if PiP is inactive and you lock the device the audio will pause and you have to manually tap play on the lockscreen controls. This is the same between iPad and iPhone devices.
My questions are:
Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode.
The player node's output format needs to be something that the next node can handle, and as far as I understand, most nodes can handle a canonical format.
The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports.
So the following:
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];

[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];
throws an exception:
Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""
I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer.
Since different platforms have different canonical formats, I imagine there should be some library way of doing this. Otherwise each developer will have to redefine it for each platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes.
I could not find any constant or function that can produce such a format, ASBD, or settings dictionary.
The smartest way I could think of, which does not work:
AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];
Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType which are deprecated.
So what is the correct way to do this?
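For what it's worth, one approach that might fit here (an assumption on my part, in Swift for brevity): AVAudioFormat's "standard" initializer produces the deinterleaved, native-endian Float32 format that AVAudioEngine nodes generally prefer, so it could serve as the conversion target for AVAudioConverter. The utterance below is assumed to come from the synthesis code in the post:
// Sketch: convert synthesizer buffers into the engine's preferred "standard" float format.
import AVFoundation

let sourceFormat = AVAudioFormat(settings: utterance.voice.audioFileSettings)!
let standardFormat = AVAudioFormat(standardFormatWithSampleRate: sourceFormat.sampleRate,
                                   channels: sourceFormat.channelCount)!
let converter = AVAudioConverter(from: sourceFormat, to: standardFormat)!

func converted(_ buffer: AVAudioPCMBuffer) throws -> AVAudioPCMBuffer {
    // Same sample rate on both sides, so the simple buffer-to-buffer convert variant applies.
    let output = AVAudioPCMBuffer(pcmFormat: standardFormat, frameCapacity: buffer.frameCapacity)!
    try converter.convert(to: output, from: buffer)
    return output
}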
Title says it all.
I have an AVPlayerViewController in my app playing video and my largest source of errors is NSURLErrorDomain Code=-1008. The underlying error they provide is Error Domain=CoreMediaErrorDomain Code=-12884 "(null)". I couldn't find what the error implies or is caused by - osstatus.com also does not have a reference to it.
Hi,
I'm having trouble saving user presets in the plugin for Audio Units. This works well for saving the user presets in the Host, but I get an error when trying to save them in the plugin.
I'm not using a parameter tree, but instead using the fullState's getter and setter for saving and retrieving a dictionary with the state.
With some simplified parameters it looks something like this:
var gain: Double = 0.0
var frequency: Double = 440.0

private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency
        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        // Set the Audio Unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}
This works perfectly well for storing user presets when saved in the host. When trying to save them in the plugin to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM.
I could not find any documentation about what the missing key is, or how I can get around this. Any ideas?
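One thing that might be worth checking (an assumption on my part, not documented behaviour I can point to): the superclass's fullState dictionary carries standard bookkeeping keys describing the unit and preset, and returning a dictionary that replaces them entirely may be what trips the "missing key" error. A sketch that merges the custom state into super's dictionary instead, using the same gain/frequency properties from the code above:
override var fullState: [String: Any]? {
    get {
        // Start from whatever keys the superclass provides, then add the custom state.
        var state = super.fullState ?? [:]
        state["myPresetKey"] = ["gain": gain, "frequency": frequency]
        return state
    }
    set {
        super.fullState = newValue
        let custom = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        gain = custom["gain"] as? Double ?? 0.0
        frequency = custom["frequency"] as? Double ?? 440.0
    }
}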
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronising two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded LPCM-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
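For reference, the start sequence described above boils down to something like this (a minimal sketch; firstVideoPTS stands in for the presentation timestamp of the first decoded image):
// Sketch: both renderers attached to the synchronizer, which starts at the first video timestamp.
let synchronizer = AVSampleBufferRenderSynchronizer()
let displayLayer = AVSampleBufferDisplayLayer()
let audioRenderer = AVSampleBufferAudioRenderer()
synchronizer.addRenderer(displayLayer)
synchronizer.addRenderer(audioRenderer)

// firstVideoPTS is hypothetical: the PTS of the first enqueued video sample buffer.
synchronizer.setRate(1.0, time: firstVideoPTS)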
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However on some network streams, on iOS, the audio and video aren't synchronized, with a time difference that seems to increase with time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read, about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
So, any help / hints on this sync problem will be greatly appreciated! :)
I just upgraded to iOS 17 and it looks like AVSpeechSynthesizer is now broken.
I noticed that when feeding certain strings to AVSpeechUtterance, it just flat out stops after speaking only a portion of the string.
For example, I fed it a string of approx. 1200 words and it speaks up to around 300 words or so and then just stops. The synthesizer delegate method -speechSynthesizer:didFinishSpeechUtterance: is called when this happens, as if this is supposed to be the end, even though it is not even close to being finished.
Was working fine on iOS 16.
FWIW I create the AVSpeechUtterance with -initWithString:
When you initialize AVSpeechSynthesizer as a View property in a SwiftUI project in Xcode 15 with the iOS 17 simulator, you get some messages in the console:
Failed to get sandbox extensions
Query for com.apple.MobileAsset.VoiceServicesVocalizerVoice failed: 2
#FactoryInstall Unable to query results, error: 5
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
Unable to list voice folder
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
Unable to list voice folder
When you try to run an utterance inside a Button, as synthesizer.speak(AVSpeechUtterance(string: "iOS 17 broke TextToSpeech")), you get an endless stream of warnings that repeat on and on in the console, like this:
AddInstanceForFactory: No factory registered for id <CFUUID 0x60000024f200> F8BB1C28-BAE8-11D6-9C31-00039315CD46
Cannot find executable for CFBundle 0x600003b2cd20 </Library/Developer/CoreSimulator/Volumes/iOS_21A328/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/TextToSpeechMauiSupport.framework> (not loaded)
Failed to load first party audio unit from TextToSpeechMauiSupport.framework
Could not instantiate audio unit. Error=Error Domain=NSOSStatusErrorDomain Code=-3000 "(null)"
Could not instantiate audio unit. Error=Error Domain=NSOSStatusErrorDomain Code=-3000 "(null)"
Could not instantiate audio unit. Error=Error Domain=NSOSStatusErrorDomain Code=-3000 "(null)"
Could not instantiate audio unit. Error=Error Domain=NSOSStatusErrorDomain Code=-3000 "(null)"
Could not instantiate audio unit. Error=Error Domain=NSOSStatusErrorDomain Code=-3000 "(null)"
Couldn't find audio unit for request SSML Length: 40, Voice: [AVSpeechSynthesisProviderVoice 0x600002127e30] Name: Samantha, Identifier: com.apple.voice.compact.en-US.Samantha, Supported Languages (
"en-US"
), Age: 0, Gender: 0, Size: 0, Version: (null)
VoiceProvider: Could not start synthesis for request SSML Length: 40, Voice: [AVSpeechSynthesisProviderVoice 0x600002127e30] Name: Samantha, Identifier: com.apple.voice.compact.en-US.Samantha, Supported Languages (
"en-US"
), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600003709680] iOS 17 broke TextToSpeech language: en-US footprint: compact rate: 0.500000 pitch: 1.000000 volume: 1.000000
Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha
CPU is under pressure (more than 100%). AVSpeechSynthesizer doesn't speak.
All works fine on iOS 16.
The code of View:
import SwiftUI
import AVFoundation

struct ContentView: View {
    let synthesizer = AVSpeechSynthesizer()

    var body: some View {
        VStack {
            Button {
                synthesizer.speak(AVSpeechUtterance(string: "iOS 17 broke TextToSpeech"))
            } label: {
                Text("speak")
            }
            .buttonStyle(.borderedProminent)
        }
        .padding()
    }
}

#Preview {
    ContentView()
}
On a real device, nothing happens at all.
The same happens in my production app. I have so many crashes related to TextToSpeech on iOS 17. What's going on?
Hi there, I'm having some trouble with AVAudioMixerNode only working when there is a single input, and outputting silence or a very quiet buzzing when more than one input node is connected. My setup has voice processing enabled, input going to a sink, and N source nodes going to the main mixer node, which goes to the output node. In all cases I am connecting nodes in the graph with the same declared format: 48 kHz, 1-channel Float32 PCM.
This is working great for 1 source node, but as soon as I add a second it breaks. I can reproduce this behaviour in the SignalGenerator sample, when the same format is used everywhere. Again, it'll work fine with 1 source node even in this configuration, but add another and there's silence.
Am I doing something wrong with formats here? Is this expected? As I understood it, with voice processing on and a mixer node in use, I should be able to use my own format essentially everywhere in my graph?
My SignalGenerator modified repro example follows:
import Foundation
import AVFoundation
// True replicates my real app's behaviour, which is broken.
// You can remove one source node connection
// to make it work even when this is true.
let showBrokenState: Bool = true
// SignalGenerator constants.
let frequency: Float = 440
let amplitude: Float = 0.5
let duration: Float = 5.0
let twoPi = 2 * Float.pi
let sine = { (phase: Float) -> Float in
    return sin(phase)
}
let whiteNoise = { (phase: Float) -> Float in
    return ((Float(arc4random_uniform(UINT32_MAX)) / Float(UINT32_MAX)) * 2 - 1)
}

// My "application" format.
let format: AVAudioFormat = .init(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 48000,
                                  channels: 1,
                                  interleaved: true)!
// Engine setup.
let engine = AVAudioEngine()
let mainMixer = engine.mainMixerNode
let output = engine.outputNode
try! output.setVoiceProcessingEnabled(true)
let outputFormat = engine.outputNode.inputFormat(forBus: 0)
let sampleRate = Float(format.sampleRate)
let inputFormat = format
var currentPhase: Float = 0
let phaseIncrement = (twoPi / sampleRate) * frequency
let srcNodeOne = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = sine(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi {
            currentPhase -= twoPi
        }
        if currentPhase < 0.0 {
            currentPhase += twoPi
        }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}
let srcNodeTwo = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = whiteNoise(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi {
            currentPhase -= twoPi
        }
        if currentPhase < 0.0 {
            currentPhase += twoPi
        }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}
engine.attach(srcNodeOne)
engine.attach(srcNodeTwo)
engine.connect(srcNodeOne, to: mainMixer, format: inputFormat)
engine.connect(srcNodeTwo, to: mainMixer, format: inputFormat)
engine.connect(mainMixer, to: output, format: showBrokenState ? inputFormat : outputFormat)
// Put the input node to a sink just to match the formats and make VP happy.
let sink: AVAudioSinkNode = .init { timestamp, numFrames, data in
    .zero
}
engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: showBrokenState ? inputFormat : outputFormat)
mainMixer.outputVolume = 0.5
try! engine.start()
CFRunLoopRunInMode(.defaultMode, CFTimeInterval(duration), false)
engine.stop()
Hello,
I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work in the simulator.
inputNode has a 0ch, 0kHz input format,
connecting input node to any node or installing a tap on it fails systematically.
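For reference, a minimal check that surfaces this failing state before installing the tap might look like this (a sketch, assuming engine is the AVAudioEngine in question):
// Sketch: bail out early when the simulator reports an unusable input format.
func installInputTapIfPossible(on engine: AVAudioEngine) {
    let format = engine.inputNode.inputFormat(forBus: 0)
    guard format.channelCount > 0, format.sampleRate > 0 else {
        print("Input node reports an unusable format: \(format)")
        return
    }
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // process buffer
    }
}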
What we tested:
Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
Nothing works on iOS simulator 17.0 on Xcode 15.
Everything works fine on iOS 17.0 device with Xcode 15.
More details on this here: https://github.com/Fesongs/InputNodeFormat
Any idea on this? Something I'm missing?
Thanks for your help 🙏
Tom
PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer so I'm also trying here 😉
Hello,
I used kAudioDevicePropertyDeviceIsRunningSomewhere to check if an internal or external microphone is being used.
My code works well for the internal microphone and for microphones connected with a cable.
External microphones connected via Bluetooth, however, are not reporting their status.
The status is always requested successfully, but it is always reported as inactive.
Main relevant parts of my code:
static inline AudioObjectPropertyAddress
makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress address = {
        selector,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID,
                            AudioObjectPropertySelector selector)
{
    AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
    UInt32 prop;
    UInt32 propSize = sizeof(prop);
    OSStatus const status =
        AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
    if (status != noErr) {
        return 0; //this line never gets executed in my tests. The call above always succeeds, but it always gives back "false" status.
    }
    return static_cast<BOOL>(prop == 1);
}
...
__block BOOL microphoneActive = NO;
iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
    if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
        microphoneActive = YES;
        *stop = YES;
    }
});
What could cause this and how could it be fixed?
Thank you for your help in advance!
Are there any plans to give developers access to the 24 MP photo capture on part of the iPhone 15 series? I wonder whether apps other than the built-in Camera can support it.
I've added a listener block for camera notifications. This works as expected: the listener block is invoked when the camera is activated/deactivated.
However, when I call CMIOObjectRemovePropertyListenerBlock to remove the listener block, though the call succeeds, camera notifications are still delivered to the listener block.
Since the header file states that this function "Unregisters the given CMIOObjectPropertyListenerBlock from receiving notifications when the given properties change," I'd assume that once it is called, no more notifications would be delivered?
Sample code:
#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>
#import <AVFoundation/AVCaptureDevice.h>
int main(int argc, const char * argv[]) {
    AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    OSStatus status = -1;
    CMIOObjectID deviceID = 0;
    CMIOObjectPropertyAddress propertyStruct = {0};
    propertyStruct.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere;
    propertyStruct.mScope = kAudioObjectPropertyScopeGlobal;
    propertyStruct.mElement = kAudioObjectPropertyElementMain;

    deviceID = (UInt32)[camera performSelector:NSSelectorFromString(@"connectionID") withObject:nil];

    CMIOObjectPropertyListenerBlock listenerBlock = ^(UInt32 inNumberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        NSLog(@"Callback: CMIOObjectPropertyListenerBlock invoked");
    };

    status = CMIOObjectAddPropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), listenerBlock);
    if(noErr != status) {
        NSLog(@"ERROR: CMIOObjectAddPropertyListenerBlock() failed with %d", status);
        return -1;
    }

    NSLog(@"Monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);
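    // Note: the removal below passes dispatch_get_main_queue(), while the listener above was registered on a global queue.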
    status = CMIOObjectRemovePropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_main_queue(), listenerBlock);
    if(noErr != status) {
        NSLog(@"ERROR: 'AudioObjectRemovePropertyListenerBlock' failed with %d", status);
        return -1;
    }

    NSLog(@"Stopped monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);
    return 0;
}
Compiling and running this code outputs:
Monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Stopped monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Note the last two log messages, showing that the CMIOObjectPropertyListenerBlock is still invoked even though CMIOObjectRemovePropertyListenerBlock has successfully been invoked.
Am I just doing something wrong here? Or is the API broken?
I am following this Apple article on how to set up an AVPlayer. The only difference is that I am using @Observable instead of an ObservableObject with @Published vars.
Using @Observable results in the following error: Cannot find '$isPlaying' in scope
If I remove the "$" symbol I get a bit more insight:
Cannot convert value of type 'Bool' to expected argument type 'Published<Bool>.Publisher'
If I change my class to an ObservableObject, it works fine. Is there any way to get this to work with @Observable?
Following this Apple Article, I copied their code over for observePlayingState().
The only difference is that I am using @Observable instead of ObservableObject with @Published for the isPlaying var.
We get a bit more insight after removing the $ symbol, leading to a more telling error of: Cannot convert value of type 'Bool' to expected argument type 'Published.Publisher'
Is there any way to get this working with @Observable?
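One pattern that might work here (an assumption on my part, not taken from the article): since @Observable types don't expose Combine publishers the way @Published properties do, isPlaying can instead be driven from the player's own KVO publisher for timeControlStatus. A sketch:
// Sketch: mirror the player's state into an @Observable model via the KVO publisher.
import AVFoundation
import Combine
import Observation

@Observable
final class PlayerModel {
    let player = AVPlayer()
    private(set) var isPlaying = false

    @ObservationIgnored private var cancellable: AnyCancellable?

    init() {
        cancellable = player.publisher(for: \.timeControlStatus)
            .map { $0 == .playing }
            .receive(on: DispatchQueue.main)
            .sink { [weak self] in self?.isPlaying = $0 }
    }
}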
I'm trying to decode MV-HEVC videos on visionOS, but I noticed that the normal APIs for doing this with AVPlayer (AVPlayerVideoOutput and some new methods for setting the videoOutput on AVPlayer) are not available in visionOS 1.0.
They're only available in iOS 17.2 and macOS 14.2; the header claims visionOS 1.1, but the Apple documentation doesn't say that anywhere. Does this mean there's really no way to work on this functionality at this time? This seems like a major omission, given we can't even target visionOS 1.1 with the beta version of Xcode. Can you please move this API forward into visionOS 1.0?