Hello,
I'm trying to test the example code for AudioDriverKit, and the "Open User Client" button returns a kIOReturnNotPermitted error. I'm using the standard entitlements that come with the example, and I'm signing to run locally. SIP is disabled and I've enabled developer mode with:
systemextensionsctl developer on
When the "Open User Client" button is pressed, I see the following calls into the dext:
SimpleAudioDriverUserClient::init
SimpleAudioDriverUserClient::Start
SimpleAudioDriverUserClient::Stop
SimpleAudioDriverUserClient::free
None return any errors.
This seems like a permission issue, but since this is the unmodified example code, it should work out of the box. Any clues as to why this is failing?
Thank you!
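One thing worth double-checking (a common cause of kIOReturnNotPermitted, though not necessarily the cause here): when an app opens a dext's user client, the app itself needs a user-client entitlement naming the dext. A sketch of what that app-side entitlements entry looks like, with a placeholder bundle identifier:

```xml
<!-- App-side entitlements sketch; the bundle ID below is a placeholder
     and must match the dext's actual bundle identifier. -->
<key>com.apple.developer.driverkit.userclient-access</key>
<array>
    <string>com.example.SimpleAudioDriver</string>
</array>
```

Alternatively, the dext itself can carry com.apple.developer.driverkit.allow-any-userclient-access, which lets any app open its user client.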
Hi,
I'm currently porting an old driver to the DriverKit framework.
This driver uses the properties mechanism to change the state of the device from user space. I would like to keep using it, but I can't get the overridden SetProperties function to be called on the driver side.
Is it possible for SetProperties to be called when a client calls IOConnectSetCFProperties on a given service, or do I have to use the ExternalMethod mechanism? Or would it be possible to pass a dictionary structure that mimics the SetProperties call?
Thank you for your answer.
Best Regards,
Andrzej
Hi,
The AudioDriverKit example code from WWDC 2021 has stopped working correctly for me. It builds and loads, but when I try to record the sine wave, stopIO is called immediately after startIO. This happens on both a MacBook Air and a Mac mini running macOS 12.0.1.
I discovered this because my own audio driver also stopped working correctly, with exactly the same problem described above. Earlier I tested both the example code and my driver on the macOS 12 public beta, and everything was fine.
Below are the console logs from when I start recording:
default 12:54:19.161261+0100 kernel StartIO: Start IO: device 2
default 12:54:19.161274+0100 kernel StartIO: Start IO: device 2
default 12:54:19.161092+0100 coreaudiod HALS_IOEngine2::StartIO: starting IO on device SimpleAudioDevice-UID
default 12:54:19.161279+0100 kernel StartIO: Start IO: device 2
default 12:54:19.161138+0100 coreaudiod HALS_IOEngine2::_StartIO(435) on Context 443 state: Prewarm: 0 Play: 0 State: Stopped
default 12:54:19.161374+0100 CAReportingService CAReportingService.mm:157 service type 9 set for reporter 897648164879
default 12:54:19.161437+0100 coreaudiod HALS_IOEngine2::_StartIO(435) on Context 443 state: Prewarm: 0 Play: 1 State: Running
default 12:54:19.161761+0100 coreaudiod HALS_Device::_GetCombinedVolumeScalar: client 246 (pid 517) is not present and has a combined volume scalar is 1.000000
default 12:54:19.161798+0100 coreaudiod HALS_Device::_GetCombinedVolumeScalar: client 246 (pid 517) is not present and has a combined volume scalar is 1.000000
default 12:54:19.161808+0100 coreaudiod HALS_IOUADevice::HandlePropertiesChanged: Object: 431: SimpleAudioDevice-UID
default 12:54:19.162034+0100 coreaudiod 'goin', 'glob', 0
default 12:54:19.162580+0100 coreaudiod CAReportingClient.mm:508 message {
"device_is_aggregate" = 0;
"input_avail_phys_formats" = "{ [16/48000/1 lpcm], [16/44100/1 lpcm] }";
"input_avail_virt_formats" = "{ [32/48000/1 lpcm], [32/44100/1 lpcm] }";
"input_bits_per_channel" = 32;
"input_bytes_per_frame" = 4;
"input_bytes_per_packet" = 4;
"input_channels_per_frame" = 1;
"input_device_source_list" = Unknown;
"input_device_transport_list" = BuiltIn;
"input_device_uid_list" = "SimpleAudioDevice-UID";
"input_format_id" = lpcm;
"input_frames_per_packet" = 1;
"input_num_tap_streams" = 0;
"input_scalar_volume" = "1.000000";
"io_buffer_size" = 15;
message = StartHardware;
"output_num_tap_streams" = 0;
"output_scalar_volume" = "1.000000";
"sample_rate" = 48000;
}: (
897648164879
)
error 12:54:19.163011+0100 coreaudiod 206515 HALS_IOUAUCDriver.cpp:500 Throwing Exception: ret != kIOReturnSuccess Failed to register event link
error 12:54:19.163100+0100 coreaudiod 206515 HALS_IOUAEngine.cpp:157 Failed to register io thread!
default 12:54:19.163950+0100 kernel StopIO: Stop IO: device 2
default 12:54:19.163968+0100 kernel StopIO: Stop IO: device 2
default 12:54:19.163974+0100 kernel StopIO: Stop IO: device 2
default 12:54:19.163166+0100 coreaudiod HALS_IOContext_Legacy_Impl::IOWorkLoop: failed to register io thread
error 12:54:19.163310+0100 coreaudiod 206515 HALS_IOUAEngine.cpp:180 Throwing Exception: error != 0 Failed to disassociate event link 22
error 12:54:19.163482+0100 coreaudiod 206515 HALS_IOUAEngine.cpp:187 Failed to unregister io thread!
default 12:54:19.163613+0100 coreaudiod HALS_IOEngine2::StopIO: stopping IO on device SimpleAudioDevice-UID
default 12:54:19.163744+0100 coreaudiod HALS_IOEngine2::_StopIO(435) on Context 443 state: Prewarm: 0 Play: 1 State: Running
default 12:54:19.164082+0100 coreaudiod HALS_IOUADevice::HandlePropertiesChanged: Object: 431: SimpleAudioDevice-UID
default 12:54:19.164134+0100 coreaudiod 'goin', 'glob', 0
default 12:54:19.164289+0100 coreaudiod CAReportingClient.mm:480 stopping (
897648164879
)
default 12:54:19.164542+0100 coreaudiod CAReportingClient.mm:508 message {
"session_duration" = "0.003519058227539062";
}: (
897648164879
)
error 12:54:19.165611+0100 Audacity HALC_ProxyIOContext::IOWorkLoop: the server failed to start, Error: 0x77686174
default 12:54:19.165045+0100 coreaudiod IO Stopped Context 443 after 0 frames.
default 12:54:19.165199+0100 coreaudiod HALS_IOContext_Legacy_Impl::IOThreadEntry: 443 SimpleAudioDevice-UID (SimpleAudioDevice-UID): stopping with error 2003329396
default 12:54:19.165220+0100 coreaudiod HALB_PowerAssertion::Release: releasing power assertion ID 34859 of type 'PreventUserIdleSystemSleep' with name: 'com.apple.audio.context443.preventuseridlesleep' on behalf of 517
error 12:54:19.165390+0100 coreaudiod HALS_IOContext_Legacy_Impl::StartIOThread: the IO thread failed to start, Error: 2003329396 (what)
Any advice?
Thank you and regards
Video: https://developer.apple.com/videos/play/wwdc2021/10190/
When I try to run the sample code from the SimpleAudioExtension workshop, I get a validation error for OSSystemExtensionErrorDomain. Is something missing from the instructions that is required to avoid this error? If not, how would one work around it?
Hello!
There are a few similar open-source projects that implement support for creating a macOS guest virtual machine on an M1 Mac host, for example:
https://github.com/jspahrsummers/Microverse
https://github.com/KhaosT/MacVM
But both of them share a common problem: any audio from the guest VM (system sounds, or YouTube videos in Safari or Firefox) is played on the host approximately 0.3 seconds late.
Is there any way to remove this latency, so that it's possible to hear real-time audio from the guest VM?
Regards, Eugene.
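For a sense of scale, a back-of-the-envelope calculation relating that delay to buffered audio frames (illustrative numbers only; the actual buffering in those projects may differ):

```python
# Rough arithmetic relating audio latency to buffered frames.
# Illustrative only -- not measurements from Microverse or MacVM.
sample_rate = 48_000          # frames per second
latency_seconds = 0.3         # observed guest -> host delay

buffered_frames = round(sample_rate * latency_seconds)
print(buffered_frames)        # 14400 frames queued somewhere in the path

# By comparison, a typical low-latency IO buffer:
io_buffer_frames = 512
io_buffer_latency_ms = io_buffer_frames / sample_rate * 1000
print(round(io_buffer_latency_ms, 2))  # 10.67 ms per buffer
```

A 0.3 s delay therefore corresponds to roughly thirty such IO buffers of queuing somewhere between the guest's virtual audio device and the host's output.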
I'm using USBDriverKit to write an audio driver for a High Speed USB device.
In an attempt to understand the difference in latency between a DriverKit extension and a kernel extension, I'm dispatching individual isochronous microframes, each of which for this device accounts for a duration of 125 µs, or 6 samples at 48 kHz.
I don't yet know what kind of latency I'm actually getting, but I was surprised to see a high CPU usage of ~11% on a 512 GB M1 Mac mini running Big Sur 11.6.
That's 8000 IsochIO() calls and 8000 completion callbacks per second.
Instruments.app tells me that most of my time (60%) is being spent inside mach_msg, as part of a remote procedure call.
Multiple questions occur to me:
Is this normal? Should I expect lower CPU usage?
Isn't mach_msg blocking? Shouldn't CPU usage be low?
Don't low-latency audio folks avoid things like remote procedure calls?
Is seeking low-latency throughput with USBDriverKit futile?
Does Monterey's AudioDriverKit enable lower latency, or is it just a convenience API?
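The per-call overhead described above can be sketched with simple arithmetic. This is illustrative only, and assumes one IsochIO() call per 125 µs microframe, as in the setup described:

```python
# Arithmetic behind the 8000 calls/second figure for per-microframe dispatch.
microframe_us = 125                       # high-speed USB microframe duration
sample_rate = 48_000

calls_per_second = 1_000_000 // microframe_us
print(calls_per_second)                   # 8000 IsochIO() calls per second

samples_per_microframe = sample_rate * microframe_us // 1_000_000
print(samples_per_microframe)             # 6 samples at 48 kHz

# Batching n microframes per IsochIO() call divides the RPC rate by n,
# at the cost of n * 125 µs of added buffering:
for n in (1, 8, 64):
    print(n, calls_per_second // n, "calls/s,", n * microframe_us, "µs buffered")
```

Each cross-process round trip has a fixed cost, so batching trades a small amount of latency for a large reduction in mach_msg traffic; whether that trade is acceptable depends on the latency budget.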
Hello,
I'm rewriting an audio driver that was created with IOKit into a new one based on AudioDriverKit.
I would like to block the possibility of connecting multiple devices to the computer, so that the driver supports just one device. In IOKit this was done by overriding the IOAudioEngine::getLocalUniqueID() method, but I can't find anything similar in AudioDriverKit (IOUserAudioDevice). Do you know how to do this?
Best Regards
I have created virtual drivers from the Soundpusher and Null device packages, and the virtual microphone and speaker work well with audio applications (Google Meet, Zoom, etc.).
Now, my problem is that I get silence in capture/playout when my virtual driver application is stopped or exited. The virtual driver application reads data from the actual mic and plays data to the actual speaker.
To handle this situation, I should either create the devices dynamically or set their visibility (hidden/visible) dynamically:
Create the virtual mic/speaker, or make them visible, if my virtual driver application is running.
Delete the virtual mic/speaker, or hide them, if my virtual driver application is not running, stopped, or exited.
Since I am new to this driver technology on macOS, I am struggling to add these features to my driver. It would be great if anyone could explain how to create and delete the devices dynamically, or set their visibility dynamically, and point me to any sample code that's available. Thanks in advance.
Apple presented how to create audio drivers with DriverKit at WWDC 2021.
Video presentation:
https://developer.apple.com/videos/play/wwdc2021/10190
Code sample:
https://developer.apple.com/documentation/audiodriverkit/creating_an_audio_device_driver
We need a similar approach for cameras. The audio driver mentioned above can be compiled with the new Xcode 13 beta, so that approach is viable. We need to develop a custom driver for a camera. Is there a solution in DriverKit for cameras? Is one planned? Or should we develop a driver from scratch using USBDriverKit?
Any suggestions are appreciated.
At 3:38-4:00 in the session video, Baek San Chang seems to say that AudioDriverKit will not be allowed to be used for virtual audio devices.
Video: https://developer.apple.com/videos/play/wwdc2021/10190/
Here is what he says:
Keep in mind that the sample code presented is purely for demonstrative purposes and creates a virtual audio driver that is not associated with a hardware device, and so entitlements will not be granted for that kind of use case.
For virtual audio driver, where device is all that is needed, the audio server plugin driver model should continue to be used.
The mention of sample code is a little confusing: does he mean the entitlements for hardware access won't be granted for a virtual device? That would seem obvious.
But if he means the entitlements for DriverKit extensions (com.apple.developer.driverkit and com.apple.developer.driverkit.allow-any-userclient-access) won't be granted for virtual audio devices, and this is why AudioServerPlugins should still be used, then that's another story.
Are we allowed to use AudioDriverKit Extension for Virtual Devices?
The benefit of having the extension bundled with the app rather than requiring an installer is a significant reason to use an extension if allowed.
I need to create a virtual audio driver that presents a virtual microphone and a virtual speaker to the user. The user can then select these virtual endpoints in 3rd party audio communication apps like Skype, Zoom etc. The virtual audio driver implementation then routes audio between physical devices (selected by the user in the virtual driver userspace control app) and the virtual devices.
It is a requirement that the virtual audio driver and its control app can be published to the Apple app store for users to download and install on their machine without any problems.
How should I go about this?
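Whichever driver model ends up being used, the routing core of such a virtual device usually reduces to ring buffers between the physical and virtual endpoints. A minimal, platform-independent sketch of that idea (illustrative only; a real driver would work with the IO buffers provided by the audio server, not Python lists):

```python
# Minimal single-producer/single-consumer ring buffer: the typical
# structure for shuttling audio between a physical input and a
# virtual microphone endpoint. Illustrative sketch, not driver code.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [0.0] * capacity
        self.capacity = capacity
        self.read_idx = 0
        self.write_idx = 0

    def write(self, frames):
        """Producer side: called with frames captured from the physical mic."""
        for f in frames:
            nxt = (self.write_idx + 1) % self.capacity
            if nxt == self.read_idx:       # buffer full: drop the frame
                return
            self.buf[self.write_idx] = f
            self.write_idx = nxt

    def read(self, n):
        """Consumer side: fills the virtual mic's IO buffer; pads with
        silence on underrun so client apps keep running."""
        out = []
        for _ in range(n):
            if self.read_idx == self.write_idx:
                out.append(0.0)            # underrun -> silence
            else:
                out.append(self.buf[self.read_idx])
                self.read_idx = (self.read_idx + 1) % self.capacity
        return out

rb = RingBuffer(8)
rb.write([1.0, 2.0, 3.0])
print(rb.read(5))  # [1.0, 2.0, 3.0, 0.0, 0.0]
```

The silence-on-underrun behavior in read() is what keeps apps like Skype or Zoom running smoothly when the control app briefly starves the buffer; the capacity chosen directly bounds the added latency.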