Is there a way to create a macOS filter driver for the IONVMeFamily driver that would enable sending NVMe Admin commands (such as Security Send, Security Receive, and Firmware Download)? This approach would allow us to reuse the I/O path provided by the family driver.
Drivers
Understand the role of drivers in bridging the gap between software and hardware, ensuring smooth hardware functionality.
Hi, I’m developing my own PCIe Ethernet DriverKit extension. My PCIe Ethernet card sits in a Razer Core X enclosure and connects to my MacBook via Thunderbolt 3.
The Problem:
I launch the driver's host application, which submits the activate-system-extension request, then go to System Settings -> Privacy & Security, click "Allow" in the Extensions section, and enter my password. Immediately after "Allow" is clicked, the peripherals malfunction: I lose the trackpad, the keyboard, and all of the Thunderbolt ports. They only regain functionality after unplugging and re-plugging the device.
Results I expected:
The user approves the driver extension, all peripherals keep working normally, and the Ethernet card works.
Has anyone encountered this problem? Maybe something is wrong in my OSSystemExtensionRequestDelegate implementation, but I have no idea how to fix it.
Please help.
My Xcode version is Version 15.3 (15E204a).
Thanks
Running from Xcode:
Unchecking Scheme -> Run -> "Debug executable" makes it run fine.
But then I don't get the debug log; how can I fix this?
Hi Team,
I want to build a system where I can mirror the screen of an iPhone connected over USB and control it from a web browser.
Can anyone help me?
Thanks ,
Mukta
Hello Everyone,
I am trying to develop a DriverKit extension for a RAID system using the PCIDriverKit and SCSIControllerDriverKit frameworks. The driver can detect the vendor ID and device ID, but before communicating with the RAID system I would like to simulate a virtual volume backed by a memory block to talk to macOS.
In UserInitializeController(), I allocated 512 KB for an IOBufferMemoryDescriptor* volumeBuffer, but Map() fails when I try to map memory for volumeBuffer.
result = ivars->volumeBuffer->Map(
0, // Options: Use default
0, // Offset: Start of the buffer
ivars->volumeSize, // Length: Must not exceed buffer size
0, // Flags: Use default
nullptr, // Address space: Default address space
&mappedAddress // Output parameter
);
Log("Memory mapped completed at address: 0x%llx", mappedAddress); // this line never run
The Log for Map completed never run, just restart to run the Start() and makes this Driver re-run again and again, in the end, the driver eat out macOS's memory and system halt.
Are the parameters for Map() error? or I should not put this code in UserInitializeController()?
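As an alternative, I'm considering skipping Map() entirely and just asking the buffer for its address inside the dext with GetAddressRange(). Below is a rough, untested sketch of what I mean, using the same volumeBuffer/volumeSize ivars (the 512 KB size) as above:
// Rough sketch: create the backing buffer and obtain a driver-side pointer to it
// without calling Map(). volumeBuffer / volumeSize are the ivars mentioned above.
kern_return_t ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionInOut,
                                                     ivars->volumeSize,
                                                     0,
                                                     &ivars->volumeBuffer);
if (ret != kIOReturnSuccess) {
    return ret;
}

IOAddressSegment range = {};
ret = ivars->volumeBuffer->GetAddressRange(&range);
if (ret != kIOReturnSuccess) {
    return ret;
}

// range.address / range.length now describe the buffer within the dext's own
// address space, so it can serve directly as the backing store of the virtual volume.
uint8_t *volumeBytes = reinterpret_cast<uint8_t *>(range.address);
(void)volumeBytes;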
Any help is appreciated!
Thanks in advance.
Charles
Hi,
Our CoreAudio server plugin supports different clock sources. A switch might result in a change of the selectable sample rates (and other settings). On a clock-source switch, the plugin reconfigures the set of available kAudioStreamPropertyAvailablePhysicalFormats and announces the change via AudioServerPlugInHostInterface::PropertiesChanged(). However, at least Audio MIDI Setup seems not to update its UI; the changes only appear after selecting another device and then re-selecting the device of interest. (Latest macOS, M4 Mac mini.)
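For reference, this is roughly how we announce the change today (a simplified sketch; gHost, gDeviceID, and gStreamID stand in for our own bookkeeping of the host reference and object IDs, and the exact set of flagged properties is illustrative):
// Simplified sketch of the notification we send after reconfiguring the format list.
#include <CoreAudio/AudioServerPlugIn.h>

extern AudioServerPlugInHostRef gHost;  // stored when the HAL calls our Initialize()
extern AudioObjectID gDeviceID;         // our device object
extern AudioObjectID gStreamID;         // the stream whose available formats changed

static void AnnounceFormatListChanged(void)
{
    // Tell the HAL that the stream's format-related properties changed.
    const AudioObjectPropertyAddress streamChanges[] = {
        { kAudioStreamPropertyAvailablePhysicalFormats,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain },
        { kAudioStreamPropertyPhysicalFormat,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain },
    };
    gHost->PropertiesChanged(gHost, gStreamID,
                             sizeof(streamChanges) / sizeof(streamChanges[0]),
                             streamChanges);

    // Also flag the device-level list of selectable sample rates.
    const AudioObjectPropertyAddress deviceChange = {
        kAudioDevicePropertyAvailableNominalSampleRates,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain };
    gHost->PropertiesChanged(gHost, gDeviceID, 1, &deviceChange);
}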
Is this a bug? Or is our CoreAudio server plugin required to indicate the change in the list of available audio formats differently?
Thanks!
VirtIO provides macOS VM users on Intel with integrations such as Shared Folders, Shared Clipboard, and drag-and-drop of files.
After updating the VM to macOS 15.4, VirtIO is no longer available, and the functionality listed above no longer works.
Please fix it.
How can I get the TestFlight invitation code?
"Bu beta şu anda yeni testçileri kabul etmiyor"
Lütfen bu hatayı düzeltin
My back camera freezes after the first frame is displayed on the screen. The issue is present in all apps that use the back camera. The front camera has the same issue, although it works in FaceTime and WhatsApp but not in the native Camera app.
I'm developing a CarPlay app and encountered an inconsistent behavior with template detection.
When I present a CPActionSheetTemplate and then print the presentedTemplate property, it returns nil.
However, when I present a CPAlertTemplate, the presentedTemplate property correctly returns the template object.
This inconsistency is causing issues in my app where I need to check if there's already a presented template before showing another one to avoid conflicts.
Why does CPActionSheetTemplate not get detected in presentedTemplate while CPAlertTemplate does? Is this intended behavior or a bug?
Any guidance on how to properly detect if a CPActionSheetTemplate is currently presented would be greatly appreciated.
We have developed the driver for the ProCapture video capture card based on PCIDriverKit. The app can communicate with the driver through the UserClient API. Currently there is an issue where, with a small probability, starting the app causes a driver crash. However, the crash stack trace does not point to our code but appears to be within the PCIDriverKit framework. We have spent several weeks debugging but still cannot identify the root cause of the crash. Could you please review the crash logs and suggest any methods to help pinpoint the issue?
com.magewell.ProCaptureDriver-2025-09-15-153522.ips
com.magewell.ProCaptureDriver-2025-09-15-082500.ips
Hello everyone,
I'm developing a CarPlay app and am trying to test it with the dock on the right side of the screen, as is standard for right-hand drive vehicles like those in Japan.
Currently, the CarPlay Simulator always displays the dock on the left, and I can't find an option to change its position. This is important for ensuring a proper user experience for my target market.
Has anyone figured out how to configure the simulator for RHD layouts? Any guidance on how to move the dock to the right would be greatly appreciated.
Thanks in advance for your help!
We have a management application that enables and disables security for an external PCIe-based storage device using a kernel extension (the SCSI Architecture Model family for external USB mass storage devices, which deals with IOSCSIBlockCommandsDevice, IOSCSIPeripheralDeviceType00, and IOBlockStorageService).
The issue we are facing: when we unlock the device using the security code (which has already been set), we then relink the device using a vendor-unique command (RelinkBridge). Even though both commands succeed, the device does not actually get relinked, so we are not able to use the device for storage.
If someone has faced a similar issue, or has any pointers on how to resolve it, that would be helpful.
Thanks in advance.
1) The circumstances:
iPadOS 18 or later, a USB still-imaging gadget; 1-3 gadgets may be connected simultaneously via a USB hub; a proprietary DExt;
the UserClient app runs on developers' iPads or is distributed via TestFlight.
1A) Variation A: the UserClient app has the Background Modes attribute [Enable external communications].
1B) Variation B: "active USB hub" vs. "passive USB hub".
1C) Variation C: "Console app is logging the iPad" vs. "Console app is not started".
The term "zombie" below refers to the following issue:
- after plug-in, the Console app logs "IOUsbUserInterface::init" and "::start";
- the UserClient never receives the corresponding callback from IOServiceAddMatchingNotification (my registration code is sketched after this list);
- further IOKit calls ("teardown", "restart", re-enumeration of connected gadgets) do not reveal the zombie (while they do see another simultaneously connected gadget);
- unplugging the gadget logs "IOUsbUserInterface::stop".
2) The situation when the UserClient is in the foreground:
Everything is fine.
Note: on macOS, everything (the same DExt and UserClient code) works fine in both the background and the foreground.
3) The situation when the UserClient is in the background (beneath, e.g., Files or Safari):
3A) With variation A enabled, a physical plug-in causes a zombie in roughly 1 of 20 cases (satisfactory).
3B) With variation A disabled, a physical plug-in causes a zombie in roughly 1 of 4 cases (not satisfactory).
3C) Variations B and C reduce zombies by roughly 30% or more.
3D) Console logs filtered on "IOAccessory" (roughly, USB power) always look the same to my eye.
4) From App Store review:
"The app declares support for external-accessory in the UIBackgroundModes"..."The external accessory background mode is intended for apps that communicate with hardware accessories through the External Accessory framework."..."Additionally, the app must be authorized by MFi,"
5) My wish/ask:
5A) A clue to resolving the "zombie plug-in issue in the background" in a technical manner.
For example, a straightforward way to elevate the UserClient's "privileges" in the background, similar to [Enable external communication]...
Or: which Console logs should I add or watch to reveal potential bugs in my DExt or UserClient?
5B) A way to persuade the App Store to accept (without MFi) a UserClient app with [Enable external communication].
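As referenced in item 1 above, here is roughly how the UserClient registers for the matching notification (a simplified sketch; "MyImagingDriver" is a placeholder for the DExt's actual service name):
#include <IOKit/IOKitLib.h>

static IONotificationPortRef gNotifyPort;
static io_iterator_t gIterator = IO_OBJECT_NULL;

// Called for every newly matched gadget; also used once to drain/arm the iterator.
static void DeviceMatched(void *refcon, io_iterator_t iterator)
{
    io_service_t service;
    while ((service = IOIteratorNext(iterator)) != IO_OBJECT_NULL) {
        // Open a user client to the DExt here (IOServiceOpen, etc.).
        IOObjectRelease(service);
    }
}

static bool RegisterForGadgetNotifications(void)
{
    gNotifyPort = IONotificationPortCreate(kIOMainPortDefault);
    if (gNotifyPort == NULL) {
        return false;
    }
    CFRunLoopAddSource(CFRunLoopGetMain(),
                       IONotificationPortGetRunLoopSource(gNotifyPort),
                       kCFRunLoopDefaultMode);

    // "MyImagingDriver" is a placeholder for the DExt's IOService name.
    CFMutableDictionaryRef matching = IOServiceNameMatching("MyImagingDriver");
    kern_return_t kr = IOServiceAddMatchingNotification(gNotifyPort,
                                                        kIOFirstMatchNotification,
                                                        matching, // reference consumed
                                                        DeviceMatched,
                                                        NULL,
                                                        &gIterator);
    if (kr != KERN_SUCCESS) {
        return false;
    }
    // Drain the iterator once to arm the notification and to pick up gadgets
    // that are already attached at registration time.
    DeviceMatched(NULL, gIterator);
    return true;
}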
I have a USB device with three interfaces:
Vendor-Specific Bulk
CDC Serial Control
CDC Serial Data
To configure the vendor specific bulk endpoints I need to send vendor specific control requests to endpoint 0. I'm using libusb for this task. As long as the interfaces for the CDC serial port are present I get an access error when trying to send vendor control requests.
If I disable these CDC interfaces, I can send vendor control requests without any problems.
Is this by design, or is there any possibility to send vendor control requests to the USB device while a CDC driver is active?
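For reference, this is roughly the request that fails while the CDC interfaces are present (a minimal sketch; the VID/PID, interface number, and request values are placeholders):
// Minimal libusb sketch of the failing vendor request (placeholder VID/PID and request values).
#include <libusb-1.0/libusb.h>
#include <cstdio>

int main()
{
    libusb_context *ctx = nullptr;
    if (libusb_init(&ctx) != 0) {
        return 1;
    }

    // Placeholder VID/PID for the device under test.
    libusb_device_handle *handle = libusb_open_with_vid_pid(ctx, 0x1234, 0x5678);
    if (handle == nullptr) {
        libusb_exit(ctx);
        return 1;
    }

    // Interface 0 is the vendor-specific bulk interface in this configuration.
    libusb_claim_interface(handle, 0);

    // Vendor-specific control request to endpoint 0 (device recipient).
    unsigned char data[8] = {0};
    int rc = libusb_control_transfer(handle,
                                     LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE | LIBUSB_ENDPOINT_OUT,
                                     0x01,   // bRequest (placeholder)
                                     0x0000, // wValue
                                     0x0000, // wIndex
                                     data, sizeof(data),
                                     1000);  // timeout in ms
    std::printf("control transfer result: %d (%s)\n", rc,
                rc < 0 ? libusb_error_name(rc) : "OK");

    libusb_release_interface(handle, 0);
    libusb_close(handle);
    libusb_exit(ctx);
    return 0;
}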
Hello everyone,
I am migrating a legacy KEXT to a DriverKit (DEXT) architecture. While the DEXT itself is working correctly, I am completely blocked by a code signing issue when trying to establish the UserClient connection from our SwiftUI management app.
Project Goal & Status:
Our DEXT (com.accusys.Acxxx.driver) activates successfully (systemextensionsctl list confirms [activated enabled]).
The core functionality is working (diskutil list shows the corresponding disk device node).
The Core Problem: The userclient-access Signing Error
To allow the app to connect to the DEXT, the com.apple.developer.driverkit.userclient-access entitlement is required in the app's .entitlements file.
However, as soon as this entitlement is added, the build fails.
Both automatic and manual signing fail with the same error:
`Provisioning profile ... doesn't match the entitlements file's value for the ... userclient-access entitlement.`
This build failure prevents the generation of an .app bundle, making it impossible to inspect the final entitlements with codesign.
What We've Confirmed:
The necessary capabilities (like DriverKit Communicates with Drivers) are visible and enabled for our App ID on the developer portal.
The issue persists on a clean system state and on the latest macOS Sequoia 15.7.1.
Our Research and Hypothesis:
We have reviewed the official documentation "Diagnosing issues with entitlements" (TN3125).
According to the documentation, a "doesn't match" error implies a discrepancy between the entitlements file and the provisioning profile.
Given that we have tried both automatic and manual profiles (after enabling the capability online), our hypothesis is that the provisioning profile generation process on Apple's backend is not correctly including the approved userclient-access entitlement into the profile file itself. The build fails because Xcode correctly detects this discrepancy.
Our Questions:
Did we misunderstand a step in the process, or is the issue not with the entitlement request at all? Alternatively, are there any other modifications we can make to successfully connect our App to the DEXT and trigger NewUserClient?
Thank you for any guidance.
I'm trying to implement a virtual serial port driver for my ham radio projects, which require emulating some serial port devices, and I need a "backend" that translates the commands received by the virtual serial port into network-based communications. I think the best way to do that is to subclass IOUserSerial? Based on the available docs for this class (https://developer.apple.com/documentation/serialdriverkit/iouserserial), I've done the basic implementation below. When the driver is loaded I can see something like tty.serial-1000008DD in /dev, I can use picocom to do I/O on the virtual serial port, and I see TxDataAvailable() gets called every time I type a character in picocom.
The first problem is that when TxDataAvailable() is called, the TX buffer is all zeros, so although the driver knows that some data has arrived from picocom, it cannot actually see that data in either the Tx or Rx buffer.
Secondly, I couldn't figure out how to notify the system that there is data available to send back to picocom. I call RxDataAvailable(), but nothing appears in picocom, and RxFreeSpaceAvailable() never gets called back. So I think I must be doing something wrong somewhere. I'd really appreciate it if anyone could point out how I should fix this, many thanks!
VirtualSerialPortDriver.cpp:
constexpr int bufferSize = 2048;
using SerialPortInterface = driverkit::serial::SerialPortInterface;
struct VirtualSerialPortDriver_IVars
{
IOBufferMemoryDescriptor *ifmd, *rxq, *txq;
SerialPortInterface *interface;
uint64_t rx_buf, tx_buf;
bool dtr, rts;
};
bool VirtualSerialPortDriver::init()
{
bool result = false;
result = super::init();
if (result != true)
{
goto Exit;
}
ivars = IONewZero(VirtualSerialPortDriver_IVars, 1);
if (ivars == nullptr)
{
goto Exit;
}
kern_return_t ret;
ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionInOut, bufferSize, 0, &ivars->rxq);
if (ret != kIOReturnSuccess) {
goto Exit;
}
ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionInOut, bufferSize, 0, &ivars->txq);
if (ret != kIOReturnSuccess) {
goto Exit;
}
IOAddressSegment ioaddrseg;
ivars->rxq->GetAddressRange(&ioaddrseg);
ivars->rx_buf = ioaddrseg.address;
ivars->txq->GetAddressRange(&ioaddrseg);
ivars->tx_buf = ioaddrseg.address;
return true;
Exit:
return false;
}
kern_return_t
IMPL(VirtualSerialPortDriver, HwActivate)
{
kern_return_t ret;
ret = HwActivate(SUPERDISPATCH);
if (ret != kIOReturnSuccess) {
goto Exit;
}
// Loopback, set CTS to RTS, set DSR and DCD to DTR
ret = SetModemStatus(ivars->rts, ivars->dtr, false, ivars->dtr);
if (ret != kIOReturnSuccess) {
goto Exit;
}
Exit:
return ret;
}
kern_return_t
IMPL(VirtualSerialPortDriver, HwDeactivate)
{
kern_return_t ret;
ret = HwDeactivate(SUPERDISPATCH);
if (ret != kIOReturnSuccess) {
goto Exit;
}
Exit:
return ret;
}
kern_return_t
IMPL(VirtualSerialPortDriver, Start)
{
kern_return_t ret;
ret = Start(provider, SUPERDISPATCH);
if (ret != kIOReturnSuccess) {
return ret;
}
IOMemoryDescriptor *rxq_, *txq_;
ret = ConnectQueues(&ivars->ifmd, &rxq_, &txq_, ivars->rxq, ivars->txq, 0, 0, 11, 11);
if (ret != kIOReturnSuccess) {
return ret;
}
IOAddressSegment ioaddrseg;
ivars->ifmd->GetAddressRange(&ioaddrseg);
ivars->interface = reinterpret_cast<SerialPortInterface*>(ioaddrseg.address);
SerialPortInterface &intf = *ivars->interface;
ret = RegisterService();
if (ret != kIOReturnSuccess) {
goto Exit;
}
TxFreeSpaceAvailable();
Exit:
return ret;
}
void
IMPL(VirtualSerialPortDriver, TxDataAvailable)
{
SerialPortInterface &intf = *ivars->interface;
// Loopback
// FIXME consider wrapped case
size_t tx_buf_sz = intf.txPI - intf.txCI;
void *src = reinterpret_cast<void *>(ivars->tx_buf + intf.txCI);
// char src[] = "Hello, World!";
void *dest = reinterpret_cast<void *>(ivars->rx_buf + intf.rxPI);
memcpy(dest, src, tx_buf_sz);
intf.rxPI += tx_buf_sz;
RxDataAvailable();
intf.txCI = intf.txPI;
TxFreeSpaceAvailable();
Log("[TX Buf]: %{public}s", reinterpret_cast<char *>(ivars->tx_buf));
Log("[RX Buf]: %{public}s", reinterpret_cast<char *>(ivars->rx_buf));
// dmesg confirms both buffers are all-zero
Log("[TX] txPI: %d, txCI: %d, rxPI: %d, rxCI: %d, txqoffset: %d, rxqoffset: %d, txlogsz: %d, rxlogsz: %d",
intf.txPI, intf.txCI, intf.rxPI, intf.rxCI, intf.txqoffset, intf.rxqoffset, intf.txqlogsz, intf.rxqlogsz);
}
void
IMPL(VirtualSerialPortDriver, RxFreeSpaceAvailable)
{
Log("RxFreeSpaceAvailable() called!");
}
kern_return_t IMPL(VirtualSerialPortDriver,HwResetFIFO){
Log("HwResetFIFO() called with tx: %d, rx: %d!", tx, rx);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwSendBreak){
Log("HwSendBreak() called!");
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramUART){
Log("HwProgramUART() called, BaudRate: %u, nD: %d, nS: %d, P: %d!", baudRate, nDataBits, nHalfStopBits, parity);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramBaudRate){
Log("HwProgramBaudRate() called, BaudRate = %d!", baudRate);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramMCR){
Log("HwProgramMCR() called, DTR: %d, RTS: %d!", dtr, rts);
ivars->dtr = dtr;
ivars->rts = rts;
kern_return_t ret = kIOReturnSuccess;
Exit:
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver, HwGetModemStatus){
*cts = ivars->rts;
*dsr = ivars->dtr;
*ri = false;
*dcd = ivars->dtr;
Log("HwGetModemStatus() called, returning CTS=%d, DSR=%d, RI=%d, DCD=%d!", *cts, *dsr, *ri, *dcd);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramLatencyTimer){
Log("HwProgramLatencyTimer() called!");
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramFlowControl){
Log("HwProgramFlowControl() called! arg: %u, xon: %d, xoff: %d", arg, xon, xoff);
kern_return_t ret = kIOReturnSuccess;
Exit:
return ret;
}
I am porting a working kernel extension IOKit driver to a DriverKit system extension. Our device is a PCI device accessed through Thunderbolt. The change from IOPCIFamily to PCIDriverKit has some differences in approach, though.
Namely, in IOKit / IOPCIFamily, this was the correct way to become Bus Leader:
mPCIDevice->setBusLeadEnable(true); // setBusMasterEnable(..) deprecated in OS 12.4
but now, PCIDriverKit's IOPCIDevice does not have that function. Instead I am doing the following:
// Set Bus Leader and Memory Space enable
uint16_t commandRegister = 0;
ivars->mPCIDevice->ConfigurationRead16(kIOPCIConfigurationOffsetCommand, &commandRegister);
commandRegister |= (kIOPCICommandBusLead | kIOPCICommandMemorySpace);
ivars->mPCIDevice->ConfigurationWrite16(kIOPCIConfigurationOffsetCommand, commandRegister);
But I am not convinced this is working (I am still experiencing unexpected errors when attempting to DMA from our device, using the same steps that work for the kernel extension).
The only hint I can find in the online documentation is here, which reads:
Note
The endpoint driver is responsible for enabling the Memory Space Enable and Bus Master Enable settings each time it configures the PCI device. When a crash occurs, or when the system unloads your driver, the system disables these features.
...but that does not state directly how to enable bus leader status. What is the "PCIDriverKit approved" way to become bus leader?
Is there a way to verify/confirm that a device is bus leader? (This would be helpful to prove that bus leadership is not the issue for DMA errors, as well as to confirm that bus leadership was granted).
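For what it's worth, the only sanity check I've come up with so far is reading the command register back after writing it, to confirm that the Bus Lead and Memory Space bits actually stick (a small sketch reusing the ivars from above; Log() stands in for whatever logging macro is handy, and of course this doesn't prove that DMA itself will work):
// Read the command register back to confirm the bits we just set are present.
uint16_t verifyRegister = 0;
ivars->mPCIDevice->ConfigurationRead16(kIOPCIConfigurationOffsetCommand, &verifyRegister);

const uint16_t required = (kIOPCICommandBusLead | kIOPCICommandMemorySpace);
if ((verifyRegister & required) != required) {
    // The bits did not stick; bail out before attempting any DMA.
    Log("Command register 0x%x is missing Bus Lead / Memory Space enable", verifyRegister);
}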
Thanks in advance!
Hello,
I'm developing a custom AU plugin that needs to connect to a DriverKit driver. Using the DriverKit sample (https://github.com/DanBurkhardt/DriverKitUserClientSample.git), I was able to get a standalone app to connect to the driver successfully. However, the AU plugin, running in a sandboxed host, isn't establishing the connection. I’ve already added the app sandbox entitlement, but it hasn’t helped. Any advice on what might be missing?
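For context, this is roughly how the standalone app opens the connection today (a simplified sketch; the service name and user client type are placeholders for our driver's actual values). The same sequence is what fails to establish the connection when run inside the sandboxed AU host.
#include <IOKit/IOKitLib.h>

// Returns an open connection to the dext, or IO_OBJECT_NULL on failure.
static io_connect_t OpenDriverConnection(void)
{
    // "MyAudioDriver" is a placeholder for the dext's IOService name.
    io_service_t service = IOServiceGetMatchingService(kIOMainPortDefault,
                                                       IOServiceNameMatching("MyAudioDriver"));
    if (service == IO_OBJECT_NULL) {
        return IO_OBJECT_NULL;
    }

    io_connect_t connection = IO_OBJECT_NULL;
    // The type argument (0 here) selects which user client class the dext instantiates.
    kern_return_t kr = IOServiceOpen(service, mach_task_self(), 0, &connection);
    IOObjectRelease(service);
    return (kr == KERN_SUCCESS) ? connection : IO_OBJECT_NULL;
}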
TIA!