Context
I trained a LoRA adapter for Apple’s on-device language model using the Foundation Models Adapter Training Toolkit v0.2.0 on macOS 26 beta 4. Although training completes successfully, loading the resulting .fmadapter package fails with:
Adapter is not compatible with the current system base model.
What I’ve Observed
- Hard-coded Signature: In export/constants.py, the toolkit sets:
BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
- Metadata Injection: The export_fmadapter.py script writes this value into the adapter’s metadata (a sketch for dumping that value back out of an exported package follows this list):
self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE
- Compatibility Check: At runtime, the Foundation Models framework compares the adapter’s baseModelSignature against the OS’s system model signature and reports compatibleAdapterNotFound on a mismatch, without ever revealing the signature it expected.
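For completeness, here is the minimal sketch I use to confirm what the exporter actually wrote into the package. It assumes the .fmadapter package is a directory containing a metadata.json with a baseModelSignature key; both the file name and the key string are my guesses from reading export_fmadapter.py, not documented behavior:

```python
# Hypothetical inspection script: dumps the base-model signature that
# export_fmadapter.py embedded into an exported adapter package.
import json
from pathlib import Path

def embedded_signature(adapter_path: str) -> str:
    # Assumed package layout: <name>.fmadapter/metadata.json
    meta_file = Path(adapter_path) / "metadata.json"
    with meta_file.open() as f:
        metadata = json.load(f)
    # Assumed key name; MetadataKeys.BASE_SIGNATURE may map to something else.
    return metadata["baseModelSignature"]

if __name__ == "__main__":
    # For a v0.2.0 export this should print the hard-coded value above.
    print(embedded_signature("my-adapter.fmadapter"))
```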
Questions
- Signature Generation - What exactly does the toolkit hash to derive BASE_SIGNATURE? Is it a straight SHA-1 of base-model.pt, or is there an additional transformation? (A quick local test for the straight-SHA-1 case is sketched after this list.)
- Recomputing for Beta 4 - Is there a way to locally compute the correct signature for the macOS 26 beta 4 system model?
- Toolkit Updates - Will Apple release Adapter Training Toolkit v0.3.0 with an updated BASE_SIGNATURE for beta 4, or is there a supported way to generate the signature myself in the meantime?
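For question 1, this is the naive test that can be run locally. It is only a sketch of the straight-SHA-1 hypothesis: it assumes the signature is a plain SHA-1 over the raw bytes of the toolkit’s base-model.pt, and the checkpoint path is a placeholder for wherever your toolkit install keeps it:

```python
# Minimal sketch for testing whether BASE_SIGNATURE is a plain SHA-1 of the
# base model checkpoint's file bytes. If the printed digest matches, the
# signature is a straight file hash; if it differs, the toolkit must apply
# some extra transformation (e.g., hashing a canonicalized state dict).
import hashlib
from pathlib import Path

BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"

def sha1_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha1()
    with path.open("rb") as f:
        # Stream in chunks so a multi-gigabyte checkpoint never loads into RAM.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    candidate = sha1_of(Path("base-model.pt"))  # assumed checkpoint location
    print(candidate, "matches" if candidate == BASE_SIGNATURE else "differs")
```

Even a negative result here would be useful, since it rules out a straight file hash and narrows the question to what transformation the toolkit applies before hashing.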
Any guidance on how the Foundation Models framework derives and verifies the base model signature, or on how to regenerate it for beta 4, would be greatly appreciated.