Foundation Models Adapter Training Toolkit v0.2.0 LoRA Adapter Incompatible with macOS 26 Beta 4 Base Model

Context

I trained a LoRA adapter for Apple’s on-device language model using the Foundation Models Adapter Training Toolkit v0.2.0 on macOS 26 beta 4. Although training completes successfully, loading the resulting .fmadapter package fails with:

Adapter is not compatible with the current system base model.

What I’ve Observed

  • Hard-coded Signature: In export/constants.py, the toolkit sets:
BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
  • Metadata Injection: The export_fmadapter.py script writes this value into the adapter’s metadata (a quick way to read it back out of an exported package is sketched after this list):
self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE
  • Compatibility Check: At runtime, the Foundation Models framework compares the adapter’s baseModelSignature against the OS’s system model signature, and reports compatibleAdapterNotFound if they don’t match—without revealing the expected signature.
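
A minimal sketch (not from the toolkit) for reading those recorded values back out of an exported package, assuming the layout shown later in this thread where metadata.json sits at the top level of the .fmadapter bundle; the path below is just an example:

  # inspect_adapter.py - hypothetical helper, not part of the toolkit
  import json
  from pathlib import Path

  # An .fmadapter package is a directory; metadata.json sits at its top level.
  adapter_path = Path("foundation-lab.fmadapter")
  with open(adapter_path / "metadata.json") as f:
      metadata = json.load(f)

  # The values the framework inspects when loading the adapter.
  print("adapterIdentifier:  ", metadata["adapterIdentifier"])
  print("baseModelSignature: ", metadata["baseModelSignature"])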

Questions

  1. Signature Generation - What exactly does the toolkit hash to derive BASE_SIGNATURE? Is it a straight SHA-1 of base-model.pt, or is there an additional transformation? (A quick way to check locally is sketched after this list.)
  2. Recomputing for Beta 4 - Is there a way to locally compute the correct signature for the macOS 26 beta 4 system model?
  3. Toolkit Updates - Will Apple release Adapter Training Toolkit v0.3.0 with an updated BASE_SIGNATURE for beta 4, or is there an alternative workaround to generate it myself?
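
As a concrete way to test the "straight SHA-1" hypothesis from question 1, here is a minimal sketch, assuming you can locate the base-model.pt checkpoint bundled with the toolkit (the checkpoint path below is hypothetical, and whether the OS system model is hashed the same way is exactly what is unclear):

  # check_signature.py - hypothetical; compares a local hash against the toolkit constant
  import hashlib

  CHECKPOINT = "assets/base-model.pt"  # hypothetical location of the toolkit's base model
  BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"  # from export/constants.py

  sha1 = hashlib.sha1()
  with open(CHECKPOINT, "rb") as f:
      # Hash in 1 MiB chunks so a large checkpoint never has to fit in memory.
      for chunk in iter(lambda: f.read(1024 * 1024), b""):
          sha1.update(chunk)

  print("SHA-1 of checkpoint:", sha1.hexdigest())
  print("Matches BASE_SIGNATURE:", sha1.hexdigest() == BASE_SIGNATURE)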

Any guidance on how the Foundation Models framework derives and verifies the base model signature—or how to regenerate it for beta 4—would be greatly appreciated.

Answered by Frameworks Engineer in 852047022

Could you right-click the .fmadapter file, click "Show Package Contents", and share the metadata.json?

Could you also share the code that you used to load this adapter?

Thank you for responding!

  1. metadata.json from the .fmadapter package:
{
    "adapterIdentifier": "fmadapter-foundation-lab-9799725",
    "author": "Foundation Lab",
    "baseModelSignature": "9799725ff8e851184037110b422d891ad3b92ec1",
    "creatorDefined": {},
    "description": "Tool adapter.",
    "license": "",
    "loraRank": 32,
    "speculativeDecodingDraftTokenCount": 5
}
  2. Code used to load the adapter:
  // From test-adapter-minimal.swift:39
  let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)

  // Also tried the name-based initializer:
  adapter = try await SystemLanguageModel.Adapter(name: adapterName)

Please help me make progress here. Thank you!

BTW, I do see a popup, but its message is misleading because I have verified that I am using Xcode Beta 4, not Beta 3 or lower. This is the popup I see:

“/Applications/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin” Needs to Be Updated.
 This app needs an update in order to work with the latest Apple Intelligence models. Some features may not work as expected.

Are you running this in a macOS app, or in an iOS or visionOS app in the simulator? Could you file a report using Feedback Assistant and attach system diagnostics?

Thank you. I am running this on macOS 26 beta 4, in a sandboxed macOS app. Filed FB19237327. The Feedback tool didn't even have the beta 4 build in the dropdown.

As a step towards debugging this, would you mind renaming the .fmadapter file and updating your loading code to use the new URL?

Hello! Thanks for the follow-up. I confirmed that it still doesn't work with the renamed adapter file. I am using the APIs exactly as documented:

  • ✅ init(fileURL:) - Direct file URL loading
  • ✅ init(name:) - Name-based loading

The problem persists. Let me know if you need any more data. For the time being I am working around the issue, but the adapter has to work for me to make real progress. Thank you.

@illidan80 Would you mind attaching the full .fmadapter file to the feedback? Or at least the metadata.json file inside of it? It would be really helpful for us to debug this.

Thank you, @Frameworks Engineer. I got your comment above; I take it that it still won't work until the signature issue is resolved. On that note, I uploaded the .fmadapter to the feedback as you asked. Please take a look. Thank you!

Accepted Answer

Thank you @illidan80. We found the underlying cause. The framework currently only supports adapters whose identifiers match the regex /fmadapter-\w+-\w+/. Here, your adapter name "foundation-lab" contains a hyphen, which tripped our name validation logic.

If you update the adapter name in the toolkit to foundation_lab, that should resolve the issue. To test this quickly without re-exporting the adapter, you can manually edit the "adapterIdentifier" entry in the metadata.json file to fmadapter-foundation_lab-9799725.

Thank you for reporting the issue and the follow-up debugging steps. We will use your feedback FB19237327 to track an improvement to the adapter training toolkit to perform this validation at adapter export time. Please don't hesitate to reach out again if you need more help with custom adapters!
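
Until that export-time validation ships, a small pre-flight check of a candidate adapter name against the regex quoted above might look like the sketch below (the helper is made up, not part of the toolkit, and it assumes the framework matches the full identifier against the pattern):

  # validate_adapter_name.py - hypothetical pre-export check, not part of the toolkit
  import re

  # Pattern quoted in the answer above; \w+ does not allow hyphens inside the name.
  IDENTIFIER_PATTERN = re.compile(r"^fmadapter-\w+-\w+$")

  def is_loadable_adapter_name(name: str) -> bool:
      # Build the identifier the same way it appears in metadata.json
      # ("fmadapter-<name>-<signature prefix>"); the prefix here is illustrative.
      identifier = f"fmadapter-{name}-9799725"
      return bool(IDENTIFIER_PATTERN.match(identifier))

  print(is_loadable_adapter_name("foundation-lab"))  # False - the hyphen breaks the pattern
  print(is_loadable_adapter_name("foundation_lab"))  # True  - underscores are word characters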

It worked! Thank you for the follow-ups and for unblocking me!

This workaround worked for me too, thank you.

On a separate but related note, is there any chance someone can post a very simple training record example that shows tool invocation within training data? The Schema.md file seems incomplete in this regard: a record shows an array of tools, but it doesn't show the interactive conversation in which that tool actually gets invoked. I tried several combinations of formats and training runs, only to either error out before training starts or, when training does complete, find that the tool never gets invoked. My tools do get invoked when not using an adapter, though; it's just that the adapter brings subject-matter expertise to the model, so I need both the adapter and the tool call.
