Foundation Models Framework with specialized models

Hello folks! Looking at https://developer.apple.com/documentation/foundationmodels, it's not clear how to use other models there.

Does anyone know if it's possible to use an externally trained (imported) model with the Foundation Models framework?

Thanks!

Hi @fbalancin,

The Foundation Models framework gives developers access to Apple's on-device large language model that ships with the user's operating system. One advantage is that your app uses the model that's already on the device, so you don't have to include a language model in your app bundle, which would take up storage space and download bandwidth.
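For reference, here's a minimal sketch of prompting the system model through the framework, assuming the SystemLanguageModel and LanguageModelSession APIs as documented; the prompt text is only an illustration:

```swift
import FoundationModels

func askSystemModel() async throws {
    // Check that the on-device model is available before prompting it.
    switch SystemLanguageModel.default.availability {
    case .available:
        // Create a session and send a single prompt to the system model.
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Write a one-line summary of Swift concurrency.")
        print(response.content)
    case .unavailable(let reason):
        // The model can be unavailable, e.g. when Apple Intelligence is off.
        print("Model unavailable: \(reason)")
    }
}
```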

That said, you can indeed use your own model in your app. Our new framework MLX is one tool for training your own; it's optimized for Apple silicon and has tight integration with Hugging Face. There are also third-party AI providers that offer slimmed-down versions of their models, which can be included in your app and run on-device, depending on your needs.
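To illustrate that Hugging Face integration on the Swift side, here's a minimal sketch of loading an MLX model by its Hugging Face ID, assuming the MLXLLM and MLXLMCommon packages from the mlx-swift-examples project; the model ID is only an example:

```swift
import MLXLLM
import MLXLMCommon

func loadRemoteModel() async throws -> ModelContainer {
    // Referencing a Hugging Face model ID downloads the weights on
    // first use and caches them locally for subsequent launches.
    let configuration = ModelConfiguration(id: "mlx-community/Mistral-7B-Instruct-v0.3-4bit")
    return try await LLMModelFactory.shared.loadContainer(configuration: configuration)
}
```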

We also have frameworks like Core ML for integrating machine learning models into your app, and you can explore much of what's offered for developers in our documentation.
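As a small example of that route, here's a sketch of loading a bundled, compiled Core ML model at runtime; the "MyClassifier" name is hypothetical, and in practice Xcode also generates a typed wrapper class for any model you add to a project:

```swift
import CoreML

func loadBundledModel() throws -> MLModel {
    // "MyClassifier" is a hypothetical name; Xcode compiles a .mlmodel
    // added to the project into a .mlmodelc resource in the app bundle.
    guard let url = Bundle.main.url(forResource: "MyClassifier", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let model = try MLModel(contentsOf: url)
    // Inspect the inputs the model expects before wiring it up.
    print(model.modelDescription.inputDescriptionsByFeatureName)
    return model
}
```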

Best,

-J

As my colleague mentioned, your app can definitely use the Apple Foundation Models together with other models. For example, you can use your own model as a use-case-specific filter that screens out responses the Apple Foundation Models generate that don't fit your concrete use case.
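Here's a minimal sketch of that filtering pattern; fitsUseCase(_:) is a hypothetical hook standing in for a prediction from your own model:

```swift
import FoundationModels

// Hypothetical stand-in for your own model, e.g. a Core ML
// classifier that scores a candidate response for your use case.
func fitsUseCase(_ text: String) -> Bool {
    // Replace with a real prediction from your own model.
    !text.isEmpty
}

func filteredResponse(to prompt: String) async throws -> String? {
    let session = LanguageModelSession()
    let response = try await session.respond(to: prompt)
    // Keep the system model's response only if your model accepts it.
    return fitsUseCase(response.content) ? response.content : nil
}
```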

However, if your question is whether you can use the API that the FoundationModels framework provides to access your own model, the answer is no: you can't replace the system-provided models with your own and still use the features the framework provides, like guided generation and tool calling.
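For context, guided generation is one of those system-model-only features. Here's a minimal sketch, assuming the @Generable and @Guide macros and the respond(to:generating:) overload as documented; the Recipe type is purely illustrative:

```swift
import FoundationModels

// An illustrative type for guided generation.
@Generable
struct Recipe {
    @Guide(description: "A short recipe title")
    var title: String
    @Guide(description: "Up to five ingredients")
    var ingredients: [String]
}

func generateRecipe() async throws {
    let session = LanguageModelSession()
    // The framework constrains the system model's output
    // so that it decodes directly into the Recipe structure.
    let response = try await session.respond(
        to: "Suggest a simple pasta recipe.",
        generating: Recipe.self
    )
    print(response.content.title, response.content.ingredients)
}
```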

Best,
——
Ziqiao Chen
Worldwide Developer Relations.

You can add your own model with MLX. I've used mlx_lm.lora to 'upgrade' Mistral-7B-Instruct-v0.3-4bit with my own content. The 'adapter' folder that it creates can be merged into the original model with mlx_lm.fuse and tested with mlx_lm.generate in Terminal. I then added the resulting Models folder to my macOS/iOS Swift app. The app works on my Mac as expected. It worked on my iPhone a few days ago, but now it crashes in the simulator. I don't want to try it on my phone until it works (again) in the simulator.

The Models folder contains the files created by the fuse command.

I'm not sure that I changed anything in the last few days, but it seems to be crashing in the loadModel function with a lot of 'strange' output that I don't understand.

My loadModel func:

```swift
func loadModel() async throws {
    // return  // one test for the "model not loaded" alert

    // Avoid reloading if the model is already loaded.
    let isLoaded = await MainActor.run { self.model != nil }
    if isLoaded { return }

    let modelFactory = LLMModelFactory.shared
    let configuration = MLXLMCommon.ModelConfiguration(
        directory: Bundle.main.resourceURL!
    )

    // Load the model off the main actor, then assign on the main actor.
    let loaded = try await modelFactory.loadContainer(configuration: configuration)
    await MainActor.run {
        self.model = loaded
    }
}
```
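In case it helps anyone reproducing this, here's a minimal sketch of generating text once the container has loaded, assuming the ChatSession convenience API from MLXLMCommon as shown in the mlx-swift-examples project; adjust to the package version your app uses:

```swift
import MLXLMCommon

func generate(with container: ModelContainer) async throws {
    // ChatSession wraps the loaded container and manages the prompt/response loop.
    let session = ChatSession(container)
    let reply = try await session.respond(to: "Summarize what this model was fine-tuned on.")
    print(reply)
}
```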

I plan to backtrack to previous commits to see where the problem might be. I may have used some ChatGPT help, which isn't always helpful ;-)

Hmm, the app works as expected on my iPhone with iOS 26.0.1. The iPhone simulator is on iOS 26.0.
