Foundation Models


Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your app.

Foundation Models Documentation

Posts under Foundation Models subtopic

Post

Replies

Boosts

Views

Activity

ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
I just tried to write a very simple test using Foundation Models, but it gives me an error like this:

ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference

The simple code is listed below:

let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")

So what is the problem here?
2
0
238
Jul ’25
Rate limit exceeded when using Foundation Model framework
When I use the FoundationModels framework to generate long text, it always hits an error, "Passing along Client rate limit exceeded, try again later in response to ExecuteRequest", and stops generating. For example, with the prompt "Write a long story", it almost always hits that error after about 17 seconds of generation.

do {
    let session = LanguageModelSession()
    let prompt: String = "Write a long story"
    let response = try await session.respond(to: prompt)
} catch {}

If possible, I want to know how to prevent that error, or at least how to handle it.
2
1
695
Jul ’25
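For the rate-limit error in the post above, one minimal mitigation is to retry after a delay. This is a sketch only: it assumes the failure surfaces as an error thrown by respond(to:), and the retry count and backoff values are arbitrary.

import FoundationModels

// Retries a prompt a few times, waiting longer after each failure
// (1 s, 2 s, 4 s, ...). Purely illustrative; tune the numbers as needed.
func respondWithRetry(session: LanguageModelSession,
                      prompt: String,
                      attempts: Int = 3) async throws -> String {
    var lastError: Error?
    for attempt in 0..<attempts {
        do {
            return try await session.respond(to: prompt).content
        } catch {
            lastError = error
            try await Task.sleep(nanoseconds: UInt64(1 << attempt) * 1_000_000_000)
        }
    }
    throw lastError ?? CancellationError()
}

This does not prevent the rate limit from being hit in the first place; shorter prompts or less frequent requests would still be needed for that.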
Setting Required Capabilities for Foundation Models
Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased/downloaded from the App Store by people with capable devices? I would have thought there would be a Required Capabilities value that the App Store would hook into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities The closest seems to be iphone-performance-gaming-tier, as that appears to target all M1-and-above chips on iPhone and iPad. There is an ipad-minimum-performance-m1 that would more reliably ensure Foundation Models is available, but that doesn't help with iPhone. So far, the only path seems to be setting the minimum deployment target to iOS 26 and adding iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable. I understand that most apps will selectively add Apple Intelligence features and remain usable on devices that don't support them, but the experience I'm building doesn't make sense without Foundation Models, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable".
2
2
224
Aug ’25
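For the capability question above, there is no runtime substitute for an App Store gate, but the availability of the on-device model can at least be checked at launch so unsupported devices get a clear explanation instead of a broken experience. A minimal sketch using the availability API shown elsewhere on this page (the placeholder text is illustrative):

import SwiftUI
import FoundationModels

// Shows the real UI only when the on-device model is available; otherwise
// explains why. This degrades gracefully but does not restrict installation.
struct ModelGateView: View {
    private let model = SystemLanguageModel.default

    var body: some View {
        switch model.availability {
        case .available:
            Text("Model is available") // replace with the app's real UI
        case .unavailable(let reason):
            Text("Apple Intelligence isn't available on this device: \(String(describing: reason))")
        }
    }
}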
How to encode Tool.Output (aka PromptRepresentable)?
Hey, I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw! I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI API, I do the following:

1. Find the tool in my [any Tool] array via the tool name I get from the model:

if let tool = tools.first(where: { $0.name == functionCall.name }) { // ... }

2. Parse the arguments of the tool call via GeneratedContent(json:):

let generatedContent = try GeneratedContent(json: functionCall.arguments)

3. Pass the tool and arguments to a function that calls tool.call(arguments:) and returns the tool's output type:

private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
    let arguments = try T.Arguments.init(generatedContent)
    return try await tool.call(arguments: arguments)
}

Up to this point, everything works as expected. However, the tool's output type is any PromptRepresentable, and I have no idea how to turn that into something I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent, but there is no fitting initializer. Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type, I think. That would be unfortunate, because it's implemented so elegantly. Thanks!
2
0
214
Aug ’25
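One possible direction for the question above, sketched under explicit assumptions: constrain the tool's Output to GeneratedContent (as the FindLatLonTool example further down this page does), then serialize that content back to the external provider. The jsonString property used below is an assumption; if GeneratedContent doesn't expose it in your SDK, substitute whatever JSON round-trip the type actually offers.

import FoundationModels

// Assumption: tools whose Output is GeneratedContent can be serialized to JSON
// and sent back to a non-Apple model provider.
private func executeAndEncode<T: Tool>(_ tool: T,
                                       with content: GeneratedContent) async throws -> String
    where T.Output == GeneratedContent {
    let arguments = try T.Arguments(content)
    let output = try await tool.call(arguments: arguments)
    return output.jsonString // assumed counterpart of GeneratedContent(json:)
}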
Apple ANE Performance - throttling?
I can no longer achieve 100% ANE usage since upgrading to macOS 26 Beta 5; I used to be able to reach 100%. Has Apple activated throttling or power-saving features in the newer betas? Is there any new rate limiting on the API? I can hardly get above 3 W or 40%. I have an M4 Pro Mac mini (64GB) with the High Power energy setting, on macOS 26 Beta 5.
2
0
312
Aug ’25
`LanguageModelSession.respond()` never resolves in Beta 5
Hi all, I noticed on Friday that on the new Beta 5, using FoundationModels on a simulator, LanguageModelSession.respond() neither resolves nor throws most of the time. The SwiftUI test app below was working perfectly in Xcode 16 Beta 4 and iOS 26 Beta 4 (simulator).

import SwiftUI
import FoundationModels

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onAppear {
            Task {
                do {
                    let session = LanguageModelSession()
                    let response = try await session.respond(to: "are cats better than dogs ???")
                    print(response.content)
                } catch {
                    print("error")
                }
            }
        }
    }
}

After updating to Xcode 16 Beta 5 and iOS 26 Beta 5 (simulator), the code now often hangs. Occasionally it will work if I toggle Apple Intelligence on and off in Settings, but it’s unreliable.
2
0
340
Aug ’25
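For a call that sometimes never resolves, as in the post above, one defensive pattern is to race the request against a timeout so the UI is never stuck waiting. This is a sketch, not a fix for the underlying beta issue; the 30-second deadline is arbitrary:

import FoundationModels

struct TimedOutError: Error {}

// Returns the model's answer, or throws TimedOutError if the deadline passes.
// The losing child task is cancelled either way.
func respondWithTimeout(prompt: String, seconds: UInt64 = 30) async throws -> String {
    try await withThrowingTaskGroup(of: String.self) { group in
        group.addTask {
            let session = LanguageModelSession() // created inside the task to avoid capturing it
            return try await session.respond(to: prompt).content
        }
        group.addTask {
            try await Task.sleep(nanoseconds: seconds * 1_000_000_000)
            throw TimedOutError()
        }
        let first = try await group.next()!
        group.cancelAll()
        return first
    }
}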
FoundationModels tool calling not working (iOS 26, beta 6)
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool which, for those locations, finds their latitude/longitude on a map and populates that in the response. However, I cannot get the language model session to see/use my tool. I have code like this passing the tool to my prompt:

class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}

And then the tool looks something like this:

import Foundation
import FoundationModels
import MapKit

struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )
        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate
        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}

But trying a bunch of different prompts has not triggered the tool. Instead, what appear to be totally random locations are filled into my resulting model, and at no point does a breakpoint hit my tool code. Has anybody successfully gotten a tool to be called?
2
1
393
Aug ’25
Restricting App Installation to Devices Supporting Apple Intelligence Without Triggering Game Mode
Hello, my app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app. When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:

On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.

This exactly matches the hardware requirements for Apple Intelligence. However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated and my app is treated as a game. My questions are:

Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements?
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?

Thanks in advance for your help.
2
0
418
Aug ’25
Keep getting exceededContextWindowSize with Foundation Models
I'm a bit new to LLMs and to Foundation Models. My understanding is that there is a token limit of around 4K. I want to process the contents of files, which may be quite large. I first tried going the Tool route, but that didn't work out, so I then tried manually chunking the text to keep things under the limit. It mostly works, except that every now and then it exceeds the limit. This happens even when the chunks are under 100 characters. The instructions themselves are about 500 characters, so overall I'm well below 1,000 characters per prompt, which, in my limited understanding, should not amount to 4K tokens. Any ideas on what is going on here?
2
0
299
Aug ’25
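One detail worth checking for the post above: a LanguageModelSession keeps a transcript of previous prompts and responses, so sending many small chunks through a single session still grows the context with every call and can eventually exceed the window. A sketch that uses a fresh session per chunk and catches the specific error (the instructions string and fallback handling are placeholders):

import FoundationModels

// Processes each chunk in its own session so earlier chunks don't accumulate
// in the transcript.
func process(chunks: [String]) async -> [String] {
    var results: [String] = []
    for chunk in chunks {
        let session = LanguageModelSession(instructions: "Summarize the following text.")
        do {
            results.append(try await session.respond(to: chunk).content)
        } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
            // This single chunk is still too large: split it further and retry.
            results.append("")
        } catch {
            results.append("")
        }
    }
    return results
}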
Error with guardrailViolation and underlyingErrors
Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models. My setup is: Mac M1 Pro, macOS 26 Beta (Version 26.0 beta 3), Apple Intelligence & Siri turned on. Here is the code:

func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(
                instructions: """
                Extract time from a message.
                Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}

And I get this error:

guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))

Can you help me get through this?
2
0
290
Aug ’25
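Whatever the root cause of the missing safety asset above, the guardrail refusal can at least be caught as its own case rather than as a generic error, so the UI can show a specific message. A minimal sketch (the strings are placeholders):

import FoundationModels

// Distinguishes a guardrail refusal from other failures.
func extractTime(from message: String) async -> String {
    let session = LanguageModelSession(instructions: "Extract the time from a message.")
    do {
        return try await session.respond(to: message).content
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        return "The request was blocked by the safety guardrails."
    } catch {
        return "Error: \(error)"
    }
}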
Foundation model adapter assets are invalid
I've tried creating a LoRA adapter using the example dataset and scripts that are part of adapter_training_toolkit_v26_0_0 (the last available version) on macOS 26 Beta 6.

import SwiftUI
import FoundationModels
import Playgrounds

#Playground {
    // The absolute path to your adapter.
    let localURL = URL(filePath: "/Users/syl/Downloads/adapter_training_toolkit_v26_0_0/train/test-lora.fmadapter")

    // Initialize the adapter by using the local URL.
    let adapter = try SystemLanguageModel.Adapter(fileURL: localURL)

    // An instance of the system language model using your adapter.
    let customAdapterModel = SystemLanguageModel(adapter: adapter)

    // Create a session and prompt the model.
    let session = LanguageModelSession(model: customAdapterModel)
    let response = try await session.respond(to: "hello")
}

I get an "Adapter assets are invalid" error. I've added the entitlements. Is adapter_training_toolkit_v26_0_0 up to date?
2
0
223
Aug ’25
Does Foundation Models ever do off-device computation?
I want to use Foundation Models in a project, but I know my users will want to avoid environmentally intensive AI work in data centers. Does Foundation Models ever use Private Cloud Compute or any other kind of cloud-based AI system? I'd like to be able to assure my users that the LLM usage is relatively environmentally friendly. It would be great to be able to cite a specific Apple page explaining that Foundation Models work is always done locally. If there's any chance that work can be done in the cloud, is there a way to opt out of it?
2
0
253
Oct ’25
Failing to run SystemLanguageModel inference with custom adapter
Hi, I have trained a basic adapter using the adapter training toolkit. I am trying a very basic example of loading it and running inference with it, but am getting the following error:

Passing along InferenceError::inferenceFailed::loadFailed::Error Domain=com.apple.TokenGenerationInference.E5Runner Code=0 "Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006)." UserInfo={NSLocalizedDescription=Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006).} in response to ExecuteRequest

Any ideas / direction? For testing I am including the .fmadapter file inside the app bundle. This is where I load it:

@State private var session: LanguageModelSession? // = LanguageModelSession()

func loadAdapter() async throws {
    if let assetURL = Bundle.main.url(forResource: "qasc---afm---4-epochs-adapter", withExtension: "fmadapter") {
        print("Asset URL: \(assetURL)")
        let adapter = try SystemLanguageModel.Adapter(fileURL: assetURL)
        let adaptedModel = SystemLanguageModel(adapter: adapter)
        session = LanguageModelSession(model: adaptedModel)
        print("Loaded adapter and updated session")
    } else {
        print("Asset not found in the main bundle.")
    }
}

This seems to work fine, as I get to the "Loaded adapter and updated session" log. However, when the inference code below runs, I get the aforementioned error:

func sendMessage(_ msg: String) {
    self.loading = true
    if let session = session {
        Task {
            do {
                let modelResponse = try await session.respond(to: msg)
                DispatchQueue.main.async {
                    self.response = modelResponse.content
                    self.loading = false
                }
            } catch {
                print("Error: \(error)")
                DispatchQueue.main.async {
                    self.loading = false
                }
            }
        }
    }
}
3
0
205
Jun ’25
Foundation Model Always modelNotReady
I'm testing Foundation Models on my iPad Pro (5th gen), iOS 26. Up until late this morning, I could load SystemLanguageModel.default; now it always comes back unavailable, specifically with unavailable reason: modelNotReady. I'm not doing anything interesting; something as basic as this only reports unavailable:

let model = SystemLanguageModel.default
...
switch model.availability {
case .available:
    print("LM available")
case .unavailable(let reason):
    print("unavailable reason: ", String(describing: reason))
}

I also ran the FoundationModelsTripPlanner app, and it's the same thing. It was working yesterday, and I have not modified that project either. Why is the model not ready? How do I fix this? Yes, I tried restarting both my laptop and iPad; no luck.
3
0
261
Jul ’25
LanguageModelSession always returns very lengthy responses
No matter what, the LanguageModelSession always returns very lengthy / verbose responses. I set the maximumResponseTokens option to various small numbers but it doesn't appear to have any effect. I've even used this instructions format to keep responses between 3-8 words but it returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
3
0
216
Sep ’25
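For the response-length question above, here is a sketch that combines terse instructions with a per-request token cap via GenerationOptions. Whether a small maximumResponseTokens actually shortens the prose rather than truncating it isn't guaranteed, and the exact wording and numbers below are arbitrary:

import FoundationModels

// Asks for a short answer in the instructions and also caps the response
// length for this one request.
func shortAnswer(to question: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Answer in one short sentence of at most eight words.")
    let options = GenerationOptions(maximumResponseTokens: 64)
    return try await session.respond(to: question, options: options).content
}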