Foundation Models


Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence, to help you perform intelligent tasks specific to your app.

Foundation Models Documentation

Posts under the Foundation Models subtopic

Post · Replies · Boosts · Views · Created

FoundationModels tool calling not working (iOS 26, beta 6)
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've also created a tool that, for each of those locations, finds its latitude/longitude on a map and populates that in the response. However, I cannot get the language model session to see/use my tool. I pass the tool to my session like this:

    class Parser {
        func populate(locations: String, latitude: Double, longitude: Double) async {
            let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
            let session = LanguageModelSession(tools: [findLatLonTool]) {
                """
                A prompt that populates a model with a list of locations.
                """
                """
                Use the findLatLon tool to populate the latitude and longitude for the name of each location.
                """
            }
            let stream = session.streamResponse(
                to: "Parse these locations: \(locations)",
                generating: ParsedLocations.self
            )
            let locationsModel = LocationsModels()
            do {
                for try await partialParsedLocations in stream {
                    locationsModel.parsedLocations = partialParsedLocations.content
                }
            } catch {
                print("Error parsing")
            }
        }
    }

And the tool looks something like this:

    import Foundation
    import FoundationModels
    import MapKit

    struct FindLatLonTool: Tool {
        typealias Output = GeneratedContent

        let name = "findLatLon"
        let description = "Find the latitude / longitude of a location for a place name."

        let latitude: Double
        let longitude: Double

        @Generable
        struct Arguments {
            @Guide(description: "This is the location name to look up.")
            let locationName: String
        }

        func call(arguments: Arguments) async throws -> GeneratedContent {
            let request = MKLocalSearch.Request()
            request.naturalLanguageQuery = arguments.locationName
            request.region = MKCoordinateRegion(
                center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
                latitudinalMeters: 1_000_000,
                longitudinalMeters: 1_000_000
            )
            let search = MKLocalSearch(request: request)
            let coordinate = try await search.start().mapItems.first?.location.coordinate
            if let coordinate {
                return GeneratedContent(
                    LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
                )
            }
            return GeneratedContent("Location was not found - no latitude / longitude is available.")
        }
    }

But no prompt I have tried triggers the tool: what appear to be totally random locations are filled into my resulting model, and at no point does a breakpoint hit my tool code. Has anybody successfully gotten a tool to be called?
Replies: 2 · Boosts: 1 · Views: 343 · Created: Aug ’25
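One way to narrow down the question above is to check whether the model ever requests the tool at all. A diagnostic sketch, assuming the session and types from the post, and assuming Transcript iterates its entries with a .toolCalls case as in the current beta documentation:

    // Run a non-streaming request, then walk the session transcript to
    // separate "tool never invoked" from "tool invoked but output ignored".
    let response = try await session.respond(
        to: "Parse these locations: \(locations)",
        generating: ParsedLocations.self
    )
    for entry in session.transcript {
        if case .toolCalls(let calls) = entry {
            print("Model requested tool calls: \(calls)")
        }
    }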
`LanguageModelSession.respond()` never resolves in Beta 5
Hi all, I noticed on Friday that on the new Beta 5, using FoundationModels on a simulator, LanguageModelSession.respond() neither resolves nor throws most of the time. The SwiftUI test app below was working perfectly with Xcode 26 Beta 4 and iOS 26 Beta 4 (simulator).

    import SwiftUI
    import FoundationModels

    struct ContentView: View {
        var body: some View {
            VStack {
                Image(systemName: "globe")
                    .imageScale(.large)
                    .foregroundStyle(.tint)
                Text("Hello, world!")
            }
            .padding()
            .onAppear {
                Task {
                    do {
                        let session = LanguageModelSession()
                        let response = try await session.respond(to: "are cats better than dogs ???")
                        print(response.content)
                    } catch {
                        print("error")
                    }
                }
            }
        }
    }

After updating to Xcode 26 Beta 5 and iOS 26 Beta 5 (simulator), the code now often hangs. Occasionally it will work if I toggle Apple Intelligence on and off in Settings, but it’s unreliable.
Replies: 2 · Boosts: 0 · Views: 324 · Created: Aug ’25
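While waiting on a platform fix for the hang described above, the call can at least be bounded so it fails instead of blocking forever. A sketch using plain Swift concurrency (nothing here is FoundationModels-specific; under strict concurrency the session capture may need actor confinement):

    // Race respond() against a timeout; whichever finishes first wins.
    func respondWithTimeout(_ session: LanguageModelSession,
                            prompt: String,
                            seconds: UInt64 = 60) async throws -> String {
        try await withThrowingTaskGroup(of: String.self) { group in
            group.addTask { try await session.respond(to: prompt).content }
            group.addTask {
                try await Task.sleep(nanoseconds: seconds * 1_000_000_000)
                throw CancellationError() // treat a hang as cancellation
            }
            let first = try await group.next()!
            group.cancelAll()
            return first
        }
    }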
Apple ANE Performance - throttling?
I can no longer achieve 100% ANE usage since upgrading to macOS 26 Beta 5. I used to be able to get 100%. Has Apple activated throttling or power-saving features in the new betas? Is there any new rate limiting on the API? I can hardly get above 3 W or 40% usage. I have an M4 Pro mini (64 GB) with the High Power energy setting, on macOS 26 Beta 5.
Replies: 2 · Boosts: 0 · Views: 291 · Created: Aug ’25
ANE Performance for on-device Foundation model
I'm running macOS 26 Beta 5. I noticed that I can no longer achieve 100% usage on the ANE, as I could before, with Apple's Foundation Models on-device model. Has Apple activated some kind of throttling or power limiting of the ANE? I cannot get above 3 W or 40% usage since upgrading. I'm on the high-power energy mode. Is there an API rate limit being applied? I have an M4 Pro mini with 64 GB of memory.
Replies: 0 · Boosts: 0 · Views: 292 · Created: Aug ’25
How to encode Tool.Output (aka PromptRepresentable)?
Hey, I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw! I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI API, I do the following:

1. Find the tool in my [any Tool] array via the tool name I get from the model:

    if let tool = tools.first(where: { $0.name == functionCall.name }) {
        // ...
    }

2. Parse the arguments of the tool call via GeneratedContent(json:):

    let generatedContent = try GeneratedContent(json: functionCall.arguments)

3. Pass the tool and arguments to a function that calls tool.call(arguments:) and returns the tool's output type:

    private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
        let arguments = try T.Arguments.init(generatedContent)
        return try await tool.call(arguments: arguments)
    }

Up to this point, everything is working as expected. However, the tool's output type is any PromptRepresentable, and I have no idea how to turn that into something I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent, but there is no fitting initializer. Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type, I think. That would be unfortunate, because it's implemented so elegantly. Thanks!
Replies: 2 · Boosts: 0 · Views: 190 · Created: Aug ’25
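One workaround for the question above (a sketch, not an official pattern) is to constrain the generic so the output is convertible back to GeneratedContent, then serialize it. GeneratedContent's jsonString property is taken from the current beta documentation and should be verified against your SDK:

    // Require Output: ConvertibleToGeneratedContent (GeneratedContent and
    // the Generable primitives qualify), then emit JSON for the provider.
    private func executeForProvider<T: Tool>(
        _ tool: T,
        with generatedContent: GeneratedContent
    ) async throws -> String where T.Output: ConvertibleToGeneratedContent {
        let arguments = try T.Arguments(generatedContent)
        let output = try await tool.call(arguments: arguments)
        return output.generatedContent.jsonString // send this back to OpenAI
    }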
All generations in #Playground macro are throwing "unsafe" Generation Errors
I'm using Xcode 26 Beta 5 and get errors on any generation I try, however harmless, when wrapped in the #Playground macro:

    #Playground {
        let session = LanguageModelSession()
        let topic = "pandas"
        let prompt = "Write a safe and respectful story about \(topic)."
        let response = try await session.respond(to: prompt)
    }

Not seeing any issues on simulator or device. Anyone else seeing this or have any ideas? Thanks for any help!

Xcode Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)
Replies: 4 · Boosts: 0 · Views: 126 · Created: Aug ’25
Initializing LanguageModelSession crashes app on macOS
Whenever I try to initialize a LanguageModelSession (let session = LanguageModelSession()), my app crashes with EXC_BAD_ACCESS. SystemLanguageModel.default.availability returns available. I tried running the two sample projects I found that use Foundation Models, FoundationModelsTripPlanner and SwiftTranscriptionSampleApp, and they both also crash—immediately on launch. I commented out the Foundation Models logic from the SwiftTranscriptionSampleApp and ran it again, and it no longer crashed. I'm on macOS 26 Beta 4 on an M1 Pro device. I'm based in Austria (EU), if that matters.
Replies: 7 · Boosts: 4 · Views: 266 · Created: Aug ’25
Cannot find type ToolOutput in scope
My sample app has been working with the following code:

    func call(arguments: Arguments) async throws -> ToolOutput {
        var temp: Int
        switch arguments.city {
        case .singapore:
            temp = Int.random(in: 30..<40)
        case .china:
            temp = Int.random(in: 10..<30)
        }
        let content = GeneratedContent(temp)
        let output = ToolOutput(content)
        return output
    }

However, in 26 beta 5, ToolOutput is no longer available. Please advise what has changed.
Replies: 3 · Boosts: 0 · Views: 208 · Created: Aug ’25
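In recent betas the Tool protocol appears to use an associated Output type conforming to PromptRepresentable in place of the removed ToolOutput wrapper, so call(arguments:) can return GeneratedContent (or String) directly. A sketch of the adjusted method under that assumption:

    // Same logic as the post above, minus the ToolOutput wrapper.
    func call(arguments: Arguments) async throws -> GeneratedContent {
        let temp: Int
        switch arguments.city {
        case .singapore: temp = Int.random(in: 30..<40)
        case .china: temp = Int.random(in: 10..<30)
        }
        return GeneratedContent(temp)
    }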
Setting Required Capabilities for Foundation Models
Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased/downloaded on the App Store by folks with capable devices? I would've thought there would be a Required Capabilities key that the App Store hooks into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities

The closest seems to be iphone-performance-gaming-tier, as that targets all M1-and-above chips on iPhone and iPad. There is an ipad-minimum-performance-m1 that would more reasonably ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, the only path seems to be setting the minimum deployment target to iOS 26 and adding iphone-performance-gaming-tier as a required capability, but I'm worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable.

While I understand that the majority of apps will selectively add Apple Intelligence features and so remain usable by folks whose devices don't support them, the app experience I'm building doesn't make sense without Foundation Models, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable."
Replies: 2 · Boosts: 2 · Views: 199 · Created: Aug ’25
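Since no App Store capability key is documented for Foundation Models as of these betas, a runtime gate is the dependable fallback. A sketch using the availability API mentioned elsewhere in this forum; the two handler functions are hypothetical placeholders for the app's own navigation:

    import FoundationModels

    // Gate the whole experience on model availability, and explain the
    // reason (unsupported device, Apple Intelligence off, etc.) when absent.
    switch SystemLanguageModel.default.availability {
    case .available:
        startModelBackedExperience()   // hypothetical
    case .unavailable(let reason):
        showUnsupportedScreen(reason)  // hypothetical
    }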
Localizing prompts that have string-interpolated Generable objects
I'm working on localizing my prompts to support multiple languages, and in some cases my prompts have string-interpolated Generable objects, for example:

    "Given the following workout routine: \(routine), suggest one additional exercise to complement it."

In the Strings dictionary, I'm only able to select String, Int, or Double parameters using %@ and %lld. Has anyone found a way to accomplish this?
Replies: 1 · Boosts: 0 · Views: 328 · Created: Jul ’25
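One workaround (a sketch, not the only approach) is to render the Generable value to a plain String before interpolating, so the localized format string only ever needs %@. String(describing:) is a placeholder here; a purpose-built description of the routine will read better to the model:

    // Localize the format first, then substitute the rendered value.
    let format = String(localized: "Given the following workout routine: %@, suggest one additional exercise to complement it.")
    let prompt = String(format: format, String(describing: routine))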
Memory Attribution for Foundation Models in iOS 26
Hi, I’m developing an app targeting iOS 26, using the new FoundationModels framework to perform on-device LLM inference, and I’m currently testing memory usage. Does the memory used by FoundationModels, including model weights, KV cache, and any inference-related buffers, count toward my app’s Jetsam memory limit, or is any of it managed separately by the system? I may need to run two concurrent inferences, each with a 4096-token context window. Is this explicitly supported or allowed by FoundationModels on iOS 26? Would this significantly increase the risk of memory-based termination? Thanks in advance for any clarification.
Replies: 1 · Boosts: 0 · Views: 386 · Created: Jul ’25
Foundation Models Adapter Training Toolkit v0.2.0 LoRA Adapter Incompatible with macOS 26 Beta 4 Base Model
Context: I trained a LoRA adapter for Apple’s on-device language model using the Foundation Models Adapter Training Toolkit v0.2.0 on macOS 26 beta 4. Although training completes successfully, loading the resulting .fmadapter package fails with:

    Adapter is not compatible with the current system base model.

What I’ve observed:

Hard-coded signature: in export/constants.py, the toolkit sets

    BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"

Metadata injection: the export_fmadapter.py script writes this value into the adapter’s metadata:

    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE

Compatibility check: at runtime, the Foundation Models framework compares the adapter’s baseModelSignature against the OS’s system model signature, and reports compatibleAdapterNotFound if they don’t match, without revealing the expected signature.

Questions:

Signature generation: what exactly does the toolkit hash to derive BASE_SIGNATURE? Is it a straight SHA-1 of base-model.pt, or is there an additional transformation?
Recomputing for beta 4: is there a way to locally compute the correct signature for the macOS 26 beta 4 system model?
Toolkit updates: will Apple release Adapter Training Toolkit v0.3.0 with an updated BASE_SIGNATURE for beta 4, or is there an alternative workaround to generate it myself?

Any guidance on how the Foundation Models framework derives and verifies the base model signature, or how to regenerate it for beta 4, would be greatly appreciated.
Replies: 12 · Boosts: 0 · Views: 468 · Created: Jul ’25
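The "straight SHA-1" hypothesis in the post above is easy to test locally. An exploratory sketch (the model path is a placeholder, and a mismatch only means the derivation involves more than a plain file hash):

    import CryptoKit
    import Foundation

    // Hash base-model.pt with plain SHA-1 and compare against the
    // toolkit's hard-coded BASE_SIGNATURE.
    func checkBaseSignature(modelPath: String) throws {
        let data = try Data(contentsOf: URL(fileURLWithPath: modelPath))
        let hex = Insecure.SHA1.hash(data: data)
            .map { String(format: "%02x", $0) }
            .joined()
        let expected = "9799725ff8e851184037110b422d891ad3b92ec1"
        print(hex == expected ? "plain SHA-1 of base-model.pt matches"
                              : "not a plain SHA-1; got \(hex)")
    }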
The asset pack with the ID “testVideoAssetPack” couldn’t be looked up: Could not connect to the server.
On macOS Tahoe 26.0, iOS 26.0 (23A5287g) on a physical device (not the simulator), and Xcode 26.0 beta 3 (17A5276g), I followed the tutorial "Testing your asset packs locally". I use this command line to start the test server:

    xcrun ba-serve --host 192.168.0.109 test.aar

The terminal shows:

    Loading asset packs…
    Loading the asset pack at “test.aar”…
    Listening on port 63125…
    Choose an identity in the panel to continue.
    Listening on port 63125…

When running the project, Xcode reports an error: Download failed: Could not connect to the server. When I visit https://192.168.0.109:63125 in Safari on the iPhone, the page displays "Hello, world!". There are too few error messages in both of the above issues, and I have no idea what the specific causes are. I hope someone can offer some guidance. Best regards.

This is my Manifest.json:

    {
        "assetPackID": "testVideoAssetPack",
        "downloadPolicy": {
            "prefetch": {
                "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
            }
        },
        "fileSelectors": [
            { "file": "video/test.mp4" }
        ],
        "platforms": ["iOS"]
    }
Replies: 1 · Boosts: 0 · Views: 274 · Created: Jul ’25
TAMM toolkit v0.2.0 targets a base model older than the one in macOS 26 beta 4
Problem: we trained a LoRA adapter for Apple's FoundationModels framework using the TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."

TAMM v0.2.0 contains export/constants.py with:

    BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"

Findings:

Adapter export process, in export_fmadapter.py:

    def write_metadata(...):
        self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value

The compatibility check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature

Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with an updated signature?
Replies: 1 · Boosts: 0 · Views: 500 · Created: Jul ’25
LanguageModelStream and collecting the final output
I have a Generable type with many elements and am using a stream to incrementally process the output (the Generable.PartiallyGenerated content). At the end, I want to pass the final version (not the partially generated one) to another function, but I cannot find a good way to convert a MyGenerable.PartiallyGenerated into a MyGenerable. Am I missing some functionality in the APIs?
Replies: 4 · Boosts: 0 · Views: 553 · Created: Jul ’25
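Two possibilities for the conversion question above, both to verify against your SDK version: the current beta documentation lists a collect() method on the response stream, and a non-streaming respond(to:generating:) call returns the fully typed value directly. A sketch (MyGenerable stands in for the poster's type):

    // Option A (assumes ResponseStream.collect() as listed in beta docs):
    let stream = session.streamResponse(to: prompt, generating: MyGenerable.self)
    let complete: MyGenerable = try await stream.collect().content

    // Option B (documented): skip streaming when only the final value matters.
    let response = try await session.respond(to: prompt, generating: MyGenerable.self)
    let finalValue: MyGenerable = response.content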
Inference Provider crashed with 2:5
I am trying to create a slightly different version of the content tagging code in the documentation: https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase/contenttagging In the playground I am getting an "Inference Provider crashed with 2:5" error. I have no idea what that means or how to address the error. Any assistance would be appreciated.
Replies: 1 · Boosts: 0 · Views: 515 · Created: Jul ’25
Automated Testing and Performance Validation for Foundation Models Framework
I've been successfully integrating the Foundation Models framework into my healthcare app using structured generation with @Generable schemas. While my initial testing (20-30 iterations) shows promising results, I need to validate consistency and reliability at scale before production deployment.

Question: is there a recommended approach for automated, large-scale testing of Foundation Models responses? Specifically, I'm looking to:
- Automate 1000+ test iterations with consistent prompts and structured schemas
- Measure response consistency across identical inputs
- Validate structured output reliability (proper schema adherence, no generation failures)
- Collect performance metrics (TTFT, TPS) for optimization

Specific questions:
- Framework limitations: are there any undocumented rate limits or thermal throttling considerations for rapid session creation/destruction?
- Performance tools: can Xcode's Foundation Models instrument be used programmatically, or only through the Instruments UI?
- Automation integration: any recommendations for integrating with testing frameworks?
- Session reuse: is it better to reuse a single LanguageModelSession or create fresh sessions for each test iteration?

Use case context: my wellness app provides medically safe activity recommendations based on user health profiles. The Foundation Models framework processes health context and generates structured recommendations for exercises, nutrition, and lifestyle activities. Given the safety implications of providing health-related guidance, I need rigorous validation to ensure the model consistently produces appropriate, well-formed recommendations across diverse user scenarios and health conditions.

Has anyone in the community built similar large-scale testing infrastructure for Foundation Models? Any insights on best practices or potential pitfalls would be greatly appreciated.
Replies: 1 · Boosts: 0 · Views: 183 · Created: Jul ’25
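For the automation question above, a minimal harness can live in an ordinary async XCTest. A sketch: Recommendation and its field are hypothetical stand-ins for the app's real schema, and the iteration count is kept small until throttling behavior is understood:

    import XCTest
    import FoundationModels

    // Hypothetical @Generable schema standing in for the real one.
    @Generable
    struct Recommendation {
        @Guide(description: "Name of the recommended activity.")
        var name: String
    }

    final class GenerationConsistencyTests: XCTestCase {
        func testRepeatedStructuredGeneration() async throws {
            let iterations = 50 // scale toward 1000+ once stable
            var latencies: [Duration] = []
            let clock = ContinuousClock()
            for _ in 0..<iterations {
                let session = LanguageModelSession() // fresh session per run
                let start = clock.now
                let response = try await session.respond(
                    to: "Suggest one low-impact exercise.",
                    generating: Recommendation.self
                )
                latencies.append(clock.now - start)
                XCTAssertFalse(response.content.name.isEmpty,
                               "schema adherence: name must be populated")
            }
            print("mean latency:", latencies.reduce(.zero, +) / iterations)
        }
    }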