Download the Foundation Models Adaptor Training Toolkit
Hi, after I clicked the download button I was redirected to this page, https://developer.apple.com, and the toolkit did not download.
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or what to try in order to fix it.
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Machine Learning, Create ML, Apple Intelligence
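I haven't found a published table for those numeric codes either; one way to get more detail is to catch LanguageModelSession.GenerationError and switch over its cases instead of relying on the localized description. A minimal sketch follows; the case names are taken from the current SDK and are assumptions on my part, so check the framework's generated interface for the full list:

import FoundationModels

// A minimal sketch: catch GenerationError explicitly to see which case the
// opaque "error 4" actually is. Case names are assumptions from the current
// SDK and may change between betas.
func describeGenerationFailure(_ prompt: String) async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        switch error {
        case .exceededContextWindowSize:
            print("The prompt plus transcript exceeded the context window.")
        case .guardrailViolation:
            print("The request or response was blocked by the safety guardrails.")
        default:
            // Other cases (assets unavailable, unsupported language, etc.)
            print("Generation failed: \(error.localizedDescription)")
        }
    } catch {
        print("Unexpected error: \(error)")
    }
}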
Has Apple made any commitment to versioning the on-device Foundation Models? If you build a feature that works great on 26.0 and a change to the model or guardrails in 26.1 breaks it, is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to pin a model version, as the server-based LLM provider APIs allow? If not, this sounds risky to build on.
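As far as I can tell there is no way to pin a model version; the closest runtime control is checking model availability and gating the feature behind your own flag, so you can disable it gracefully rather than ship a broken path. A sketch, assuming SystemLanguageModel.availability behaves as currently documented:

import FoundationModels

// A sketch of gating an Apple Intelligence feature at runtime. This does not
// pin a model version (no such API appears to exist); it only lets you turn
// the feature off cleanly if the model is unavailable or you decide to pull it.
func foundationModelsFeatureEnabled() -> Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true
    case .unavailable(let reason):
        print("On-device model unavailable: \(reason)")
        return false
    }
}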
When using Foundation Models, is it possible to ask the model to produce output in a specific language, other than giving an instruction like "Provide answers in <language>."? (I tried that and it kind of worked, but it seems fragile.)
I haven't noticed an API for this, and I have a use case where the output should be in a user-selectable language that is not the current system language.
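I'm not aware of a dedicated output-language option either; one approach that seems sturdier than asking in the prompt is to state the target language in the session instructions, which the model weighs more heavily. A minimal sketch, with instruction wording of my own:

import FoundationModels

// A sketch: request a specific output language via session instructions.
// There is no dedicated language parameter that I'm aware of, so the
// instruction text below is an assumption about what works well.
func makeSession(outputLanguage: Locale.Language) -> LanguageModelSession {
    let languageName = Locale.current.localizedString(
        forLanguageCode: outputLanguage.languageCode?.identifier ?? "en"
    ) ?? "English"
    return LanguageModelSession {
        "You are a helpful assistant."
        "Always respond in \(languageName), regardless of the language of the prompt."
    }
}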
That was at My Fresh Basket in Spokane, Washington. They are using an expired Wi-Fi certificate domain through godaddy.com that expired on April 30, 2020. I have complete information on it if anybody needs me to forward it or wants to examine it themselves, but be wary when you connect to the Wi-Fi at My Fresh Basket in Spokane, Washington.
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool which, for those locations, finds their latitude/longitude on a map and populates that in the response.
However, I cannot get the language model session to see or use my tool.
I have code like this passing the tool to my session:
import FoundationModels

class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        // Pass the tool to the session alongside the instructions.
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }

        // Stream the structured response into the observable model.
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing locations: \(error)")
        }
    }
}
And the tool itself looks something like this:
import Foundation
import FoundationModels
import MapKit

struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        // Search near the supplied center point for the named place.
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )

        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate

        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}
But trying a bunch of different prompts has not triggered the tool. Instead, what appear to be totally random locations are filled into the resulting model, and at no point does a breakpoint in my tool code get hit.
Has anybody successfully gotten a tool to be called?
I just recently updated to the iOS 26 beta (23A5336a) to test an app I am developing.
I am running an MLModel loaded from a .mlmodelc file.
On the current iOS version, 18.6.2, the model runs as expected with no issues.
However, on iOS 26 I now get an error when I try to run inference on the model, passing a camera frame into it.
Below is the error I see when I attempt to run an inference.
At the bottom it says "Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model". Does this indicate I need to convert my model, or something else? I don't understand, since it runs normally on iOS 18.
Any help getting this to run again would be greatly appreciated.
Thank you,
processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 :
[Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
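One way to narrow this down is to reload the same compiled model with the Neural Engine excluded via MLModelConfiguration.computeUnits; if prediction succeeds on .cpuOnly or .cpuAndGPU but fails on .all, the failure is specific to the ANE path on the iOS 26 beta and worth a Feedback report. A diagnostic sketch, where the "faceParsing" resource name is taken from the log above:

import CoreML

// Diagnostic sketch: load the same compiled model with different compute
// units to see whether only the Neural Engine path fails on iOS 26.
// "faceParsing" comes from the error log above; adjust to your bundle.
func loadFaceParsingModel(computeUnits: MLComputeUnits) throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "faceParsing", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let configuration = MLModelConfiguration()
    configuration.computeUnits = computeUnits   // .cpuOnly, .cpuAndGPU, .cpuAndNeuralEngine, .all
    return try MLModel(contentsOf: url, configuration: configuration)
}

// Usage: try .cpuOnly first; if inference works there but fails with .all,
// the problem is in the ANE compilation of this model on the new OS.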
Hi all, I am interested in unlocking unique applications with the new Foundation Models. I have a few questions regarding the availability of the following features:
Image input: the June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), but I can't find any information about using images as input/prompts for the Foundation Models. When will this be available? I understand there are existing Vision ML APIs, but I want image input to a multimodal on-device LLM (VLM) for image-understanding features like "Which player is holding the ball in the image?".
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Vision, Machine Learning, Core ML, Apple Intelligence