When using Foundation Models, is it possible to ask the model to produce output in a specific language, apart from giving an instruction like "Provide answers in ."? (I tried that and it kind of worked, but it seems fragile.)
I haven't noticed an API to do so and have a use-case where the output should be in a user-selectable language that is not the current system language.
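For reference, this is the instruction-based workaround I mean, as a rough sketch: the user-selected language name is simply stated in the session's instructions (I have not found a dedicated output-language API; the helper and its parameter are hypothetical).
import FoundationModels

// Hypothetical helper: languageName is whatever the user picked, e.g. "French".
func makeSession(respondingIn languageName: String) -> LanguageModelSession {
    LanguageModelSession(
        instructions: Instructions {
            "Always respond in \(languageName), regardless of the language of the user's prompt."
        }
    )
}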
This is my code:
switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark")
}
When I switch off Apple Intelligence, I expected "Please turn on Apple Intelligence.", but instead I get "Please come back later."
This seems to be the wrong error?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0, but they change the model or guardrails in 26.1 and it breaks your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, as in all of the server-based LLM provider APIs? If not, it sounds risky to build on.
In this WWDC25 session, it is explicitly mentioned that apps should support AttributedString for text parameters to their App Intents.
However, I have not gotten this to work. Whenever I pass rich text (either generated by the new "Use Model" intent or generated manually for example using "Make Rich Text from Markdown"), my Intent gets an AttributedString with the correct characters, but with all attributes stripped (so in effect just plain text).
import AppIntents

struct TestIntent: AppIntent {
    static var title = LocalizedStringResource(stringLiteral: "Test Intent")
    static var description = IntentDescription("Tests Attributed Strings in Intent Parameters.")

    @Parameter
    var text: AttributedString

    func perform() async throws -> some IntentResult & ReturnsValue<AttributedString> {
        return .result(value: text)
    }
}
Is there anything else I am missing?
According to the Tool documentation, the arguments to the tool are specified as a static struct type T, which is passed to tool.call(arguments: T). However, if the arguments are not known until runtime, is it still possible to create a Tool object with the proper parameters? Say a JSON-style dictionary is passed into the Tool init function to specify T; is this achievable?
I just tried to write a very simple test using Foundation Models, but it gave me an error like this:
"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"
The simple code is listed below:
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")
So what's the problem here?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello,
I am developing an iOS app that uses machine learning models.
To improve accuracy and user experience, I would like to download .mlmodel files (compiled and compressed as zip files) from our own server after the app is installed, and use them for inference within the app.
No executable code, scripts, or dynamic libraries will be downloaded—only model data files are used.
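For clarity, this is roughly the flow I have in mind, as a sketch assuming the zip has already been downloaded and unpacked to a local .mlmodel file; only Core ML APIs are used, and nothing from the download is executed as code.
import CoreML

// Compile the downloaded .mlmodel on device, then load it for inference.
func loadDownloadedModel(at modelURL: URL) async throws -> MLModel {
    // Compiles the raw .mlmodel into an .mlmodelc bundle in a temporary location.
    let compiledURL = try await MLModel.compileModel(at: modelURL)
    return try MLModel(contentsOf: compiledURL)
}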
According to App Store Review Guideline 2.5.2, I understand that apps may not download or execute code which introduces or changes features or functionality.
In this case, are compiled and zip-compressed .mlmodel files considered "data" rather than "code", and is it allowed to download and use them in the app?
If there are any restrictions or best practices related to this, please let me know.
Thank you.
Does the Foundation Model provide Objective-C compatible APIs?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
WWDC25: Combine Metal 4 machine learning and graphics
The session demonstrated a way to combine a neural network with the graphics pipeline directly through the shaders, using texture compression as an example. However, there is no mention of which ML technique is used to compress the texture.
Can anyone point me to some well-known models for this particular use case shown at WWDC25?
Hello everybody!
I’m encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn’t occur on Beta 1 or Beta 2 using the same codebase.
Reproduction Context
I’m developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic.
To isolate the issue, I opened the official WWDC sample project from Adding intelligent app features with generative models, and the same guardrailViolation error appeared without any modifications.
Simplified Working Example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:
import Foundation
import Playgrounds
import FoundationModels

@Generable
struct GeneableLandmark {
    @Guide(description: "Name of the landmark to visit")
    var name: String
}

final class LandmarkSuggestionGenerator {
    var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
    private var session: LanguageModelSession

    init() {
        self.session = LanguageModelSession(
            instructions: Instructions {
                """
                generate a list of landmarks to visit
                """
            }
        )
    }

    func createLandmarkSuggestion(location: String) async throws {
        let stream = session.streamResponse(
            generating: GeneableLandmark.self,
            options: GenerationOptions(sampling: .greedy),
            includeSchemaInPrompt: false
        ) {
            """
            Generate a list of landmarks to visit in \(location)
            """
        }
        for try await partialResponse in stream {
            landmarkSuggestion = partialResponse
        }
    }
}
#Playground {
    let generator = LandmarkSuggestionGenerator()
    Task {
        do {
            try await generator.createLandmarkSuggestion(location: "New york")
            if let suggestion = generator.landmarkSuggestion {
                print("Suggested landmark: \(suggestion)")
            } else {
                print("No suggestion generated.")
            }
        } catch {
            print("Error generating landmark suggestion: \(error)")
        }
    }
}
But as soon as I use the Sample ItineraryPlanner:
#Playground {
    // Example landmark for demonstration
    let exampleLandmark = Landmark(
        id: 1,
        name: "San Francisco",
        continent: "North America",
        description: "A vibrant city by the bay known for the Golden Gate Bridge.",
        shortDescription: "Iconic Californian city.",
        latitude: 37.7749,
        longitude: -122.4194,
        span: 0.2,
        placeID: nil
    )
    let planner = ItineraryPlanner(landmark: exampleLandmark)
    Task {
        do {
            try await planner.suggestItinerary(dayCount: 3)
            if let itinerary = planner.itinerary {
                print("Suggested itinerary: \(itinerary)")
            } else {
                print("No itinerary generated.")
            }
        } catch {
            print("Error generating itinerary: \(error)")
        }
    }
}
The error pops up:
Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))
Based on my tests:
The error may not be tied to structure complexity (since more nested structures work)
The issue may stem from the tools or prompt content used inside the ItineraryPlanner
The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas
Thank you in advance for your help. Let me know if more details or reproducible code samples are needed - I’m happy to provide them.
Best,
Sasha Morozov
When I am doing an uncached load of CoreML model on ANE, I received this warning in Xcode console
Type of hiddenStates in function main's I/O contains unknown strides. Using unknown strides for MIL tensor buffers with unknown shapes is not recommended in E5ML. Please use row_alignment_in_bytes property instead. Refer to https://e5-ml.apple.com/more-info/memory-layouts.html for more information.
However, the web link does not seem to be working. Where can I find more information about this, and how can I fix it?
Topic:
Machine Learning & AI
SubTopic:
Core ML
I have been using "apple" to test Foundation Models.
I thought this was local, but today the answer changed: halfway through an explanation, the guardrailViolation error was suddenly triggered! And as of yesterday, all references to "Apple II" and "Apple III" now refer me to consult apple.com!
Does Foundation Models connect to the Internet for answers? I'm using beta 3.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions.
Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens.
That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
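One idea I'm considering is a retrieval-style approach: instead of putting the documentation in the instructions, register a tool the model can call to fetch only the relevant passages. A rough sketch, following the Tool shape shown in the WWDC25 sessions; DocumentationIndex and its search(_:limit:) method are hypothetical stand-ins for our existing in-app search:
import FoundationModels

struct SearchDocumentationTool: Tool {
    let name = "searchDocumentation"
    let description = "Searches the in-app documentation and returns the most relevant passages."

    @Generable
    struct Arguments {
        @Guide(description: "The user's question, rephrased as a search query.")
        var query: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Only the retrieved passages consume prompt tokens, not the whole documentation set.
        let passages = DocumentationIndex.shared.search(arguments.query, limit: 3)
        return ToolOutput(passages.joined(separator: "\n\n"))
    }
}

let session = LanguageModelSession(
    tools: [SearchDocumentationTool()],
    instructions: Instructions {
        "Answer questions about the app using only the results of the searchDocumentation tool."
    }
)
Would something like this be the recommended pattern here?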
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Apologies if this is obvious to everyone but me... I'm using the Tahoe AI foundation models. When I get an error, I'm trying to handle it properly.
I see the errors described here: https://developer.apple.com/documentation/foundationmodels/languagemodelsession/generationerror/context, as well as in the headers. But all I can figure out how to see is error.localizedDescription which doesn't give me much to go on.
For example, an error's description is:
The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 2.)
That doesn't give me much to go on. How do I get the actual error number/enum value out of this, short of parsing that text to look for the int at the end?
This one is:
case guardrailViolation(LanguageModelSession.GenerationError.Context)
So I'd like to know how to get from the catch for session.respond to something I can act on. I feel like it's there, but I'm missing it.
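For reference, this is the pattern I'm hoping works: catch the typed error instead of reading localizedDescription, then switch over its cases. A sketch; guardrailViolation comes from the header above, exceededContextWindowSize is another case I believe exists, and you'd swap in whichever cases you actually handle:
import FoundationModels

func respondHandlingErrors(to prompt: String) async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        // The typed error exposes the enum case and its Context directly.
        switch error {
        case .guardrailViolation(let context):
            print("Guardrail violation: \(context.debugDescription)")
        case .exceededContextWindowSize(let context):
            print("Context window exceeded: \(context.debugDescription)")
        default:
            print("Other generation error: \(error)")
        }
    } catch {
        print("Unrelated error: \(error)")
    }
}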
Thanks!
In the Playground I'm trying to bias my LanguageModel to use a tool I registered, but I don't see it actually calling the tool. I'm following the developer video on the landmarks itinerary generation tutorial almost verbatim. Is this a prompt-engineering thing I'm missing? Or is it possible that I'm injecting my tool incorrectly?
Hey
I tried using a few regular expressions and all fail with an error:
Unhandled error streaming response: A generation guide with an unsupported pattern was used.
Is there a list of supported features? I don't see it in the docs, and the guide takes a Regex.
Anything with e.g. [A-Z] fails.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm implementing an App Intent for my iOS app that helps users plan trip activities. It only works when run as a shortcut, but not when using voice through Siri. There are two issues:
The ShortcutsTripEntity will only accept a voice input for a specific trip but not others.
I'm stuck with a throwing error when trying to use requestDisambiguation() on the activity day @Parameter property.
How do I rectify these issues?
This is blocking me from completing a critical feature that lets users quickly plan activities through Siri and Shortcuts.
Expected behavior for trip input: The intent should make Siri accept the spoken trip input from any of the options.
Actual behavior for trip input: Siri only accepts the same trip when spoken but accepts any when selected by click/touch.
Expected behavior for day input: Siri should accept the spoken selected option.
Actual behavior for day input: Siri only accepts an input by click/touch, yet it throws an error at runtime. I'm happy to provide more code, but here's the relevant part:
struct PlanActivityTestIntent: AppIntent {
    @Parameter(
        title: "Trip",
        description: "The trip to plan an activity for",
        default: ShortcutsTripEntity(id: UUID().uuidString, title: "Untitled trip"),
        requestValueDialog: "Which trip would you like to add an activity to?"
    )
    var tripEntity: ShortcutsTripEntity

    @Parameter(title: "Activity Title", description: "The title of the activity", requestValueDialog: "What do you want to do or see?")
    var title: String

    @Parameter(title: "Activity Day", description: "Activity Day", default: ShortcutsItineraryDayEntity(itineraryDay: .init(itineraryId: UUID(), date: .now), timeZoneIdentifier: "UTC"))
    var activityDay: ShortcutsItineraryDayEntity

    func perform() async throws -> some ProvidesDialog {
        // ...other code...
        let tripsStore = TripsStore()
        // Load trips and map them to entities.
        try? await tripsStore.getTrips()
        let tripsAsEntities = tripsStore.trips.map { trip in
            let id = trip.id ?? UUID()
            let title = trip.title
            return ShortcutsTripEntity(id: id.uuidString, title: title, trip: trip)
        }

        // Ask the user to select a trip. This line doesn't accept a voice answer. Why?
        let selectedTrip = try await $tripEntity.requestDisambiguation(
            among: tripsAsEntities,
            dialog: .init(
                full: "Which of the \(tripsAsEntities.count) trips would you like to add an activity to?",
                supporting: "Select a trip",
                systemImageName: "safari.fill"
            )
        )

        // This line throws an error.
        let selectedDay = try await $activityDay.requestDisambiguation(
            among: daysAsEntities,
            dialog: "Which day would you like to plan an activity for?"
        )
        // ...other code...
    }
}
Here are some related images that might help:
@Generable
enum Breakfast {
    case waffles
    case pancakes
    case bagels
    case eggs
}

do {
    let session = LanguageModelSession()
    let userInput = "I want something sweet."
    let prompt = "Pick the ideal breakfast for request: \(userInput)"
    let response = try await session.respond(to: prompt, generating: Breakfast.self)
    print(response.content)
} catch let error {
    print(error)
}
I want to test the @Generable demo, but I get the error below:
decodingFailure(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Failed to convert text into into GeneratedContent\nText: waffles", underlyingErrors: [Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "The given data was not valid JSON.", underlyingError: Optional(Error Domain=NSCocoaErrorDomain Code=3840 "Unexpected character 'w' around line 1, column 1." UserInfo={NSJSONSerializationErrorIndex=0, NSDebugDescription=Unexpected character 'w' around line 1, column 1.})))]))
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Almost everywhere else you see Apple Intelligence, you get to select whether it runs on device, on Private Cloud Compute, or via ChatGPT. Is there a way to do that in code with the Foundation Models framework? I searched through the docs and couldn't find anything, but maybe I missed it.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
At WWDC25, Metal 4 introduced quite exciting new features for machine learning optimization. As we all know, PyTorch's Metal Performance Shaders (MPS) backend is one of the most important tools for machine learning on the Mac, but the MPS introduction website shows no support information for Metal 4.