Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements?
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
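For completeness: independent of the install-time gating, I'm also planning a runtime fallback check, a minimal sketch using SystemLanguageModel.availability from FoundationModels (the logging is just illustrative):

import FoundationModels

// Runtime fallback: verify the on-device model backing Apple Intelligence is actually usable.
func isAppleIntelligenceAvailable() -> Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true
    case .unavailable(let reason):
        print("Foundation Models unavailable: \(reason)")
        return false
    }
}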
Thanks in advance for your help.
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence, to help you perform intelligent tasks specific to your app.
I keep getting the error “An unsupported language or locale was used.”
Is there any documentation that specifies the accepted languages or locales for the Foundation Models framework?
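In case it helps, the closest thing I've found so far is checking the model's supported languages at runtime before prompting. This assumes SystemLanguageModel exposes a supportedLanguages property in the current SDK (please correct me if that's not the right API):

import Foundation
import FoundationModels

// Assumption: SystemLanguageModel publishes the set of languages the on-device model supports.
let model = SystemLanguageModel.default
let currentLanguage = Locale.current.language

if model.supportedLanguages.contains(where: { $0.languageCode == currentLanguage.languageCode }) {
    print("Current language should be accepted by the model")
} else {
    print("Current language does not appear to be supported; prompts may fail with this error")
}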
Topic: Machine Learning & AI / SubTopic: Foundation Models
I'm a bit new to LLMs and to Foundation Models. My understanding is that there is a token limit of around 4K.
I want to process the contents of files which may be quite large. I first tried going the Tool route, but that didn't work out, so I then tried manually chunking the text to keep things under the limit.
It mostly works, except that every now and then it still exceeds the limit. This happens even when the chunks are less than 100 characters. The instructions themselves are about 500 characters, so each prompt is well below 1,000 characters in total, which, in my limited understanding, should not come anywhere near 4K tokens.
Any ideas on what is going on here?
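For concreteness, this is the shape of the chunked processing I have in mind, a minimal sketch (the instructions string and the pre-chunked input are simplified placeholders), where each chunk gets a fresh session so earlier chunks and responses don't keep accumulating in one transcript:

import FoundationModels

let instructions = "Summarize the following text chunk in one sentence." // placeholder

func summarizeChunks(_ chunks: [String]) async throws -> [String] {
    var results: [String] = []
    for chunk in chunks {
        // A fresh session per chunk, so previous chunks don't add to the context.
        let session = LanguageModelSession(instructions: instructions)
        let response = try await session.respond(to: chunk)
        results.append(response.content)
    }
    return results
}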
Topic: Machine Learning & AI / SubTopic: Foundation Models
Hi, I am a new iOS developer trying to learn how to integrate the Apple Foundation Models framework.
My setup is:
Mac M1 Pro
macOS 26 Beta
Version 26.0 beta 3
Apple Intelligence & Siri --> On
Here is the code:
func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(instructions: """
                Extract time from a message. Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}
and I get this error:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))
Can you help me get through this?
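In case it helps with diagnosis, I also added a quick availability check before creating the session; a minimal sketch (my understanding is that these are the documented unavailable reasons, but there may be others):

import FoundationModels

func checkModelAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Model is available")
    case .unavailable(.deviceNotEligible):
        print("Device doesn't support Apple Intelligence")
    case .unavailable(.appleIntelligenceNotEnabled):
        print("Apple Intelligence is turned off in Settings")
    case .unavailable(.modelNotReady):
        print("Model assets are still downloading or not ready yet")
    case .unavailable(let reason):
        print("Unavailable for another reason: \(reason)")
    }
}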
I've tried creating a LoRA adapter using the example dataset and scripts from adapter_training_toolkit_v26_0_0 (the latest available) on macOS 26 Beta 6.
import SwiftUI
import FoundationModels
import Playgrounds

#Playground {
    // The absolute path to your adapter.
    let localURL = URL(filePath: "/Users/syl/Downloads/adapter_training_toolkit_v26_0_0/train/test-lora.fmadapter")
    // Initialize the adapter by using the local URL.
    let adapter = try SystemLanguageModel.Adapter(fileURL: localURL)
    // An instance of the system language model using your adapter.
    let customAdapterModel = SystemLanguageModel(adapter: adapter)
    // Create a session and prompt the model.
    let session = LanguageModelSession(model: customAdapterModel)
    let response = try await session.respond(to: "hello")
}
I get an "Adapter assets are invalid" error.
I've added the entitlements.
Is adapter_training_toolkit_v26_0_0 up to date?
Topic: Machine Learning & AI / SubTopic: Foundation Models
Hi,
Are there rules around using Foundation Models:
In a background task/session?
Concurrently, i.e. several requests at once using Swift Concurrency (see the sketch below for what I mean)?
I couldn't find this in the docs (sorry if I missed it), so I'm wondering what's supported and what the best practice is here.
In case it matters, my primary platform is Vision Pro (so, M2).
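Here is the kind of concurrent usage I mean, a minimal sketch (the prompts are placeholders), not something I'm claiming is officially supported:

import FoundationModels

// Several independent sessions responding in parallel via a task group.
func respondConcurrently(to prompts: [String]) async throws -> [String] {
    try await withThrowingTaskGroup(of: String.self) { group in
        for prompt in prompts {
            group.addTask {
                // One session per task; no session is shared across tasks.
                let session = LanguageModelSession()
                return try await session.respond(to: prompt).content
            }
        }
        var results: [String] = []
        for try await result in group {
            results.append(result)
        }
        return results
    }
}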
Hey everyone,
Is it possible to generate XML using the @Generable macro of the Foundation Models framework?
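To clarify what I'm after: today I'm doing something like the sketch below, a @Generable type whose structured result I then serialize to XML by hand (the Invoice type, its guides, and the serialization are just illustrative), and I'm wondering whether the framework can emit XML directly instead:

import FoundationModels

@Generable
struct Invoice {
    @Guide(description: "Invoice number, e.g. INV-0042")
    let number: String
    @Guide(description: "Total amount in EUR")
    let total: Double
}

func generateInvoiceXML() async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Create a sample invoice for a coffee order.",
        generating: Invoice.self
    )
    let invoice = response.content
    // Hand-rolled XML serialization of the structured result.
    return "<invoice><number>\(invoice.number)</number><total>\(invoice.total)</total></invoice>"
}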
Topic: Machine Learning & AI / SubTopic: Foundation Models
Seeing this error from time to time:
Context(debugDescription: "Content contains 4089 tokens, which exceeds the maximum allowed context size of 4096.", underlyingErrors: [])
Of course, 4089 is less than 4096, so what is this telling me, and how do I work around it? Is the effective limit actually lower than 4096?
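For now I'm working around it by catching the error and retrying once with a fresh session, a rough sketch assuming the failure surfaces as GenerationError.exceededContextWindowSize (that's my reading of the message; the helper below is hypothetical):

import FoundationModels

func respond(_ prompt: String, reusing session: LanguageModelSession) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // The accumulated transcript got too large; retry once with a fresh session.
        return try await LanguageModelSession().respond(to: prompt).content
    }
}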
Topic: Machine Learning & AI / SubTopic: Foundation Models
I installed Xcode 26.0 beta and downloaded the generative models sample from here:
https://developer.apple.com/documentation/foundationmodels/adding-intelligent-app-features-with-generative-models
But when I run it in the iOS 26.0 simulator, I get the error shown here. What's going wrong?
Topic: Machine Learning & AI / SubTopic: Foundation Models
Hi,
I have an app that uses Core Data to store user information and display it in various views. I want to know if it's possible to easily integrate this setup with FoundationModels to make it easier for the user to query and manipulate the information, and if so, how would I go about it? Can the model be pointed to the database schema file and the SQLite file sitting in the user's app group container to parse out the information needed? And/or should the NSManagedObjects be made @Generable for better output? Any guidance about this would be useful.
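To make the question more concrete, here is the direction I'm currently imagining, a minimal sketch where I fetch the records myself, flatten them into the prompt, and ask for a @Generable result (the "Event" entity, its attributes, and ActivitySummary are hypothetical):

import CoreData
import FoundationModels

@Generable
struct ActivitySummary {
    @Guide(description: "A short, friendly summary of the user's activity")
    let text: String
}

func summarizeEvents(in context: NSManagedObjectContext) async throws -> String {
    // Hypothetical entity "Event" with "title" and "date" attributes.
    let request = NSFetchRequest<NSManagedObject>(entityName: "Event")
    request.fetchLimit = 50
    let events = try context.fetch(request)

    // Flatten the fetched records into plain text for the prompt.
    let lines = events.map { event -> String in
        let title = (event.value(forKey: "title") as? String) ?? ""
        let date = (event.value(forKey: "date") as? Date)?.formatted() ?? ""
        return "\(title) on \(date)"
    }.joined(separator: "\n")

    let session = LanguageModelSession(instructions: "Summarize the user's tracked events.")
    let response = try await session.respond(
        to: "Here are the user's events:\n\(lines)",
        generating: ActivitySummary.self
    )
    return response.content.text
}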
I've downloaded the Xcode beta and run the sample project "FoundationModelsTripPlanner", but I got this error when trying to generate the response.
InferenceError::inferenceFailed::Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog}
Device: M1 Pro
Question:
Is it because the M1 does not support this feature?
Topic: Machine Learning & AI / SubTopic: Foundation Models
I downloaded Xcode beta 1 on my Mac (I did not upgrade the OS). The target OS level is iOS 26, and the iOS 26 device simulator is downloaded and selected as the run destination.
When I try a simple Playground (#Playground) in Xcode, I get a session error.
import FoundationModels
import Playgrounds

#Playground {
    let avail = SystemLanguageModel.default.availability
    if avail != .available {
        print("SystemLanguageModel not available")
        return
    }
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Create a recipe for apple pie")
        print(response.content)
    } catch {
        print(error)
    }
}
The error I get is:
Asset com.apple.gm.safety_deny_input.foundation_models.framework.api not found in Model Catalog
Is there a way to test-drive the Foundation Models code without upgrading to macOS 26?
How reliable is the model for comparing against reference values, such as a cholesterol test, to inform, for example, whether it is worth going to see a doctor?
I would like to use a Tool to attach simple blood test data to the session so that the model can analyze it and make a simple suggestion about whether it is necessary to see a doctor, etc. Is that feasible?
P.S.: This is with the local (on-device) model.
I am experimenting with Foundation Models in my time tracking app to analyze users' tracked events, but I am finding that the model struggles with even basic computation of time, specifically converting from seconds to hours and minutes.
To give just one example, when I prompt:
"Convert 3672 seconds to hours, minutes, and seconds. Don't include the calculations in the resulting output"
I get this:
"3672 seconds is equal to 1 hour, 0 minutes, and 36 seconds".
Which is clearly wrong - it should be 1 hour, 1 minute, and 12 seconds. Another issue I saw a lot was that seconds were treated as minutes, or that the hours were just completely off.
What can I do to make the support for math better? Or is that just something that the model is not meant to be used for?
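For now I've moved the arithmetic into plain code and only ask the model for the surrounding wording; a minimal sketch of the conversion itself:

// Deterministic conversion: 3672 s -> 1 h, 1 min, 12 s.
func hoursMinutesSeconds(from totalSeconds: Int) -> (hours: Int, minutes: Int, seconds: Int) {
    let hours = totalSeconds / 3600
    let minutes = (totalSeconds % 3600) / 60
    let seconds = totalSeconds % 60
    return (hours, minutes, seconds)
}

let (h, m, s) = hoursMinutesSeconds(from: 3672)
print("\(h) hour(s), \(m) minute(s), \(s) second(s)")   // 1 hour(s), 1 minute(s), 12 second(s)

Presumably DateComponentsFormatter could handle the formatting too; the point is just that the unit math stays deterministic instead of being left to the model.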
Topic: Machine Learning & AI / SubTopic: Foundation Models
The documentation on adapter training lacks any details about training on a dataset that includes tool calling. And the page about tool calling itself only explains how to use it from Swift, without any internal details that would be useful for training.
The question is: what should the schema look like for including tool calling in a dataset?
Topic: Machine Learning & AI / SubTopic: Foundation Models
I'm developing a macOS application using the FoundationModels framework
(LanguageModelSession) and encountering issues with the content sanitizer
blocking legitimate text input.
Issue Description:
The content sanitizer is flagging text strings that contain certain
substrings, even when they represent legitimate technical content. For
example:
F_SEEL_***1S.wav (sE Electronics ***1S microphone model)
Technical product identifiers
Serial numbers and version codes
Broader Concern:
The content sanitizer appears to be applying restrictions that seem
inappropriate for user-owned content. Even if a filename were something
like "human ***.wav", users should have the right to process their own
legitimate files on their own devices without content filtering
interference.
Error Messages:
SensitiveContentSettings: Sanitizer model found unsafe content in value
FoundationModels.LanguageModelSession.GenerationError error 2
Questions:
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text?
3. Are there APIs to process personal content without filtering restrictions?
Environment:
macOS 26.0
FoundationModels framework
LanguageModelSession
Any guidance would be appreciated.
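For reference, this is roughly how I'm calling the session and handling the failure today, a minimal sketch (describeFile and the fallback behavior are just illustrative):

import FoundationModels

func describeFile(named fileName: String) async -> String {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Describe the audio file named \(fileName).")
        return response.content
    } catch LanguageModelSession.GenerationError.guardrailViolation(let context) {
        // The sanitizer rejected the input; fall back to the raw file name.
        print("Guardrail violation: \(context.debugDescription)")
        return fileName
    } catch {
        return "Error: \(error)"
    }
}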
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models session:
Error Resource (Local Sanitizer Asset) unavailable error.
import FoundationModels
import Playgrounds

#Playground {
    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "Tell me 3 colors")
        print(result.content)
    } catch {
        print("Error", error)
    }
}
I couldn't find any resource guiding me on how to solve this. Any help/workaround?
Thank you!
Topic: Machine Learning & AI / SubTopic: Foundation Models
When I initialize a session with an existing transcript using this initializer:
public convenience init(model: SystemLanguageModel = .default, guardrails: LanguageModelSession.Guardrails = .default, tools: [any Tool] = [], transcript: Transcript)
The tools get ignored. I noticed that when doing this, the model never uses the tools. When inspecting the transcript, I can see that the instructions entry does not have any tools available to it.
I tried this both with transcripts that already include an instructions entry and with ones that don't - both yield the same result.
Is this the intended behavior / am I missing something here?
Topic: Machine Learning & AI / SubTopic: Foundation Models
Download the Foundation Models Adapter Training Toolkit
Hi, after I clicked on the download button, I was redirected to this page, https://developer.apple.com, and the toolkit did not download.