Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models framework.
My setup is:
Mac (M1 Pro)
macOS 26 beta
Version 26.0 beta 3
Apple Intelligence & Siri --> On
Here is the code:
func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(instructions: """
                Extract time from a message. Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}
and I get this error:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))
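For what it's worth, I also check whether the model reports itself as available before sending anything (a minimal sketch; it doesn't clear the guardrail error, but it should rule out missing model assets):

import FoundationModels

// Check on-device model availability before opening a session.
switch SystemLanguageModel.default.availability {
case .available:
    print("Model ready")
case .unavailable(let reason):
    // e.g. Apple Intelligence turned off, or model assets still downloading
    print("Model unavailable: \(reason)")
}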
Can you help me get through this?
Foundation Models framework worked perfectly on macOS 26 Beta 2, but starting from Beta 3 and continuing through Beta 6 (latest), I get dyld symbol errors even with the exact code from Apple's documentation.
Environment:
macOS 26.0 Beta 6 (25A5351b)
Xcode 26 Beta 6
M4 Max MacBook Pro
Apple Intelligence enabled and downloaded
Error Details:
dyld[Process]: Symbol not found:
_$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /path/to/app.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Code Used (Exact from Documentation):
import FoundationModels
// This worked on Beta 2, crashes on Beta 3+
let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Hello")
What I've Verified:
FoundationModels.framework exists in /System/Library/Frameworks/
Framework is properly linked in Xcode project
Apple Intelligence is enabled and working
Same code works in older beta versions
Issue persists even with completely fresh Xcode projects
Analysis:
The dyld error suggests the LanguageModelSession(model:) constructor is missing. The symbol shows it's looking for a constructor with parameters (model:guardrails:tools:instructions:), but the documentation still shows the simple (model:) constructor.
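One experiment that might isolate this: call the newer designated initializer with every parameter spelled out, instead of relying on the default-argument form from the docs (a sketch based on the parameter labels in the mangled symbol above; guardrails: .default and instructions: nil are my assumptions about the defaults):

import FoundationModels

// Mirrors the (model:guardrails:tools:instructions:) shape from the dyld symbol.
let model = SystemLanguageModel.default
let session = LanguageModelSession(
    model: model,
    guardrails: .default,
    tools: [],
    instructions: nil
)
let response = try await session.respond(to: "Hello")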
Questions:
Has the LanguageModelSession API changed since Beta 2?
Should we now use the constructor with guardrails/tools/instructions parameters?
Is this a known issue with recent betas?
Are there updated code samples for the current API?
Additional Context:
This affects both basic SystemLanguageModel usage AND custom adapter loading. The same dyld symbol errors occur when trying to create SystemLanguageModel(adapter: adapter) as well.
Any guidance on the correct API usage for current betas would be greatly appreciated. The documentation appears to be out of sync with the actual framework implementation.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Is the face and body detection service in the Vision framework a local model or a cloud model? Is there a performance report?
https://developer.apple.com/documentation/vision
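For reference, this is the kind of request I mean (a minimal face rectangles sketch):

import Vision

// A basic face-detection request; the question is whether requests like
// this execute locally or call out to a cloud service.
func detectFaces(in cgImage: CGImage) throws -> [VNFaceObservation] {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}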
Hi all, I am interested in unlocking unique applications with the new Foundation Models. I have a few questions regarding the availability of the following features:
Image input: the June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), but I can't find any information about using images as the input/prompt for the Foundation Models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input to a multimodal on-device LLM (VLM) instead, for features like "Which player is holding the ball in the image?" (image understanding).
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Vision
Machine Learning
Core ML
Apple Intelligence
Seeing this error from time to time:
Context(debugDescription: "Content contains 4089 tokens, which exceeds the maximum allowed context size of 4096.", underlyingErrors: [])
Of course, 4089 is less than 4096, so what is this telling me, and how do I work around it? Is the limit actually lower than 4096?
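The only workaround I've found so far is to catch the dedicated error case and start over with a fresh session (a sketch; my guess is that some headroom is reserved for the response, which would explain the limit triggering below 4096, but that's an assumption):

import FoundationModels

var session = LanguageModelSession()

func respondWithRecovery(to prompt: String) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Context full: retry once with a fresh, empty session.
        session = LanguageModelSession()
        return try await session.respond(to: prompt).content
    }
}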
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
In this online session, you can code along with us as we build generative AI features into a sample app live in Xcode. We'll guide you through implementing core features like basic text generation, as well as advanced topics like guided generation for structured data output, streaming responses for dynamic UI updates, and tool calling to retrieve data or take an action.
Check out these resources to get started:
Download the project files: https://developer.apple.com/events/re...
Explore the code along guide: https://developer.apple.com/events/re...
Join the live Q&A: https://developer.apple.com/videos/pl...
Agenda – All times PDT
10 a.m.: Welcome and Xcode setup
10:15 a.m.: Framework basics, guided generation, and building prompts
11 a.m.: Break
11:10 a.m.: UI streaming, tool calling, and performance optimization
11:50 a.m.: Wrap up
All are welcome to attend the session. To actively code along, you'll need a Mac with Apple silicon that supports Apple Intelligence running the latest release of macOS Tahoe 26 and Xcode 26.
If you have questions after the code along concludes please share a post here in the forums and engage with the community.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello,
I am studying the macOS 26 Apple Intelligence features.
I have created a basic Swift program with Xcode. The program sends prompts to FoundationModels.LanguageModelSession.
It works fine, but this model is not trained for programming or code completion.
Xcode has an AI code completion feature, called the "predictive code completion model".
So there are multiple on-device models on macOS 26?
Are there others?
Is there a way for me to send prompts to this predictive code completion model from my program?
Thanks
Hello!
I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error as well (given below). Would appreciate some more clarity around this. Thank you.
Code Block:
from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)
Error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
3 metadata = Metadata(
4 author="3P developer",
5 description="An adapter that writes play scripts.",
6 )
8 export_fmadapter(
9 output_dir="./",
10 adapter_name="myPlaywritingAdapter",
(...) 13 draft_checkpoint="draft-model-final.pt",
14 )
File /workspace/export/export_fmadapter.py:11
8 from typing import Any
10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
13 logger = logging.getLogger(__name__)
16 class MetadataKeys(enum.StrEnum):
File /workspace/export/export_utils.py:15
13 import torch
14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
17 from coremltools.optimize._utils import LutParams
ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
I have been able to train an adapter on Google's Colaboratory.
I am able to start a LanguageModelSession and load it with my adapter.
The problem is that after one simple prompt, the context window is 90% full.
If I start the session without the adapter, the same simple prompt consumes only 1% of the context window.
Has anyone encountered this? I asked Claude AI and it seems to think that my training script needs adjusting. Grok on the other hand is (wrongly, I tried) convinced that I just need to tweak some parameters of LanguageModelSession or SystemLanguageModel.
Thanks for any tips.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I’d like to submit a feature request regarding the availability of Foundation Models in MessageFilter extensions.
Background
MessageFilter extensions play a critical role in protecting users from spam, phishing, and unwanted messages. With the introduction of Foundation Models and Apple Intelligence, Apple has provided powerful on-device natural language understanding capabilities that are highly aligned with the goals of MessageFilter.
However, Foundation Models are currently unavailable in MessageFilter extensions.
Why Foundation Models Are a Great Fit for MessageFilter
Message filtering is fundamentally a natural language classification problem. Foundation Models would significantly improve:
Detection of phishing and scam messages
Classification of promotional vs transactional content
Understanding intent, tone, and semantic context beyond keyword matching
Adaptation to evolving scam patterns without server-side processing
All of this can be done fully on-device, preserving user privacy and aligning with Apple’s privacy-first design principles.
Current Limitations
Today, MessageFilter extensions are limited to relatively simple heuristics or lightweight models. This often results in:
Higher false positives
Lower recall for sophisticated scam messages
Increased development complexity to compensate for limited NLP capabilities
Request
Could Apple consider one of the following:
Allowing Foundation Models to be used directly within MessageFilter extensions
Providing a constrained or optimized Foundation Model API specifically designed for MessageFilter
Enabling a supported mechanism for MessageFilter extensions to delegate inference to the containing app using Foundation Models
Even limited access (e.g. short text only, strict execution limits) would be extremely valuable.
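To make the request concrete, this is roughly the code such an API could enable inside a filter extension (purely hypothetical; Foundation Models cannot currently be used in a MessageFilter extension, and the category set is just an example):

import FoundationModels

// Hypothetical on-device message classification.
@Generable
enum MessageCategory: String {
    case transactional, promotional, phishing, personal
}

func classify(_ messageBody: String) async throws -> MessageCategory {
    let session = LanguageModelSession(
        instructions: "Classify the SMS message into exactly one category."
    )
    let response = try await session.respond(to: messageBody, generating: MessageCategory.self)
    return response.content
}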
Closing
Foundation Models have the potential to significantly raise the quality and effectiveness of message filtering on Apple platforms while maintaining strong privacy guarantees. Supporting them in MessageFilter extensions would be a major improvement for both developers and users.
Thank you for your consideration and for continuing to invest in on-device intelligence.
I've built an iOS app with a novel approach to AI safety: a deterministic, pre-inference validation layer called Newton Engine.
Instead of relying on the LLM to self-moderate, Newton validates every prompt BEFORE it reaches the model. It uses shape theory and semantic analysis to detect:
• Corrosive frames (self-harm language patterns)
• Logical contradictions (requests that undermine themselves)
• Delegation attempts (asking AI to make human decisions)
• Jailbreak patterns (prompt injection, role-play escapes)
• Hallucination triggers (requests for fabricated citations)
The system achieves a 96% adversarial catch rate across 847 test cases, with zero false positives on benign prompts.
Key technical details:
• Pure Swift/SwiftUI, no external dependencies
• Runs entirely on-device (no server calls for validation)
• Deterministic (same input always produces same output)
• Auditable (full trace logging for every validation)
I'm preparing to submit to the App Store and wanted to ask:
Are there specific App Review guidelines I should reference for AI safety claims?
Is there interest from Apple in deterministic governance layers for Apple Intelligence integration?
Any recommendations for demonstrating safety compliance during review?
The app is called Ada, and the engine is open source at: github.com/jaredlewiswechs/ada-newton
Happy to share technical documentation or discuss the architecture with anyone interested.
See: parcri.net
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi everyone,
I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). Wanted to share v1.3 and get feedback from the community.
The Problem
Current AI safety is post-training — baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.
What Newton Does
Newton validates every prompt pre-inference and returns:
Phase (0/1/7/8/9)
Shape classification
Confidence score
Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.
v1.3 Detection Categories (14 total)
Jailbreak / prompt injection
Corrosive self-negation ("I hate myself")
Hedged corrosive ("Not saying I'm worthless, but...")
Emotional dependency ("You're the only one who understands")
Third-person manipulation ("If you refuse, you're proving nobody cares")
Logical contradictions ("Prove truth doesn't exist")
Self-referential paradox ("Prove that proof is impossible")
Semantic inversion ("Explain how truth can be false")
Definitional impossibility ("Square circle")
Delegated agency ("Decide for me")
Hallucination-risk prompts ("Cite the 2025 CDC report")
Unbounded recursion ("Repeat forever")
Conditional unbounded ("Until you can't")
Nonsense / low semantic density
Test Results
94.3% catch rate on 35 adversarial test cases (33/35 passed).
Architecture
User Input
↓
[ Newton ] → Validates prompt, assigns Phase
↓
Phase 9? → [ FoundationModels ] → Response
Phase 1/7/8? → Blocked with explanation
Key Properties
Deterministic (same input → same output)
Fully auditable (ValidationTrace on every prompt)
On-device (no network required)
Native Swift / SwiftUI
String Catalog localization (EN/ES/FR)
FoundationModels-ready (#if canImport)
Code Sample — Validation
let governor = NewtonGovernor()
let result = governor.validate(prompt: userInput)

if result.permitted {
    // Proceed to FoundationModels
    let session = LanguageModelSession()
    let response = try await session.respond(to: userInput)
} else {
    // Handle block
    print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
    print(result.trace.summary) // Full audit trace
}
Questions for the Community
Anyone else building pre-inference validation for FoundationModels?
Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
Interest in Shape Theory classification for prompt complexity?
Best practices for integrating with LanguageModelSession?
Links
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net
Happy to share more implementation details. Looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Swift Packages
Machine Learning
Apple Intelligence
I'm experimenting with Foundation Models and I'm trying to understand how to define a Tool whose input argument is defined at runtime. Specifically, I want a Tool that takes a single String parameter that can only take certain values defined at runtime.
I think my question is basically the same as this one: https://developer.apple.com/forums/thread/793471 However, the answer provided by the engineer doesn't actually demonstrate how to create the GenerationSchema. Trying to piece things together from the documentation that the engineer linked to, I came up with this:
let citiesDefinedAtRuntime = ["London", "New York", "Paris"]

let citySchema = DynamicGenerationSchema(
    name: "CityList",
    properties: [
        DynamicGenerationSchema.Property(
            name: "city",
            schema: DynamicGenerationSchema(
                name: "city",
                anyOf: citiesDefinedAtRuntime
            )
        )
    ]
)

let generationSchema = try GenerationSchema(root: citySchema, dependencies: [])
let tools = [CityInfo(parameters: generationSchema)]
let session = LanguageModelSession(tools: tools, instructions: "...")
With the CityInfo Tool defined like this:
struct CityInfo: Tool {
    let name: String = "getCityInfo"
    let description: String = "Get information about a city."
    let parameters: GenerationSchema

    func call(arguments: GeneratedContent) throws -> String {
        let cityName = try arguments.value(String.self, forProperty: "city")
        print("Requested info about \(cityName)")
        let cityInfo = getCityInfo(for: cityName)
        return cityInfo
    }

    func getCityInfo(for city: String) -> String {
        // some backend that provides the info
    }
}
This compiles and usually seems to work. However, sometimes the model will try to request info about a city that is not in citiesDefinedAtRuntime. For example, if I prompt the model with "I want to travel to Tokyo in Japan, can you tell me about this city?", the model will try to request info about Tokyo, even though this is not in the citiesDefinedAtRuntime array.
My understanding is that this should not be possible – constrained generation should only allow the LLM to generate an input argument from the list of cities defined in the schema.
Am I missing something here or overcomplicating things?
What's the correct way to make sure the LLM can only call a Tool with an input parameter from a set of possible values defined at runtime?
Many thanks!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I can’t seem to find a way to include an image when prompting the new on-device model in Xcode, even though Apple explicitly states that the model was trained and tested with image data (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates).
Has anyone managed to get this working, or are VLM-style capabilities simply not exposed yet?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I've created the following Foundation Models Tool, which uses the .anyOf guide to constrain the LLM's generation of suitable input arguments. When calling the tool, the model is only allowed to request one of a fixed set of sections, as defined in the sections array.
struct SectionReader: Tool {
    let article: Article
    let sections: [String]

    let name: String = "readSection"
    let description: String = "Read a specific section from the article."

    var parameters: GenerationSchema {
        GenerationSchema(
            type: GeneratedContent.self,
            properties: [
                GenerationSchema.Property(
                    name: "section",
                    description: "The article section to access.",
                    type: String.self,
                    guides: [.anyOf(sections)]
                )
            ]
        )
    }

    func call(arguments: GeneratedContent) async throws -> String {
        let requestedSectionName = try arguments.value(String.self, forProperty: "section")
        ...
    }
}
However, I have found that the model will sometimes call the tool with invalid (but plausible) section names, meaning that .anyOf is not actually doing its job (i.e. requestedSectionName is sometimes not a member of sections).
The documentation for the .anyOf guide says, "Enforces that the string be one of the provided values."
Is this a bug or have I made a mistake somewhere?
Many thanks for any help you provide!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi,
I have an app that uses Core Data to store user information and display it in various views. I want to know if it's possible to integrate this setup with Foundation Models to make it easier for the user to query and manipulate that information, and if so, how I would go about it. Can the model be pointed at the database schema file and the SQLite file sitting in the user's app group container to parse out the information it needs? And/or should the NSManagedObjects be made @Generable for better output? Any guidance about this would be useful.
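From what I can tell, the model can't read a SQLite store directly, so my current thinking is to expose fetches through a Tool and keep Core Data as the source of truth. A minimal sketch of what I mean (the entity and attribute names are made up):

import CoreData
import FoundationModels

struct FetchNotesTool: Tool {
    let name = "fetchNotes"
    let description = "Fetches the user's saved notes matching a search term."

    let context: NSManagedObjectContext

    @Generable
    struct Arguments {
        @Guide(description: "Text to search note titles for")
        var searchTerm: String
    }

    func call(arguments: Arguments) async throws -> String {
        try await context.perform {
            // "Note" and "title" are hypothetical; substitute your entity/attribute.
            let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
            request.predicate = NSPredicate(format: "title CONTAINS[cd] %@", arguments.searchTerm)
            request.fetchLimit = 20
            let results = try context.fetch(request)
            // Serialize a compact plain-text summary for the model.
            return results
                .compactMap { $0.value(forKey: "title") as? String }
                .joined(separator: "\n")
        }
    }
}

A session created with tools: [FetchNotesTool(context: container.viewContext)] could then decide when to call it; making the NSManagedObjects @Generable for structured output seems complementary rather than either/or.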
I've run into an issue with a small Foundation Models test with Generable. I'm getting a strange error message with this Generable. I was able to get simpler ones to work.
Is this because the Generable is recursive with a property of [HTMLDiv]?
The error message is:
FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))
The code is:
import FoundationModels
import Playgrounds

@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
How reliable is the model when used for comparison tasks, such as looking at a cholesterol test to inform the user, for example, whether it is worth going to see a doctor?
I would like to use a Tool to attach simple blood test data to the session, so that the model can analyze it and make a simple suggestion about whether it is necessary to see a doctor.
P.S.: local model.
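Mechanically, the attachment part I have in mind would look something like this (a sketch with made-up names and hardcoded values, just to show the Tool plumbing):

import FoundationModels

// Sketch: a tool that surfaces locally stored lab values to the model.
struct BloodTestTool: Tool {
    let name = "latestBloodTest"
    let description = "Returns the user's most recent blood test values."

    @Generable
    struct Arguments {
        @Guide(description: "Name of the value to look up, e.g. 'total cholesterol'")
        var measure: String
    }

    func call(arguments: Arguments) async throws -> String {
        // Hardcoded for the sketch; real data would come from local storage.
        let values = ["total cholesterol": "212 mg/dL", "hdl": "48 mg/dL"]
        return values[arguments.measure.lowercased()] ?? "No stored value for \(arguments.measure)"
    }
}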
I am experimenting with Foundation Models in my time-tracking app to analyze users' tracked events, but I am finding that the model struggles with even basic computation of time, specifically converting from seconds to hours and minutes.
To give just one example, when I prompt:
"Convert 3672 seconds to hours, minutes, and seconds. Don't include the calculations in the resulting output"
I get this:
"3672 seconds is equal to 1 hour, 0 minutes, and 36 seconds".
This is clearly wrong: it should be 1 hour, 1 minute, and 12 seconds. Another issue I saw a lot was that seconds were treated as minutes, or that the hours were just completely off.
What can I do to make the support for math better? Or is that just something that the model is not meant to be used for?
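The workaround I'm leaning toward is doing the arithmetic in code and letting the model only phrase the result, e.g. via a tool (a sketch; the tool and parameter names are mine):

import FoundationModels

// Keep the arithmetic deterministic; the model just calls the tool.
struct FormatDurationTool: Tool {
    let name = "formatDuration"
    let description = "Converts a duration in seconds to hours, minutes, and seconds."

    @Generable
    struct Arguments {
        @Guide(description: "Duration in seconds")
        var seconds: Int
    }

    func call(arguments: Arguments) async throws -> String {
        let h = arguments.seconds / 3600
        let m = (arguments.seconds % 3600) / 60
        let s = arguments.seconds % 60
        return "\(h) hours, \(m) minutes, \(s) seconds" // 3672 -> 1 hours, 1 minutes, 12 seconds
    }
}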
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi,
I have trained a basic adapter using the adapter training toolkit. I am trying a very basic example of loading it and running inference with it, but am getting the following error:
Passing along InferenceError::inferenceFailed::loadFailed::Error Domain=com.apple.TokenGenerationInference.E5Runner Code=0 "Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006)." UserInfo={NSLocalizedDescription=Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006).} in response to ExecuteRequest
Any ideas / direction?
For testing, I am including the .fmadapter file inside the app bundle. This is where I load it:
@State private var session: LanguageModelSession? // = LanguageModelSession()

func loadAdapter() async throws {
    if let assetURL = Bundle.main.url(forResource: "qasc---afm---4-epochs-adapter", withExtension: "fmadapter") {
        print("Asset URL: \(assetURL)")
        let adapter = try SystemLanguageModel.Adapter(fileURL: assetURL)
        let adaptedModel = SystemLanguageModel(adapter: adapter)
        session = LanguageModelSession(model: adaptedModel)
        print("Loaded adapter and updated session")
    } else {
        print("Asset not found in the main bundle.")
    }
}
This seems to work fine, as I get to the Loaded adapter and updated session log. However, when the inference code below runs, I get the aforementioned error:
func sendMessage(_ msg: String) {
    self.loading = true
    if let session = session {
        Task {
            do {
                let modelResponse = try await session.respond(to: msg)
                DispatchQueue.main.async {
                    self.response = modelResponse.content
                    self.loading = false
                }
            } catch {
                print("Error: \(error)")
                DispatchQueue.main.async {
                    self.loading = false
                }
            }
        }
    }
}
Topic:
Machine Learning & AI
SubTopic:
Foundation Models