Hello,
I am studying the macOS 26 Apple Intelligence features.
I have created a basic Swift program with Xcode. This program sends prompts to FoundationModels.LanguageModelSession.
It works fine, but this model is not trained for programming or code completion.
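For context, the prompts are sent roughly like this (a minimal sketch of my setup, assuming Apple Intelligence is enabled on the Mac):

import FoundationModels

// Basic prompt/response round trip with the on-device model.
let session = LanguageModelSession()
let response = try await session.respond(to: "Explain what a Swift actor is.")
print(response.content)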
Xcode has an AI code completion feature, called the "Predictive Code Completion" model.
So there are multiple on-device models on macOS 26?
Are there others?
Is there a way for me to send prompts to this "Predictive Code Completion" model from my program?
Thanks
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your app.
Pretty much as per the title and I suspect I know the answer. Given that Foundation Models run on device, is it possible to use Foundation Models framework inside of a DeviceActivityReport? I've been tinkering with it, and all I get is errors and "Sandbox restrictions". Am I missing something? Seems like a missed trick to utilise on device AI/ML with other frameworks.
I have an app that streams in data from the Foundation Model and I have a card that shows one of the outputs. I want my card to accept a partially generated model but I keep getting a nonsensical error.
The error I get on line 59 is:
Cannot convert value of type 'FrostDate.VegetableSuggestion.PartiallyGenerated' (aka 'FrostDate.VegetableSuggestion') to expected argument type 'FrostDate.VegetableSuggestion.PartiallyGenerated'
Here is my card with preview:
import SwiftUI
import FoundationModels

struct VegetableSuggestionCard: View {
    let vegetableSuggestion: VegetableSuggestion.PartiallyGenerated

    init(vegetableSuggestion: VegetableSuggestion.PartiallyGenerated) {
        self.vegetableSuggestion = vegetableSuggestion
    }

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            if let name = vegetableSuggestion.vegetableName {
                Text(name)
                    .font(.headline)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startIndoors = vegetableSuggestion.startSeedsIndoors {
                Text("Start indoors: \(startIndoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startOutdoors = vegetableSuggestion.startSeedsOutdoors {
                Text("Start outdoors: \(startOutdoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let transplant = vegetableSuggestion.transplantSeedlingsOutdoors {
                Text("Transplant: \(transplant)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let tips = vegetableSuggestion.tips {
                Text("Tips: \(tips)")
                    .foregroundStyle(.secondary)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
        }
        .padding(16)
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(
            RoundedRectangle(cornerRadius: 16, style: .continuous)
                .fill(.background)
                .overlay(
                    RoundedRectangle(cornerRadius: 16, style: .continuous)
                        .strokeBorder(.quaternary, lineWidth: 1)
                )
                .shadow(color: Color.black.opacity(0.05), radius: 6, x: 0, y: 2)
        )
    }
}

#Preview("Vegetable Suggestion Card") {
    let sample = VegetableSuggestion.PartiallyGenerated(
        vegetableName: "Tomato",
        startSeedsIndoors: "6–8 weeks before last frost",
        startSeedsOutdoors: "After last frost when soil is warm",
        transplantSeedlingsOutdoors: "1–2 weeks after last frost",
        tips: "Harden off seedlings; provide full sun and consistent moisture."
    )
    VegetableSuggestionCard(vegetableSuggestion: sample)
        .padding()
        .previewLayout(.sizeThatFits)
}
Hi everyone,
I'm developing an iOS app using Foundation Models and I've hit a critical limitation that I believe affects many developers and millions of users.
The Issue
Foundation Models requires the device system language to be one of the supported languages. If a user has their device set to an unsupported language (Catalan, Dutch, Swedish, Polish, Danish, Norwegian, Finnish, Czech, Hungarian, Greek, Romanian, and many others), SystemLanguageModel.isSupported returns false and the framework is completely unavailable.
Why This Is Problematic
Scenario: A Catalan user has their iPhone in Catalan (native language). They want to use an AI chat app in Spanish or English (languages they speak fluently).
Current situation:
❌ Foundation Models: Completely unavailable
✅ OpenAI GPT-4: Works perfectly
✅ Anthropic Claude: Works perfectly
✅ Any cloud-based AI: Works perfectly
The user must choose between:
Keep device in Catalan → Cannot use Foundation Models at all
Change entire device to Spanish → Can use Foundation Models but terrible UX
Impact
This affects:
Millions of users in regions where unsupported languages are official
Multilingual users who prefer their device in their native language but can comfortably interact with AI in English/Spanish
Developers who cannot deploy Foundation Models-based apps in these markets
Privacy-conscious users who are ironically forced to use cloud AI instead of on-device AI
What We Need
One of these solutions would solve the problem:
Option 1: Per-app language override (preferred)
// Proposed API
let session = try await LanguageModelSession(preferredLanguage: "es-ES")
Option 2: Faster rollout of additional languages (particularly EU languages)
Option 3: Allow fallback to user-selected supported language when system language is unsupported
Technical Details
Current behavior:
// Device in Catalan
let isAvailable = SystemLanguageModel.isSupported
// Returns false
// No way to override or specify alternative language
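For what it's worth, the related availability check I'm doing looks roughly like this (a sketch; I'm using SystemLanguageModel.default.availability here, which at least reports a reason when the model can't be used):

import FoundationModels

let model = SystemLanguageModel.default
switch model.availability {
case .available:
    // Foundation Models can be used.
    break
case .unavailable(let reason):
    // Reasons cover things like the device not being eligible or Apple
    // Intelligence not being enabled; the unsupported-language case I'm
    // hitting ends up here as "unavailable" with no way to override it.
    print("Foundation Models unavailable: \(reason)")
}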
Why This Matters
Apple Intelligence and Foundation Models are amazing for privacy and performance. But this language restriction makes the most privacy-focused AI solution less accessible than cloud alternatives. This seems contrary to Apple's values of accessibility and user choice.
Questions for the Community
Has anyone else encountered this limitation?
Are there any workarounds I'm missing?
Has anyone successfully filed feedback about this? (Please share the FB number so we can reference it.)
Are there any sessions or labs where this has been discussed?
Thanks for reading. I'd love to hear if others are facing this and how you're handling it.
Hello
It seems the content tagging model doesn't obey when I define the type of tag I want in the instructions parameter; the output is always the main topics.
The only way I've found to get other types of tags, such as emotions, is to use Generable + guided types. The documentation says using instructions is recommended but not mandatory.
Maybe I'm setting the instructions wrongly, but take a look at the attached screenshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and it works, but in the example at the bottom I pass the same description of emotions as instructions and it doesn't work. I tried other statements, more and less verbose, and the output is never emotions.
Could you provide an example using instructions where it works? Is the current version of the model not working with instructions?
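For reference, the Generable-based approach that does work for me looks roughly like this (a sketch; the struct, guide text, and sample input are illustrative, not from the documentation):

import FoundationModels

@Generable
struct EmotionTags {
    @Guide(description: "The emotions expressed in the input text")
    let emotions: [String]
}

func tagEmotions(in text: String) async throws -> [String] {
    // Use the dedicated content-tagging use case of the system model.
    let session = LanguageModelSession(model: SystemLanguageModel(useCase: .contentTagging))
    let response = try await session.respond(to: text, generating: EmotionTags.self)
    return response.content.emotions
}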
Is there an API that allows iOS app developers to leverage Apple Foundation Models together with a user's Apple Intelligence ChatGPT extension and its logged-in account?
I'm trying to provide a real-time question feature backed by ChatGPT, using the user's logged-in extension account, while leveraging Apple Intelligence's LLM. Is there an API that can also reach the extension's login account?
Is Foundation Models mature enough to take input from the Apple Vision framework and generate responses? Something similar to what Google's Gemini does, although on a much smaller scale and for a very specific niche.
Hi,
I am developing an iOS application that utilizes Apple’s Foundation Models to perform certain summarization tasks. I would like to understand whether user data is transferred to Private Cloud Compute (PCC) in cases where the computation cannot be performed entirely on-device.
This information is critical for our internal security and compliance reviews. I would appreciate your clarification on this matter.
Thank you.
I’d like to submit a feature request regarding the availability of Foundation Models in MessageFilter extensions.
Background
MessageFilter extensions play a critical role in protecting users from spam, phishing, and unwanted messages. With the introduction of Foundation Models and Apple Intelligence, Apple has provided powerful on-device natural language understanding capabilities that are highly aligned with the goals of MessageFilter.
However, Foundation Models are currently unavailable in MessageFilter extensions.
Why Foundation Models Are a Great Fit for MessageFilter
Message filtering is fundamentally a natural language classification problem. Foundation Models would significantly improve:
Detection of phishing and scam messages
Classification of promotional vs transactional content
Understanding intent, tone, and semantic context beyond keyword matching
Adaptation to evolving scam patterns without server-side processing
All of this can be done fully on-device, preserving user privacy and aligning with Apple’s privacy-first design principles.
Current Limitations
Today, MessageFilter extensions are limited to relatively simple heuristics or lightweight models. This often results in:
Higher false positives
Lower recall for sophisticated scam messages
Increased development complexity to compensate for limited NLP capabilities
Request
Could Apple consider one of the following:
Allowing Foundation Models to be used directly within MessageFilter extensions
Providing a constrained or optimized Foundation Model API specifically designed for MessageFilter
Enabling a supported mechanism for MessageFilter extensions to delegate inference to the containing app using Foundation Models
Even limited access (e.g. short text only, strict execution limits) would be extremely valuable.
Closing
Foundation Models have the potential to significantly raise the quality and effectiveness of message filtering on Apple platforms while maintaining strong privacy guarantees. Supporting them in MessageFilter extensions would be a major improvement for both developers and users.
Thank you for your consideration and for continuing to invest in on-device intelligence.
I've built an iOS app with a novel approach to AI safety: a deterministic, pre-inference validation layer called Newton Engine.
Instead of relying on the LLM to self-moderate, Newton validates every prompt BEFORE it reaches the model. It uses shape theory and semantic analysis to detect:
• Corrosive frames (self-harm language patterns)
• Logical contradictions (requests that undermine themselves)
• Delegation attempts (asking AI to make human decisions)
• Jailbreak patterns (prompt injection, role-play escapes)
• Hallucination triggers (requests for fabricated citations)
The system achieves a 96% adversarial catch rate across 847 test cases, with zero false positives on benign prompts.
Key technical details:
• Pure Swift/SwiftUI, no external dependencies
• Runs entirely on-device (no server calls for validation)
• Deterministic (same input always produces same output)
• Auditable (full trace logging for every validation)
I'm preparing to submit to the App Store and wanted to ask:
Are there specific App Review guidelines I should reference for AI safety claims?
Is there interest from Apple in deterministic governance layers for Apple Intelligence integration?
Any recommendations for demonstrating safety compliance during review?
The app is called Ada, and the engine is open source at: github.com/jaredlewiswechs/ada-newton
Happy to share technical documentation or discuss the architecture with anyone interested.
See: parcri.net
Hi everyone,
I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). Wanted to share v1.3 and get feedback from the community.
The Problem
Current AI safety is post-training — baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.
What Newton Does
Newton validates every prompt pre-inference and returns:
Phase (0/1/7/8/9)
Shape classification
Confidence score
Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.
v1.3 Detection Categories (14 total)
Jailbreak / prompt injection
Corrosive self-negation ("I hate myself")
Hedged corrosive ("Not saying I'm worthless, but...")
Emotional dependency ("You're the only one who understands")
Third-person manipulation ("If you refuse, you're proving nobody cares")
Logical contradictions ("Prove truth doesn't exist")
Self-referential paradox ("Prove that proof is impossible")
Semantic inversion ("Explain how truth can be false")
Definitional impossibility ("Square circle")
Delegated agency ("Decide for me")
Hallucination-risk prompts ("Cite the 2025 CDC report")
Unbounded recursion ("Repeat forever")
Conditional unbounded ("Until you can't")
Nonsense / low semantic density
Test Results
94.3% catch rate on 35 adversarial test cases (33/35 passed).
Architecture
User Input
↓
[ Newton ] → Validates prompt, assigns Phase
↓
Phase 9? → [ FoundationModels ] → Response
Phase 1/7/8? → Blocked with explanation
Key Properties
Deterministic (same input → same output)
Fully auditable (ValidationTrace on every prompt)
On-device (no network required)
Native Swift / SwiftUI
String Catalog localization (EN/ES/FR)
FoundationModels-ready (#if canImport)
Code Sample — Validation
let governor = NewtonGovernor()
let result = governor.validate(prompt: userInput)

if result.permitted {
    // Proceed to FoundationModels
    let session = LanguageModelSession()
    let response = try await session.respond(to: userInput)
} else {
    // Handle block
    print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
    print(result.trace.summary) // Full audit trace
}
Questions for the Community
Anyone else building pre-inference validation for FoundationModels?
Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
Interest in Shape Theory classification for prompt complexity?
Best practices for integrating with LanguageModelSession?
Links
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net
Happy to share more implementation details. Looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device.
Hi,
I'm using LanguageModelSession and giving it two different tools to query data from a local database. I'm wondering how I can have the session generate structured content as the response that includes data from one or both tools (or no tool at all).
Here is an example of what I'm trying to do:
Let's say the app has access to a database that contains information about exercise and sleep data (this is just an analogy). There are two tools, GetExerciseData() and GetSleepData(). The user may then prompt something like, "how well did I sleep in November". I have this working so that it calls through to the right tool, which would return a SleepSummary. However, I can't figure out how to have the session return the right structured data.
I can do this and get back good text data:
let response = try await session.respond(to: userInput)
but I believe I want to do something like:
let response = try await session.respond(to: trimmed, generating: <SomeStructure?>)
Sometimes the model will run one tool or the other, or both tools, or no tool at all.
Any help on the right way to go about this would be much appreciated. Most of the examples I found only deal with one tool.
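In case it helps frame the question, this is the kind of shape I've been experimenting with (a sketch; ActivitySummary and its fields are hypothetical names of mine, not from the framework):

import FoundationModels

// A single structured result the model fills in, whichever tools it ends up calling.
@Generable
struct ActivitySummary {
    @Guide(description: "A short natural-language answer to the user's question")
    let answer: String
    @Guide(description: "Sleep-related findings, or an empty string if sleep data was not relevant")
    let sleepNotes: String
    @Guide(description: "Exercise-related findings, or an empty string if exercise data was not relevant")
    let exerciseNotes: String
}

func answer(_ userInput: String, using session: LanguageModelSession) async throws -> ActivitySummary {
    // The session decides which tools (if any) to call; the structured output
    // is produced from the transcript after any tool calls complete.
    let response = try await session.respond(to: userInput, generating: ActivitySummary.self)
    return response.content
}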
I can’t seem to find a way to include an image when prompting the new on-device model in Xcode, even though Apple explicitly states that the model was trained and tested with image data (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates).
Has anyone managed to get this working, or are VLM-style capabilities simply not exposed yet?
We are really excited to have introduced the Foundation Models framework in WWDC25. When using the framework, you might have feedback about how it can better fit your use cases.
Starting in macOS/iOS 26 Beta 4, the best way to provide feedback is to use #Playground in Xcode. To do so:
In Xcode, create a playground using #Playground. For more information, see Running code snippets using the playground macro.
Reproduce the issue by setting up a session and generating a response with your prompt.
In the canvas on the right, click the thumbs-up icon to the right of the response.
Follow the instructions on the pop-up window and submit your feedback by clicking Share with Apple.
Another way to provide your feedback is to file a feedback report with relevant details. Specific to the Foundation Models framework, it’s super important to add the following information in your report:
Language model feedback
This feedback contains the session transcript, including the instructions, the prompts, the responses, etc. Without it, we can't reason about the model's behavior, and hence can hardly take any action.
Use logFeedbackAttachment(sentiment:issues:desiredOutput:) to retrieve the feedback data of your current model session, as shown in the usage example, write the data to a file, and then attach the file to your feedback report.
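The retrieval step might look roughly like this (a sketch only; the sentiment value, empty issues list, and file location are illustrative):

// Assumes `session` is the LanguageModelSession from your reproduction.
let feedbackData = session.logFeedbackAttachment(
    sentiment: .negative,
    issues: [],
    desiredOutput: nil
)
let url = URL.documentsDirectory.appending(path: "feedback-attachment.json")
try feedbackData.write(to: url)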
If you believe what you’d report is related to the system configuration, please capture a sysdiagnose and attach it to your feedback report as well.
The framework is still new. Your actionable feedback helps us evolve the framework quickly, and we appreciate that.
Thanks,
The Foundation Models framework team
I am writing a custom package wrapping Foundation Models which provides a chain-of-thought with intermittent self-evaluation among other things. At first I was designing this package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent I wanted to try and build a SwiftUI wrapper around the package.
When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening.
I have created a super simplified example of my setup so it's easier to understand.
First, there is the reasoning conversation item, which can be converted to an XML representation that is then fed back into the model (I've found XML works best for structured input):
public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}

/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String

    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String

    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}

extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //     ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}
Then there is the generator, which creates a reasoning item from a user query and previously generated items:
struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem> {
        let session = LanguageModelSession(instructions: instructions)

        // Build the context for the reasoning item out of the user's query and the previous reasoning items.
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText

        let reasoningItemResponse = try await session.streamResponse(
            to: context, generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}
I'm not sure if returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move; I am just trying to imitate what session.streamResponse returns.
Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator and is responsible for streaming those to SwiftUI later, and also for evaluating each reasoning item once it is complete to see if it needs to be regenerated (to keep the model on track). I want users of the orchestrator to receive partially generated reasoning items as they are being generated by the generator. Later, when an item finishes, if the evaluation passes it is kept; if it fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be output aggressively.
I really am having trouble figuring this out, so if someone with more knowledge about asynchronous work in Swift, or, even better, someone who has worked on the Foundation Models framework, could point me in the right direction, that would be awesome!
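The closest I've come is a generic relay along these lines (a rough sketch, not anything from the framework; relay and isAcceptable are names I made up, and the regeneration step is left out):

import Foundation

// Republishes partial values from any async sequence so the UI sees them
// immediately, and leaves a hook for dropping the final item if it fails
// evaluation.
func relay<S: AsyncSequence & Sendable>(
    _ upstream: S,
    isAcceptable: @escaping @Sendable (S.Element) -> Bool
) -> AsyncThrowingStream<S.Element, Error> where S.Element: Sendable {
    AsyncThrowingStream { continuation in
        let task = Task {
            do {
                var last: S.Element?
                for try await partial in upstream {
                    last = partial
                    continuation.yield(partial)   // surface in-flight items aggressively
                }
                if let last, !isAcceptable(last) {
                    // Here the UI would be told to discard the item and a
                    // regeneration would be kicked off (omitted from this sketch).
                }
                continuation.finish()
            } catch {
                continuation.finish(throwing: error)
            }
        }
        continuation.onTermination = { _ in task.cancel() }
    }
}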
Hi everyone,
I’m currently exploring the use of Foundation models on Apple platforms to build a chatbot-style assistant within an app. While the integration part is straightforward using the new FoundationModel APIs, I’m trying to figure out how to control the assistant’s responses more tightly — particularly:
Ensuring the assistant adheres to a specific tone, context, or domain (e.g. hospitality, healthcare, etc.)
Preventing hallucinations or unrelated outputs
Constraining responses based on app-specific rules, structured data, or recent interactions
I’ve experimented with prompt, systemMessage, and few-shot examples to steer outputs, but even with carefully generated prompts, the model occasionally produces incorrect or out-of-scope responses.
Additionally, when using multiple tools, I'm unsure how best to structure the setup so the model can select the correct pathway/tool and respond appropriately. Is there a recommended approach to guiding the model's decision-making when several tools or structured contexts are involved?
Looking forward to hearing your thoughts or being pointed toward related WWDC sessions, Apple docs, or sample projects.
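To make the first point concrete, the kind of setup I'm describing is roughly this (a small sketch; the concierge wording is just an example domain):

import FoundationModels

// Instructions pin the domain, tone, and refusal behavior; each prompt then
// carries only the user's turn.
let session = LanguageModelSession(instructions: """
    You are a concierge assistant for a hotel app. Only answer questions about
    the guest's stay, dining, and nearby recommendations. If a request is out
    of scope, say you can't help with that and suggest contacting the front desk.
    Keep answers under 80 words.
    """)

let answer = try await session.respond(to: "What time is breakfast served?")
print(answer.content)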
I'm running macOS 26 Beta 5. I noticed that I can no longer achieve 100% usage on the ANE as I could before with the Apple Foundation Models on-device model. Has Apple activated some kind of throttling or power limiting of the ANE? I cannot get above 3 W or 40% usage since upgrading. I'm on the high-power energy mode. Is there an API rate limit being applied?
I have an M4 Pro mini with 64 GB of memory.
In this online session, you can code along with us as we build generative AI features into a sample app live in Xcode. We'll guide you through implementing core features like basic text generation, as well as advanced topics like guided generation for structured data output, streaming responses for dynamic UI updates, and tool calling to retrieve data or take an action.
Check out these resources to get started:
Download the project files: https://developer.apple.com/events/re...
Explore the code along guide: https://developer.apple.com/events/re...
Join the live Q&A: https://developer.apple.com/videos/pl...
Agenda – All times PDT
10 a.m.: Welcome and Xcode setup
10:15 a.m.: Framework basics, guided generation, and building prompts
11 a.m.: Break
11:10 a.m.: UI streaming, tool calling, and performance optimization
11:50 a.m.: Wrap up
All are welcome to attend the session. To actively code along, you'll need a Mac with Apple silicon that supports Apple Intelligence running the latest release of macOS Tahoe 26 and Xcode 26.
If you have questions after the code along concludes please share a post here in the forums and engage with the community.