Foundation Models are driving me up the wall.
My use case: A news app - I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations...
... and it's true, I don't get guardrails exceptions anymore, but now the model itself frequently refuses to summarize things, which in a way is even worse: I have to parse the output text to figure out whether it failed instead of getting an exception. I've mostly worked around that with my system instructions, but the refusals still make it really tough to use.
I instructed the model to tell me why it failed when that happens (rough sketch of my setup at the end of this post).
Examples of various refusals for news articles from major sources:
"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion."
(This is despite the instructions specifically asking for a neutral summary; I'm asking it not to introduce bias in the output, but it still refuses.)
I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content since I never know which approach will work.
Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, and some of these stories are a bummer) is nearly worthless.
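For reference, my setup looks roughly like this (a simplified sketch: I've left out the beta 5 guardrails option, and the "REFUSED:" marker plus the string check are just my own convention for spotting the model's self-reported failures):

import FoundationModels

// Simplified sketch: the instructions ask for a neutral summary and a machine-detectable refusal marker.
let instructions = """
Summarize the provided news article in a neutral tone, without adding bias. \
If you cannot summarize it, reply starting with "REFUSED:" followed by the reason.
"""
let session = LanguageModelSession(instructions: instructions)

func summarize(_ article: String) async throws -> String? {
    let response = try await session.respond(to: article)
    // No guardrails exception anymore, so I have to sniff the output text for the refusal marker instead.
    if response.content.hasPrefix("REFUSED:") {
        print("Model refused: \(response.content)")
        return nil
    }
    return response.content
}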
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple's on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your app.
Hi all, I am interested in unlocking unique applications with the new foundation models. I have a few questions regarding the availability of the following features:
Image input: The June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates); however, I can't find any information about using images as the input/prompt for the foundation models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input to a multimodal on-device LLM (VLM) instead, for features like "Which player is holding the ball in the image?" (image understanding).
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Hey everyone,
Is it possible to generate XML using the "Generable" macro of the Foundation Models framework?
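In case it helps frame the question: as far as I can tell, @Generable gives you back a structured Swift value rather than XML text, so the only approach I've come up with is to generate the value and serialize it to XML myself. A sketch under that assumption (the Recipe type and the hand-rolled serialization are made up for illustration):

import FoundationModels

@Generable
struct Recipe {
    @Guide(description: "Short recipe title")
    var title: String

    @Guide(description: "Ingredient names")
    var ingredients: [String]
}

// Generate a structured value, then build the XML string by hand.
func recipeXML(using session: LanguageModelSession) async throws -> String {
    let response = try await session.respond(
        to: "Suggest a simple pasta recipe.",
        generating: Recipe.self
    )
    let recipe = response.content
    let items = recipe.ingredients.map { "<ingredient>\($0)</ingredient>" }.joined()
    return "<recipe><title>\(recipe.title)</title>\(items)</recipe>"
}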
Seeing this error from time to time:
Context(debugDescription: "Content contains 4089 tokens, which exceeds the maximum allowed context size of 4096.", underlyingErrors: [])
Of course, 4089 is less than 4096, so what is this telling me, and how do I work around it? Is the limit actually lower than 4096?
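A naive workaround I've been experimenting with in the meantime: split the text into chunks and summarize each one with a fresh session (the chunk size is a guess based on roughly 3–4 characters per token, not something derived from the real tokenizer):

import FoundationModels

func summarizeLongText(_ text: String) async throws -> String {
    let chunkSize = 8_000 // characters; roughly 2,000 tokens, well under the window
    var summaries: [String] = []
    var remaining = Substring(text)
    while !remaining.isEmpty {
        let chunk = remaining.prefix(chunkSize)
        remaining = remaining.dropFirst(chunk.count)
        // Use a fresh session per chunk so earlier turns don't eat into the context window.
        let session = LanguageModelSession(instructions: "Summarize the provided text in a few sentences.")
        let response = try await session.respond(to: String(chunk))
        summaries.append(response.content)
    }
    return summaries.joined(separator: "\n")
}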
I am excited to try Foundation Models during WWDC, but it doesn't work at all for me. When running on my iPad Pro M4 with iPadOS 26 seed 1, I get the following error even when running the simplest query:
let prompt = "How are you?"
let stream = session.streamResponse(to: prompt)
for try await partial in stream {
    self.answer = partial
    self.resultString = partial
}
In the Xcode console, I see the following error:
assetsUnavailable(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Model is unavailable", underlyingErrors: []))
I have verified that Apple Intelligence is enabled on my iPad. Any tips on how I can get it working? I have also submitted this feedback: FB17896752
In this online session, you can code along with us as we build generative AI features into a sample app live in Xcode. We'll guide you through implementing core features like basic text generation, as well as advanced topics like guided generation for structured data output, streaming responses for dynamic UI updates, and tool calling to retrieve data or take an action.
Check out these resources to get started:
Download the project files: https://developer.apple.com/events/re...
Explore the code along guide: https://developer.apple.com/events/re...
Join the live Q&A: https://developer.apple.com/videos/pl...
Agenda – All times PDT
10 a.m.: Welcome and Xcode setup
10:15 a.m.: Framework basics, guided generation, and building prompts
11 a.m.: Break
11:10 a.m.: UI streaming, tool calling, and performance optimization
11:50 a.m.: Wrap up
All are welcome to attend the session. To actively code along, you'll need a Mac with Apple silicon that supports Apple Intelligence running the latest release of macOS Tahoe 26 and Xcode 26.
If you have questions after the code along concludes, please share a post here in the forums and engage with the community.
I would like to write a macOS application that uses on-device AI (FoundationModels).
I don’t understand how, practically, to give it access to my documents, photos, or contacts so that I can ask it a question like: “Find the document that talks about this topic.”
Do I need to manually retrieve the data and provide it in the form of a prompt? Or is FoundationModels capable of accessing it on its own?
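My current understanding (possibly wrong) is that the model can't reach into documents, photos, or contacts on its own; the app has to hand it data, either directly in the prompt or through a tool the model can call. A rough sketch of the tool-calling route as I picture it (the exact Tool protocol details may differ between SDK versions, and MyDocumentIndex is a made-up placeholder for the app's own search code):

import FoundationModels

struct DocumentSearchTool: Tool {
    let name = "searchDocuments"
    let description = "Searches the user's documents for a given topic."

    @Generable
    struct Arguments {
        @Guide(description: "Topic to search for")
        var topic: String
    }

    func call(arguments: Arguments) async throws -> String {
        // The app performs the search over data it already has permission to read.
        MyDocumentIndex.search(for: arguments.topic).joined(separator: "\n")
    }
}

let session = LanguageModelSession(
    tools: [DocumentSearchTool()],
    instructions: "Answer questions about the user's documents, using the search tool when needed."
)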
Thanks
Hello,
I am studying macOS 26 Apple Intelligence features.
I have created a basic Swift program with Xcode. This program sends prompts to FoundationModels.LanguageModelSession.
It works fine, but this model is not trained for programming or code completion.
Xcode has an AI code completion feature. It is called the "Predictive Code Completion" model.
So there are multiple on-device models on macOS 26?
Are there others?
Is there a way for me to send prompts to this "Predictive Code Completion" model from my program?
Thanks
Hello
I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points:
• Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model?
• Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS?
• When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)?
• Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply?
Thanks
Hello, I was trying to test out Foundation Models; however, it says model assets are unavailable. I got my M1 MacBook back in China when I was living there. Is this due to a region lock?
I want to use Foundation Models in a project, but I know my users will want to avoid environmentally intensive AI work in data centers.
Does Foundation Models ever use Private Cloud Compute or any other kind of cloud-based AI system?
I'd like to be able to assure my users that the LLM usage is relatively environmentally friendly. It would be great to be able to cite a specific Apple page explaining that Foundation Models work is always done locally.
If there's any chance that work can be done in the cloud, is there a way to opt out of that?
In working with Apple's foundation models, we often want to provide as much context as possible. However, since the model has a context size limit of 4096 tokens, is there a way to estimate the number of tokens beforehand?
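As a stopgap I've been considering a simple character-count heuristic (my own approximation, not an Apple API; the characters-per-token figure is an assumption):

// Very rough token estimate: assume ~3 characters per token and keep a margin for the response.
func estimatedTokenCount(for text: String) -> Int {
    text.count / 3
}

func fitsInContextWindow(_ text: String,
                         contextLimit: Int = 4096,
                         reservedForOutput: Int = 500) -> Bool {
    estimatedTokenCount(for: text) < contextLimit - reservedForOutput
}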
Pretty much as per the title and I suspect I know the answer. Given that Foundation Models run on device, is it possible to use Foundation Models framework inside of a DeviceActivityReport? I've been tinkering with it, and all I get is errors and "Sandbox restrictions". Am I missing something? Seems like a missed trick to utilise on device AI/ML with other frameworks.
Hi
For certain tasks, such as qualitative analysis or tagging, it is advisable to give the AI the option to respond with a joker / wildcard answer when it has difficulty tagging or scoring. For instance, you can include this slot in the prompt as follows:
output must be "no data to score" when there isn't enough information to score.
In the absence of this kind of slot, the AI tends to provide an answer even when there is insufficient information.
Foundation Models is supposed to be prompted with simple prompts, so I wonder: is it recommended to keep this slot even though it adds verbosity and complexity? Is the best place for it the description of a guided attribute (as in the sketch below)? Any other tips?
Another use case is when you want the AI to stick to the information provided in the prompt and not pull in information from its training data. What is the best approach for that?
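For concreteness, this is the kind of shape I've been trying for the wildcard slot with guided generation (just my own experiment; the property names and guide wording are invented):

import FoundationModels

@Generable
struct TaggingResult {
    @Guide(description: "The assigned tag, or exactly \"no data to score\" when the text does not contain enough information")
    var tag: String

    @Guide(description: "A brief justification based only on the provided text, not on outside knowledge")
    var reason: String
}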
Thanks in advance for any suggestion.
I have an app that streams in data from the Foundation Model and I have a card that shows one of the outputs. I want my card to accept a partially generated model but I keep getting a nonsensical error.
The error I get on line 59 is:
Cannot convert value of type 'FrostDate.VegetableSuggestion.PartiallyGenerated' (aka 'FrostDate.VegetableSuggestion') to expected argument type 'FrostDate.VegetableSuggestion.PartiallyGenerated'
Here is my card with preview:
import SwiftUI
import FoundationModels

struct VegetableSuggestionCard: View {
    let vegetableSuggestion: VegetableSuggestion.PartiallyGenerated

    init(vegetableSuggestion: VegetableSuggestion.PartiallyGenerated) {
        self.vegetableSuggestion = vegetableSuggestion
    }

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            if let name = vegetableSuggestion.vegetableName {
                Text(name)
                    .font(.headline)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startIndoors = vegetableSuggestion.startSeedsIndoors {
                Text("Start indoors: \(startIndoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startOutdoors = vegetableSuggestion.startSeedsOutdoors {
                Text("Start outdoors: \(startOutdoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let transplant = vegetableSuggestion.transplantSeedlingsOutdoors {
                Text("Transplant: \(transplant)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let tips = vegetableSuggestion.tips {
                Text("Tips: \(tips)")
                    .foregroundStyle(.secondary)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
        }
        .padding(16)
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(
            RoundedRectangle(cornerRadius: 16, style: .continuous)
                .fill(.background)
                .overlay(
                    RoundedRectangle(cornerRadius: 16, style: .continuous)
                        .strokeBorder(.quaternary, lineWidth: 1)
                )
                .shadow(color: Color.black.opacity(0.05), radius: 6, x: 0, y: 2)
        )
    }
}

#Preview("Vegetable Suggestion Card") {
    let sample = VegetableSuggestion.PartiallyGenerated(
        vegetableName: "Tomato",
        startSeedsIndoors: "6–8 weeks before last frost",
        startSeedsOutdoors: "After last frost when soil is warm",
        transplantSeedlingsOutdoors: "1–2 weeks after last frost",
        tips: "Harden off seedlings; provide full sun and consistent moisture."
    )

    VegetableSuggestionCard(vegetableSuggestion: sample)
        .padding()
        .previewLayout(.sizeThatFits)
}
Hello,
I have created this basic Swift program:
let session = LanguageModelSession(
    model: .default,
    instructions: "bla bla bla.")
I want to understand what I can put in the model parameter (instead of .default).
How can I choose between the on-device local model (.default, I suppose) and the Apple Private Cloud Compute model (or any other)?
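For what it's worth, the only variants I've found so far are the on-device ones below; I haven't come across any way to select a cloud model through this framework, though that may just be a gap in my knowledge:

import FoundationModels

// The general-purpose on-device model (what .default resolves to):
let defaultSession = LanguageModelSession(
    model: SystemLanguageModel.default,
    instructions: "You are a helpful assistant.")

// A specialized on-device variant, selected by use case:
let taggingSession = LanguageModelSession(
    model: SystemLanguageModel(useCase: .contentTagging),
    instructions: "Tag the main topics of the provided text.")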
Thanks
Hello!
I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error as well (given below). Would appreciate some more clarity around this. Thank you.
Code Block:
from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)
Error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
3 metadata = Metadata(
4 author="3P developer",
5 description="An adapter that writes play scripts.",
6 )
8 export_fmadapter(
9 output_dir="./",
10 adapter_name="myPlaywritingAdapter",
(...) 13 draft_checkpoint="draft-model-final.pt",
14 )
File /workspace/export/export_fmadapter.py:11
8 from typing import Any
10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
13 logger = logging.getLogger(__name__)
16 class MetadataKeys(enum.StrEnum):
File /workspace/export/export_utils.py:15
13 import torch
14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
17 from coremltools.optimize._utils import LutParams
ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
Hello
It seems the Content Tagging model doesn't obey when I define the type of tag I want in the instructions parameter; the output is always the main topics.
The only way to get other types of tags, like emotions, is to use Generable + guided types. The documentation says using instructions is recommended but not mandatory.
Maybe I'm setting the instructions incorrectly, but take a look at the attached snapshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and it works, but in the example at the bottom I pass the same emotion description as instructions and it doesn't work. I tried other statements, more and less verbose, and it never outputs emotions.
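For reference, the working Generable version looks roughly like this (a simplified sketch from memory, not the exact code in the snapshot):

import FoundationModels

@Generable
struct EmotionTags {
    @Guide(description: "The most prominent emotions expressed in the text, up to three")
    var emotions: [String]
}

func emotionTags(for inputText: String) async throws -> [String] {
    // Content-tagging variant of the on-device model.
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)
    let response = try await session.respond(to: inputText, generating: EmotionTags.self)
    return response.content.emotions
}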
Could you provide an example using instructions where it works? Is the current version of the model not working with instructions?
Hi everyone,
I'm developing an iOS app using Foundation Models and I've hit a critical limitation that I believe affects many developers and millions of users.
The Issue
Foundation Models requires the device system language to be one of the supported languages. If a user has their device set to an unsupported language (Catalan, Dutch, Swedish, Polish, Danish, Norwegian, Finnish, Czech, Hungarian, Greek, Romanian, and many others), SystemLanguageModel.isSupported returns false and the framework is completely unavailable.
Why This Is Problematic
Scenario: A Catalan user has their iPhone in Catalan (native language). They want to use an AI chat app in Spanish or English (languages they speak fluently).
Current situation:
❌ Foundation Models: Completely unavailable
✅ OpenAI GPT-4: Works perfectly
✅ Anthropic Claude: Works perfectly
✅ Any cloud-based AI: Works perfectly
The user must choose between:
Keep device in Catalan → Cannot use Foundation Models at all
Change entire device to Spanish → Can use Foundation Models but terrible UX
Impact
This affects:
Millions of users in regions where unsupported languages are official
Multilingual users who prefer their device in their native language but can comfortably interact with AI in English/Spanish
Developers who cannot deploy Foundation Models-based apps in these markets
Privacy-conscious users who are ironically forced to use cloud AI instead of on-device AI
What We Need
One of these solutions would solve the problem:
Option 1: Per-app language override (preferred)
// Proposed API
let session = try await LanguageModelSession(preferredLanguage: "es-ES")
Option 2: Faster rollout of additional languages (particularly EU languages)
Option 3: Allow fallback to user-selected supported language when system language is unsupported
Technical Details
Current behavior:
// Device in Catalan
let isAvailable = SystemLanguageModel.isSupported
// Returns false
// No way to override or specify alternative language
Why This Matters
Apple Intelligence and Foundation Models are amazing for privacy and performance. But this language restriction makes the most privacy-focused AI solution less accessible than cloud alternatives. This seems contrary to Apple's values of accessibility and user choice.
Questions for the Community
Has anyone else encountered this limitation?
Are there any workarounds I'm missing?
Has anyone successfully filed feedback about this? (Please share the FB number so we can reference it.)
Are there any sessions or labs where this has been discussed?
Thanks for reading. I'd love to hear if others are facing this and how you're handling it.
I'm experimenting with using the Foundation Models framework to do news summarization in an RSS app but I'm finding that a lot of articles are getting kicked back with a vague message about guardrails.
This seems really common with political news, but we're talking mainstream stuff, e.g., Politico.
If the models are this restrictive, this will be tough to use. Is this intended?
FB17904424