Hello,
I have created this basic Swift program:
let session = LanguageModelSession(
    model: .default,
    instructions: "bla bla bla.")
I want to understand what I can pass to the model parameter (instead of .default).
How can I choose between the on-device local model (.default, I suppose) and Apple's private cloud model (or any other)?
Thanks
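For reference, here is a minimal sketch (assuming the SystemLanguageModel API shape from the current FoundationModels beta) of passing an explicit model and checking its availability before creating a session:

import FoundationModels

// SystemLanguageModel.default is the general-purpose on-device model;
// its availability can be checked before starting a session.
let model = SystemLanguageModel.default

switch model.availability {
case .available:
    let session = LanguageModelSession(
        model: model,
        instructions: "Answer briefly.")
    // Use the session here...
case .unavailable(let reason):
    print("Model unavailable: \(reason)")
}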
Apple Intelligence
Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.
Posts under the Apple Intelligence tag
Hey dear developers!
This post is meant for future Siri updates and improvements, but also for wishes, so that everyone in this forum can share their opinions and ideas. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri.
One wish of many:
Wish Update: Siri's language recognition capabilities have been significantly enhanced. Instead of manually setting the language, Siri can now automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri will automatically recognize it and respond accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you choose.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Tags: iPhone, Siri Event Suggestions Markup, Siri and Voice, Apple Intelligence
Hello, I was trying to test out the Foundation Model, but it says "Model assets are unavailable." I got my M1 MacBook back in China when I was living there. Is this due to a region lock?
Hello
I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points:
• Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model?
• Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS?
• When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)?
• Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply?
Thanks
Hello,
I am studying the macOS 26 Apple Intelligence features.
I have created a basic Swift program with Xcode. This program sends prompts to FoundationModels.LanguageModelSession.
It works fine, but this model is not trained for programming or code completion.
Xcode has an AI code completion feature. It is called the "Predictive Code Completion Model".
So there are multiple on-device models on macOS 26?
Are there others?
Is there a way for me to send prompts to this "Predictive Code Completion Model" from my program?
Thanks
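For what it's worth, here is a minimal sketch of selecting between the built-in models that FoundationModels itself exposes (assuming the SystemLanguageModel(useCase:) initializer and the .contentTagging use case from the current beta; as far as I can tell, Xcode's predictive code completion model is not reachable through this API):

import FoundationModels

func demo() async throws {
    // The general-purpose on-device model (what .default refers to).
    let generalModel = SystemLanguageModel.default

    // A specialized on-device variant tuned for a built-in use case.
    let taggingModel = SystemLanguageModel(useCase: .contentTagging)
    _ = taggingModel // a session for it would be created the same way

    let session = LanguageModelSession(
        model: generalModel,
        instructions: "You are a helpful assistant.")
    let response = try await session.respond(to: "What is FoundationModels?")
    print(response.content)
}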
I didn't check this out specifically on earlier betas, but the AI is allowed to create new .swift files, yes? I'm only seeing proposals to create files (as opposed to code, which it can change automatically), and the CREATE FILE button to the right of any proposed file creation does nothing.
Next I'll mention running local models, but file creation does not seem to happen for either ChatGPT or any local model.
Also, I'm experimenting with LM Studio, and it is reporting that my client is timing out. So I guess Xcode is not waiting long enough for a response? The local models are slow, yes, but is there a setting for the AI timeout value?
I told the LLM "Every 1 minute send me an update so I know you are still working so my client does not time out", which seems to have no visible effect: it hasn't timed out yet, but I don't see that message.
I would like to write a macOS application that uses on-device AI (FoundationModels).
I don’t understand how, practically, to give it access to my documents, photos, or contacts and be able to ask it a question like: “Find the document that talks about this topic.”
Do I need to manually retrieve the data and provide it in the form of a prompt? Or is FoundationModels capable of accessing it on its own?
Thanks
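In case it helps frame the question, here is a minimal sketch of the manual approach (the file URL is a placeholder, and this assumes the LanguageModelSession API from the current beta): read the document's text yourself, then hand it to the model inside the prompt, since as far as I understand the framework does not fetch local files on its own.

import Foundation
import FoundationModels

// Manually load a document's text and include it in the prompt.
func askAboutDocument(at url: URL) async throws -> String {
    let documentText = try String(contentsOf: url, encoding: .utf8)

    let session = LanguageModelSession(
        instructions: "You answer questions about the document provided in the prompt.")

    let response = try await session.respond(
        to: "Does this document talk about travel plans?\n\n\(documentText)")
    return response.content
}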
After updating to the newest beta on my iPhone 16 I'm stuck on this screen after accepting the terms:
No Internet Connection
Setup and activation of Apple Intelligence is unavailable when your device is offline. Connect to the internet and try again.
I tried several different networks and 5G, but no luck.
I found what might be a bug with enabling Apple Intelligence when switching languages. When my iPhone's language is set to Catalan, Apple Intelligence is disabled because it is not available for that language. Switching to Spanish doesn't activate it; it still shows the same message about being unavailable, this time saying it is not available in Spanish (which is not true). However, it is enabled when the phone is rebooted.
At that point, the bug becomes even weirder. With the iPhone language set to Spanish and Apple Intelligence on, I switch the language to Catalan, and the feature remains enabled. After I ask a query in Catalan, it surprisingly understands it and works, but then it gets disabled.
Apart from that, as user feedback, I would love to activate Apple Intelligence in an available language other than my device's language. That's how I always used Siri (iPhone in Catalan, Siri in Spanish).
Thanks!
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Tags: Siri and Voice, Internationalization, Localization, Apple Intelligence
I am using the iPhone 17 Pro simulator that was included with Xcode 26.0.1. My Mac is running macOS 26. When I started the simulator for the first time, I got the "Ready for Apple Intelligence" notification, but when I access Image Playground in my app, it says it is not available on this iPhone. Any solution to get it working on the simulator?
Installed Xcode 26 RC.
Under Other Components, the Predictive Code Completion Model is installed.
When I go to Settings, Apple Intelligence should appear under the Apple Account section. However, it is not there.
How do I get Apple Intelligence turned on?
I have Xcode 26 on one Mac, but LM Studio running on another. LM Studio does not use an API key, so I have provided "dummy-api-key", but whenever I try to connect, I get the error "Models could not be fetched with the provided account details".
I have the IP and port (1234) of LM Studio.
The server in LM Studio is running.
Has anyone got this to work? And if so, what details did you enter?
I have a WKWebView that contains a JS text editor built on top of a contenteditable div. Currently, inline predictions on Mac significantly break the text editor's functionality every time there's a prediction. There's no way for me to fix this on the JS side unless I can know when a prediction is shown.
I've tried disabling inline predictions and writing tools with the web view config, but it doesn't work:
let config = WKWebViewConfiguration()
config.writingToolsBehavior = .none
config.allowsInlinePredictions = false
let webView = WKWebView(frame: .zero, configuration: config)
I've also tried disabling all spellcheck and autocorrect features in the html but that doesn't work either:
<div contenteditable="true" spellcheck="false" autocomplete="off" autocorrect="off" autocapitalize="off"></div>
Is there anything I can do to turn it off? Or, is it possible to know when the WebView is predicting text?
Hi all, I am interested in unlocking unique applications with the new foundation models. I have a few questions regarding the availability of the following features:
Image input: The June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates); however, I can't seem to find any information about using images as the input/prompt for the foundation models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input into a multimodal on-device LLM (VLM) instead, for image-understanding features like "Which player is holding the ball in the image?"
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Vision, Machine Learning, Core ML, Apple Intelligence
Foundation Models are driving me up the wall.
My use case: A news app - I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations...
... and it's true, I don't get guardrail exceptions anymore. But now the model itself frequently refuses to summarize things, which in a way is even worse, because I have to parse the output text to figure out whether it failed instead of getting an exception. I mostly worked that out with my system instructions, but the refusals to summarize still make it really tough to use.
I instructed the model to tell me why it failed if that happens.
Examples of various refusals for news articles from major sources:
"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion."
(this is despite the instructions specifically asking for a neutral summary - I am asking it to not use bias in the output but it still refuses)
I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content, since I never know which one will work.
Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, some of these stories are a bummer) is near worthless.
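For reference, a minimal sketch of the Generable-based pass mentioned above (the type name and guide text are my own placeholders, assuming the @Generable and respond(to:generating:) APIs from the current beta):

import FoundationModels

// A guided-generation type for the structured summarization pass.
@Generable
struct ArticleSummary {
    @Guide(description: "A neutral, three-sentence summary of the article")
    var summary: String
}

func summarize(article: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarize news articles neutrally, without editorializing.")

    // Guided generation returns the structured type directly instead of
    // free-form text that has to be parsed for refusals.
    let response = try await session.respond(
        to: "Summarize this article:\n\n\(article)",
        generating: ArticleSummary.self)
    return response.content.summary
}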
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip that will only show it on devices where Visual Intelligence is supported (e.g., not on iPhone 14 Pro Max).
What is the best way for me to determine availability to set this TipKit rule?
Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
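A minimal sketch of the TipKit side, assuming the app can determine support itself (the isVisualIntelligenceSupported parameter and whatever check feeds it are placeholders; I haven't found a public API that reports Visual Intelligence availability directly, which is really the open question here):

import SwiftUI
import TipKit

// Tip that is only eligible once the app has determined that
// Visual Intelligence is supported on this device.
struct VisualIntelligenceTip: Tip {
    // Placeholder parameter: set this from whatever availability check
    // turns out to be appropriate.
    @Parameter
    static var isVisualIntelligenceSupported: Bool = false

    var title: Text {
        Text("Try Visual Intelligence")
    }

    var message: Text? {
        Text("Point your camera at something to learn more about it.")
    }

    var rules: [Rule] {
        #Rule(Self.$isVisualIntelligenceSupported) { $0 == true }
    }
}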
I have managed to configure Apple Intelligence in Xcode to access my deployed models in Azure AI Foundry.
However, when I try to chat with the selected model, I get the following error:
Message from Azure OpenAI: Failed request due to: bad request
Additional context:
{"error": {"code": "BadRequest", "message": "Response API v1 is in GA, please use remove api-version or use api-version=latest."}}
Is there a way to remove v1 from the endpoint used or to define an explicit api-version?
Or do I have to change my deployment in Azure AI Foundry?
Problem:
We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."
TAMM 2.0 contains export/constants.py with: BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter Export Process:
In export_fmadapter.py:
def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value
The Compatibility Check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it the SHA-1 of base-model.pt or some other computation? (See the sketch after this list.)
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with updated signature?
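To test the SHA-1 hypothesis above, a small sketch (the checkpoint path is a placeholder, and whether the signature is actually derived this way is exactly what is being asked) that hashes a file with CryptoKit and prints the hex digest for comparison against BASE_SIGNATURE:

import CryptoKit
import Foundation

// Compute the SHA-1 hex digest of a file so it can be compared against
// the hardcoded BASE_SIGNATURE value.
func sha1Hex(of url: URL) throws -> String {
    let data = try Data(contentsOf: url)
    let digest = Insecure.SHA1.hash(data: data)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Placeholder path to the base model checkpoint used during training.
let checkpointURL = URL(fileURLWithPath: "/path/to/base-model.pt")
print(try sha1Hex(of: checkpointURL))
// Compare the printed value with "9799725ff8e851184037110b422d891ad3b92ec1".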
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Core ML, Create ML, tensorflow-metal, Apple Intelligence
Hi, I am a new iOS developer trying to learn how to integrate the Apple Foundation Models.
My setup is:
Mac M1 Pro
macOS 26 Beta
Version 26.0 beta 3
Apple Intelligence & Siri --> On
Here is the code:
func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(instructions: """
                Extract time from a message. Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}
and I get this error:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))
Can you help me get through this?
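Not an answer to the missing safety asset itself, but for what it's worth, here is a sketch of catching the guardrail case explicitly (assuming the GenerationError.guardrailViolation case and its Context shown in the error above), so that safety failures are reported separately from other errors:

import FoundationModels

func extractTime(from message: String) async -> String {
    let session = LanguageModelSession(instructions: """
        Extract the time from a message. Example
        Q: Golfing at 6PM
        A: 6PM
        """)
    do {
        let response = try await session.respond(to: message)
        return response.content
    } catch LanguageModelSession.GenerationError.guardrailViolation(let context) {
        // Safety-related refusals surface here instead of in the generic catch.
        return "Guardrail violation: \(context.debugDescription)"
    } catch {
        return "Error: \(error)"
    }
}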
After installing the iOS 26 beta 7 developer build on my iPhone 16e, I can no longer see the toggle to activate Apple Intelligence on my phone under Settings > Apple Intelligence.
Could you please explain how to activate it?
Many thanks,