After updating to the newest beta on my iPhone 16 I'm stuck on this screen after accepting the terms:
No Internet Connection
Setup and activation of Apple Intelligence is unavailable when your device is offline. Connect to the internet and try again.
I tried several different networks and 5G, but no luck.
Apple Intelligence
Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.
Posts under Apple Intelligence tag
123 Posts
I would like to write a macOS application that uses on-device AI (FoundationModels).
I don’t understand how to, practically, give it access to my documents, photos, or contacts and be able to ask it a question like: “Find the document that talks about this topic.”
Do I need to manually retrieve the data and provide it in the form of a prompt? Or is FoundationModels capable of accessing it on its own?
Thanks
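As far as I can tell, FoundationModels does not reach into documents, photos, or contacts on its own; the app has to fetch that data itself (with the usual permission prompts) and hand it to the model in the prompt or expose it through a Tool. Below is a minimal sketch of the prompt approach, using a hypothetical loadDocumentText helper and keeping in mind the limited on-device context window:

import Foundation
import FoundationModels

// Hypothetical helper: the app gathers the document text itself
// (file access, Spotlight, PhotoKit, Contacts, etc. all stay on the app side).
func loadDocumentText(at url: URL) throws -> String {
    try String(contentsOf: url, encoding: .utf8)
}

func askAboutDocument(at url: URL, question: String) async throws -> String {
    let documentText = try loadDocumentText(at: url)
    let session = LanguageModelSession(instructions: """
        Answer questions using only the document provided in the prompt.
        """)
    let response = try await session.respond(to: """
        Document:
        \(documentText)

        Question: \(question)
        """)
    return response.content
}

If you want the model to pull data on demand instead, the FoundationModels Tool protocol (see the tool example further down this page) is the mechanism for that; either way, the retrieval code is yours.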
Hello, I was trying to test out Foundation Models; however, it says model assets are unavailable. I got my MacBook M1 back in China when I was living there. Is this due to region lock?
Hello
I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points:
• Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model?
• Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS?
• When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)?
• Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply?
Thanks
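A partial answer on the availability point: the FoundationModels framework ships on iOS 26, iPadOS 26, and macOS 26 on Apple Intelligence-capable devices, and you can check at runtime whether the on-device model is usable before creating a session. A minimal sketch (the printed messages are just placeholders):

import FoundationModels

// Ask the system whether the on-device foundation model can be used right now.
func checkOnDeviceModel() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("On-device model is ready to use.")
    case .unavailable(let reason):
        // Typical reasons: the device isn't eligible, Apple Intelligence is
        // turned off, or the model assets haven't finished downloading.
        print("On-device model unavailable: \(reason)")
    }
}

On the personal-data question, the model has no built-in access to photos, contacts, SMS, or email; your app requests that data through the normal frameworks (PhotoKit, Contacts, and so on) and passes the results into the prompt or exposes them through a Tool.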
I found what might be a bug with enabling Apple Intelligence when switching languages. When my iPhone's language is set to Catalan, Apple Intelligence is disabled because it is not available for that language. Switching to Spanish doesn't activate it; it still shows the same unavailable message, this time saying it is not available in Spanish (which is not true). However, it becomes enabled after the phone is rebooted.
Once at this point, the bug becomes even weirder. With the iPhone language set to Spanish and Apple Intelligence on, I switch the language to Catalan, and the feature remains enabled. When I ask a query in Catalan, it surprisingly understands it and works, but then it gets disabled.
Apart from that, as user feedback, I would love to activate Apple Intelligence in an available language other than my device's language. That's how I always used Siri (iPhone in Catalan, Siri in Spanish).
Thanks!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Siri and Voice
Internationalization
Localization
Apple Intelligence
I am using the iPhone 17 Pro simulator that was included with Xcode 26.0.1. My Mac is running macOS 26. When I started the simulator for the first time I got the "Ready for Apple Intelligence" notification but when I access Image Playground in my app it says it is not available on this iPhone. Any solution to get it working on the simulator?
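No simulator-specific fix to offer, but it's worth gating the feature on the runtime check rather than on the "Ready for Apple Intelligence" notification. A small sketch using the SwiftUI environment value (assuming supportsImagePlayground; on UIKit, the check I'm aware of is ImagePlaygroundViewController.isAvailable):

import SwiftUI
import ImagePlayground

struct ImagePlaygroundEntryPoint: View {
    // Reports whether Image Playground generation is supported in the
    // current environment (device, simulator, account state).
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground

    var body: some View {
        if supportsImagePlayground {
            Button("Create an image") {
                // Present the Image Playground sheet from here.
            }
        } else {
            Text("Image Playground isn't available on this device.")
        }
    }
}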
Installed Xcode 26 RC.
Under Components, the Predictive Code Completion Model is installed.
Went to Settings; Intelligence should be under Apple Accounts, but it is not.
How do I get Intelligence turned on?
I have Xcode 26 on one Mac, but LM Studio running on another. LM Studio does not have an API key, so I have provided "dummy-api-key", but whenever I try to connect, it gives the error "Models could not be fetched with the provided account details".
I have the IP and port (1234) of the LM Studio machine.
The server in LM Studio is running.
Has anyone got this to work? And if so, what details did you enter?
I have a WKWebView that contains a JS text editor, built on top of a contenteditable div. Currently, inline predictions on Mac significantly break the text editor's functionality every time there's a prediction. There's no way for me to fix this on the JS side unless I can know when a prediction is shown.
I've tried disabling inline predictions and writing tools with the web view config, but it doesn't work:
import WebKit

// Attempt to disable Writing Tools and inline predictions for the web view.
let config = WKWebViewConfiguration()
config.writingToolsBehavior = .none
config.allowsInlinePredictions = false
let webView = WKWebView(frame: .zero, configuration: config)
I've also tried disabling all spellcheck and autocorrect features in the HTML, but that doesn't work either:
<div contenteditable="true" spellcheck="false" autocomplete="off" autocorrect="off" autocapitalize="off"></div>
Is there anything I can do to turn it off? Or, is it possible to know when the WebView is predicting text?
Hi all, I am interested in unlocking unique applications with the new foundation models. I have a few questions regarding the availability of the following features:
Image input: the June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), yet I can't find any information about using images as the input/prompt for the foundation models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input to a multimodal on-device LLM (VLM) for image-understanding features like "Which player is holding the ball in the image?"
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Vision
Machine Learning
Core ML
Apple Intelligence
I didn't check this specifically on earlier betas, but the AI assistant is allowed to create new .swift files, yes? I'm only seeing proposals to create files (as opposed to code, which it can change automatically), and the CREATE FILE button to the right of any proposed file creation does nothing.
Next I'll mention running local models, but file creation does not seem to happen with either ChatGPT or any local model.
Also, I'm experimenting with LM Studio, and it is reporting that my client is timing out. So I guess Xcode is not waiting long enough for a response? The local models are slow, yes, but is there a setting for the AI timeout value?
I told the LLM, "Every minute, send me an update so I know you are still working and my client does not time out," which seems to have no visible effect; it hasn't timed out yet, but I don't see that message anywhere.
Foundation Models are driving me up the wall.
My use case: A news app - I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations...
... and it's true, I don't get guardrail exceptions anymore, but now the model itself frequently refuses to summarize content, which in a way is even worse, as I have to parse the output text to figure out whether it failed instead of getting an exception. I mostly worked that out with my system instructions, but the refusals to summarize still make it really tough to use.
I instructed the model to tell me why it failed if that happens.
Examples of various refusals for news articles from major sources:
"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion."
(this is despite the instructions specifically asking for a neutral summary - I am asking it to not use bias in the output but it still refuses)
I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content, since I never know which one will work.
Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, some of these stories are a bummer) is near worthless.
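For what it's worth, the two-pass workaround described above (text pass first, Generable pass as a fallback) looks roughly like this. It's a sketch that assumes the permissive guardrails option added in beta 5 and a sentinel "REFUSED:" prefix enforced via instructions, so exact names may differ in your SDK build:

import FoundationModels

@Generable
struct ArticleSummary {
    @Guide(description: "A neutral, factual summary of the article.")
    let text: String
}

func summarize(_ article: String) async throws -> String {
    let instructions = """
        Summarize the article below in a neutral tone. \
        If you cannot summarize it, reply with exactly: REFUSED: <reason>
        """

    // First pass: plain text transformation with the relaxed guardrails.
    let permissive = LanguageModelSession(
        guardrails: .permissiveContentTransformations,
        instructions: instructions
    )
    let first = try await permissive.respond(to: article).content
    if !first.hasPrefix("REFUSED:") { return first }

    // Second pass: guided generation, which (as noted above) sometimes
    // succeeds where the plain text pass refuses.
    let guided = LanguageModelSession(instructions: instructions)
    let response = try await guided.respond(to: article, generating: ArticleSummary.self)
    return response.content.text
}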
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip so the Tip is only shown on devices where Visual Intelligence is supported (e.g., not an iPhone 14 Pro Max).
What is the best way for me to determine availability to set this TipKit rule?
Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
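I'm not aware of a public "Visual Intelligence supported" check, so one option (a workaround based on an assumption, not a documented API) is to gate the tip on a TipKit parameter that your app sets from a proxy such as on-device model availability in FoundationModels:

import TipKit
import FoundationModels

struct VisualIntelligenceTip: Tip {
    // App-controlled flag; set it once you've decided the feature is supported.
    @Parameter
    static var isSupported: Bool = false

    var title: Text { Text("Search what you see") }
    var message: Text? { Text("Use Visual Intelligence with this app.") }

    var rules: [Rule] {
        // Only show the tip when the flag is true.
        #Rule(Self.$isSupported) { $0 == true }
    }
}

// At launch: treat Apple Intelligence model availability as a proxy for
// Visual Intelligence-capable hardware (an approximation, not an exact check).
func updateVisualIntelligenceTipRule() {
    if case .available = SystemLanguageModel.default.availability {
        VisualIntelligenceTip.isSupported = true
    }
}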
I have managed to configure Apple Intelligence in Xcode to access my deployed models in Azure AI Foundry.
However, when I try to chat with the selected model, I get the following error:
Message from Azure OpenAI: Failed request due to: bad request
Additional context:
{"error": {"code": "BadRequest", "message": "Response API v1 is in GA, please use remove api-version or use api-version=latest."}}
Is there a way to remove v1 from the endpoint used or to define an explicit api-version?
Or do I have to change my deployment in Azure AI Foundry?
Problem:
We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."
TAMM 2.0 contains export/constants.py with: BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter Export Process:
In export_fmadapter.py:
def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value
The Compatibility Check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature
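For context, this is roughly the Swift-side loading path where that check surfaces. A sketch assuming the SystemLanguageModel.Adapter(fileURL:) pattern from the adapter toolkit documentation (the exact error wrapping may differ):

import FoundationModels

func loadAdapter(at url: URL) {
    do {
        let adapter = try SystemLanguageModel.Adapter(fileURL: url)
        let model = SystemLanguageModel(adapter: adapter)
        _ = LanguageModelSession(model: model)
        print("Adapter loaded.")
    } catch {
        // "Adapter is not compatible with the current system base model" /
        // compatibleAdapterNotFound lands here when the adapter's
        // baseModelSignature doesn't match the installed base model.
        print("Adapter failed to load: \(error)")
    }
}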
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with updated signature?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Core ML
Create ML
tensorflow-metal
Apple Intelligence
Hi, I am a new iOS developer trying to learn how to integrate the Apple Foundation Models framework.
My setup is:
Mac with M1 Pro
macOS 26 beta (Version 26.0 beta 3)
Apple Intelligence & Siri: On
Here is the code:
func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(instructions: """
                Extract time from a message. Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}
and I get this error:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))
Can you help me get through this?
After installing the iOS 26 beta 7 developer build on my iPhone 16e, I can no longer see the toggle to activate Apple Intelligence on my phone under Settings > Apple Intelligence.
Could you please explain how to activate it?
Many thanks,
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool which, for those locations, finds their latitude/longitude on a map and populates that in the response.
However, I cannot get the language model session to see/use my tool.
I have code like this passing the tool to my prompt:
class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}
And then the tool looks something like this:
import Foundation
import FoundationModels
import MapKit
struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )
        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate
        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}
But trying a bunch of different prompts has not triggered the tool; instead, what appear to be totally random locations are filled into my resulting model, and at no point does a breakpoint in my tool code get hit.
Has anybody successfully gotten a tool to be called?
Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements?
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
Thanks in advance for your help.
Cross-posting this from https://developer.apple.com/forums/thread/795707 per ask from DTS Engineer:
Is there any way to ensure iOS apps we develop using Foundation Models can only be purchasable/downloadable on App Store by folks with capable devices? I would've thought there would be a Required Capabilities that App Store would hook into for Apple Intelligence-capable devices, but I don't seem to see it in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier, as that appears to target all M1-and-above chips on iPhone and iPad. There is an ipad-minimum-performance-m1 that would more directly ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set the Minimum Deployment to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable, since what we really want is devices with an Apple Neural Engine sufficient for the Apple Intelligence features in the SDKs (like Foundation Models, Image Playground, Audio Transcription, etc.).
While I understand that the majority of apps will want to selectively add Apple Intelligence features so they remain usable by folks whose devices don't support it, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told, "Sorry, your device is not capable of Apple Intelligence and so can't use this app."
I've created a Feedback Assistant ticket tracking the question/ask here: FB19366221
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Tags:
App Store
iOS
Apple Intelligence