Installed Xcode 26 RC.
Other components: the Predictive Code Completion Model is installed.
In Settings, Intelligence should appear under Apple Accounts; however, it is not there.
How do I get Intelligence turned on?
Apple Intelligence
Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.
I have Xcode 26 on one Mac, but LM Studio running on another. LM Studio does not use an API key, so I have provided "dummy-api-key", but whenever I try to connect, it gives the error "Models could not be fetched with the provided account details".
I have the IP and port (1234) of LM Studio.
The server in LM Studio is running.
Has anyone got this to work? And if so, what details did you enter?
I have a WKWebView that contains a JS text editor, built on top of a contenteditable div. Currently, inline predictions on Mac significantly break the text editor's functionality every time there's a prediction. There's no way for me to fix this on the JS side unless I can know when a prediction is shown.
I've tried disabling inline predictions and writing tools with the web view config, but it doesn't work:
let config = WKWebViewConfiguration()
config.writingToolsBehavior = .none
config.allowsInlinePredictions = false
let webView = WKWebView(frame: .zero, configuration: config)
I've also tried disabling all spellcheck and autocorrect features in the html but that doesn't work either:
<div contenteditable="true" spellcheck="false" autocomplete="off" autocorrect="off" autocapitalize="off"></div>
Is there anything I can do to turn it off? Or, is it possible to know when the WebView is predicting text?
Hi all, I am interested in unlocking unique applications with the new foundation models. I have a few questions regarding the availability of the following features:
Image input: The June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), but I can't find any information about using images as the input/prompt for the foundation models. When will this be available? I understand there are existing Vision ML APIs, but I want image input into a multimodal on-device LLM (VLM) for features like "Which player is holding the ball in the image?" (image understanding).
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Vision
Machine Learning
Core ML
Apple Intelligence
Foundation Models are driving me up the wall.
My use case: A news app - I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations...
... and it's true, I don't get guardrail exceptions anymore. But now the model itself frequently refuses to summarize things, which in a way is even worse: I have to parse the output text to figure out whether it failed instead of getting an exception. I mostly worked around that with my system instructions, but the refusals still make it really tough to use.
I instructed the model to tell me why it failed if that happens.
Examples of various refusals for news articles from major sources:
"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion."
(This is despite the instructions specifically asking for a neutral summary; I am asking it to not use bias in the output, but it still refuses.)
I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content, since I never know which one will work.
Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, some of these stories are a bummer) is near worthless.
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip so it only shows on devices where Visual Intelligence is supported (e.g., not iPhone 14 Pro Max).
What is the best way for me to determine availability to set this TipKit rule?
Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
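A minimal sketch of how such a rule could be wired up in TipKit, assuming a hypothetical isVisualIntelligenceSupported parameter; how to actually determine that value is the open question above:
import SwiftUI
import TipKit

struct VisualIntelligenceTip: Tip {
    // Hypothetical flag; determining it reliably is the unresolved part of the question.
    @Parameter
    static var isVisualIntelligenceSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }
    var message: Text? { Text("Point the camera at something to learn more about it.") }

    // Only show the tip on devices where the flag has been set to true.
    var rules: [Rule] {
        #Rule(Self.$isVisualIntelligenceSupported) { $0 == true }
    }
}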
I have managed to configure Apple Intelligence in Xcode to access my deployed models in Azure AI Foundry.
However, when I try to chat with the selected model, I get the following error:
Message from Azure OpenAI: Failed request due to: bad request
Additional context:
{"error": {"code": "BadRequest", "message": "Response API v1 is in GA, please use remove api-version or use api-version=latest."}}
Is there a way to remove v1 from the endpoint used or to define an explicit api-version?
Or do I have to change my deployment in Azure AI Foundry?
Problem:
We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."
TAMM 2.0 contains export/constants.py with: BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter Export Process:
In export_fmadapter.py:
def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # hardcoded value
The Compatibility Check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with an updated signature?
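One way to test the SHA-1 hypothesis above is to hash the base model file and compare it against the hardcoded constant; a minimal Swift sketch, where the file path is an assumption and the SHA-1-of-base-model.pt idea is only a guess:
import CryptoKit
import Foundation

// Hypothesis check: is BASE_SIGNATURE simply the SHA-1 of base-model.pt?
// Note: this reads the whole file into memory, which is fine for a one-off check.
let url = URL(fileURLWithPath: "/path/to/base-model.pt")  // assumed location
let data = try! Data(contentsOf: url)
let hex = Insecure.SHA1.hash(data: data).map { String(format: "%02x", $0) }.joined()
print(hex == "9799725ff8e851184037110b422d891ad3b92ec1" ? "matches BASE_SIGNATURE" : "does not match")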
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Core ML
Create ML
tensorflow-metal
Apple Intelligence
After installing the iOS 26 beta 7 developer build on my iPhone 16e, I can no longer see the toggle to activate Apple Intelligence on my phone under Settings > Apple Intelligence.
Could you please explain how to activate it?
Many thanks,
I've created a fairly basic prompt that parses a list of locations out of a string. I've then created a tool which, for these locations, finds their latitude/longitude on a map and populates that in the response.
However, I cannot get the language model session to see/use my tool.
I have code like this passing the tool to my prompt:
class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        // Register the tool with the session so the model can call it while responding.
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            // Apply each partial generation to the model as it streams in.
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}
And then the tool looks something like this:
import Foundation
import FoundationModels
import MapKit

struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        // Search for the place name near the supplied center coordinate.
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )
        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate
        if let coordinate = coordinate {
            // Return the found coordinate as generated content the model can use.
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}
But trying a bunch of different prompts has not triggered the tool; instead, what appear to be totally random locations are filled into my resulting model, and at no point does a breakpoint hit my tool code.
Has anybody successfully gotten a tool to be called?
Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements?
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
Thanks in advance for your help.
Cross-posting this from https://developer.apple.com/forums/thread/795707 per ask from DTS Engineer:
Is there any way to ensure iOS apps we develop using Foundation Models can only be purchasable/downloadable on App Store by folks with capable devices? I would've thought there would be a Required Capabilities that App Store would hook into for Apple Intelligence-capable devices, but I don't seem to see it in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier, as that seems to target all M1 and above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set Minimum Deployment to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what's Foundation Models / Apple Intelligence capable, since we really want the devices with an Apple Neural Engine sufficient for Apple Intelligence features in the SDKs (like Foundation Models, Image Playgrounds, Audio Transcription, etc.).
While I understand that for the majority of apps they'll want to just selectively add in Apple Intelligence features, and so can remain usable by folks whose devices don't support it, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, your device is not capable of Apple Intelligence and so can't use this app."
I've created a Feedback Assistant ticket tracking the question/ask here: FB19366221
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Tags:
App Store
iOS
Apple Intelligence
Is there any way to ensure iOS apps we develop using Foundation Models can only be purchasable/downloadable on App Store by folks with capable devices? I would've thought there would be a Required Capabilities that App Store would hook into, but I don't seem to see it in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier, as that seems to target all M1 and above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set Minimum Deployment to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what's Foundation Models / Apple Intelligence capable.
While I understand that for the majority of apps they'll want to just selectively add in Apple Intelligence features, and so can remain usable by folks whose devices don't support it, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable."
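As a complement to any install-time restriction, a minimal runtime gate might look like the following sketch, assuming the documented SystemLanguageModel.default.availability API; ContentView is a hypothetical placeholder for the app's real UI:
import FoundationModels
import SwiftUI

struct RootView: View {
    // Check whether the on-device foundation model can be used on this device at all.
    private let model = SystemLanguageModel.default

    var body: some View {
        if case .available = model.availability {
            ContentView()  // hypothetical main view: the full experience
        } else {
            // Covers unavailable reasons such as deviceNotEligible or appleIntelligenceNotEnabled.
            Text("This app requires Apple Intelligence, which is not available on this device.")
        }
    }
}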
When I ran the following code on a physical iPhone device that supports Apple Intelligence, I encountered the following error log.
What does this internal error code mean?
Image generation failed with NSError in a different domain: Error Domain=ImagePlaygroundInternal.ImageGeneration.GenerationError Code=11 “(null)”, returning a generic error instead
let imageCreator = try await ImageCreator()
let style = imageCreator.availableStyles.first ?? .animation
let stream = imageCreator.images(for: [.text("cat")], style: style, limit: 1)
for try await result in stream { // error: ImagePlayground.ImageCreator.Error.creationFailed
    _ = result.cgImage
}
Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0, but they change the model or guardrails in 26.1 and it breaks your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, like in all of the server-based LLM provider APIs? If not, it sounds risky to build on.
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)
Error:
Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest
Playground to reproduce:
import FoundationModels
import Playgrounds

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
    } catch {
        let error = error
    }
}
When I use the FoundationModels framework to generate long text, it always hits an error:
"Passing along Client rate limit exceeded, try again later in response to ExecuteRequest"
And stop generating.
E.g., for the prompt "Write a long story", it will almost certainly hit that error after 17 seconds of generation.
do {
    let session = LanguageModelSession()
    let prompt: String = "Write a long story"
    let response = try await session.respond(to: prompt)
} catch {}
If possible, I want to know how to prevent that error or at least how to handle it.
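As a starting point for handling it, here is a minimal sketch that retries once after a delay, assuming the documented LanguageModelSession.GenerationError.rateLimited case; the 5-second backoff is an arbitrary choice:
import FoundationModels

// Sketch: catch the typed error and retry after a delay instead of giving up silently.
func respondWithRetry(_ session: LanguageModelSession, to prompt: String) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch let error as LanguageModelSession.GenerationError {
        guard case .rateLimited = error else { throw error }
        try await Task.sleep(for: .seconds(5))  // arbitrary backoff before the single retry
        return try await session.respond(to: prompt).content
    }
}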
I downloaded iOS 26 beta 3. I was very happy with how it turned out, but when I activated Siri, I noticed the rainbow pulsing glow that bordered the phone was missing, and all that was left was the original Siri bubble. I was very disappointed; does anyone know how to get this back? I loved that design feature.
According to the Tool documentation, the arguments to the tool are specified as a static struct type T, which is passed to tool.call(arguments: T). However, if the arguments are not known until runtime, is it still possible to create a Tool object with the proper parameters? Say a JSON-style dictionary is passed into the Tool's init function to specify T: is this achievable?
Apologies if this is obvious to everyone but me... I'm using the Tahoe AI foundation models. When I get an error, I'm trying to handle it properly.
I see the errors described here: https://developer.apple.com/documentation/foundationmodels/languagemodelsession/generationerror/context, as well as in the headers. But all I can figure out how to see is error.localizedDescription which doesn't give me much to go on.
For example, an error's description is:
The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 2.
That doesn't give me much to go on. How do I get the actual error number/enum value out of this, short of parsing that text to look for the int at the end?
This one is:
case guardrailViolation(LanguageModelSession.GenerationError.Context)
So I'd like to know how to get from the catch for session.respond to something I can act on. I feel like it's there, but I'm missing it.
Thanks!
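A minimal sketch of acting on the typed error rather than localizedDescription, assuming the documented LanguageModelSession.GenerationError cases (guardrailViolation is taken from the header quoted above; exceededContextWindowSize is another documented case):
import FoundationModels

// Sketch: cast the thrown error to the typed GenerationError and switch on it,
// so the code can react to the specific case instead of parsing localizedDescription.
func ask(_ session: LanguageModelSession, _ prompt: String) async -> String? {
    do {
        return try await session.respond(to: prompt).content
    } catch let error as LanguageModelSession.GenerationError {
        switch error {
        case .guardrailViolation(let context):
            print("Guardrail violation: \(String(describing: context))")
        case .exceededContextWindowSize:
            print("Prompt plus transcript no longer fit in the context window.")
        default:
            print("Other generation error: \(error)")
        }
        return nil
    } catch {
        print("Unexpected error: \(error)")
        return nil
    }
}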