I am using Foundation Models for the first time and I never get a response.
Code
import Playgrounds
import FoundationModels
#Playground {
    let session = LanguageModelSession()
    let result = try await session.respond(to: "List all the states in the USA")
    print(result.content)
}
Canvas Output: (screenshot; nothing is printed)
What I did:
1. Created a new file
2. Added the code above
3. The canvas refreshes, but nothing happens
Am I missing a step or some setup here? Something this basic isn't working and I don't know what to do. Please help.
Running on a MacBook Pro (16-core CPU, 40-core GPU): iOS 26 / Xcode beta 2 / macOS Tahoe, with 8 CPUs and 48 GB of memory allocated to the Parallels VM.
Settings for Playgrounds in Xcode
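In case it's relevant, here's the check I added to rule out the model simply being unavailable in the VM. This is a minimal sketch using the SystemLanguageModel availability API from the WWDC session; whether Apple Intelligence is supported at all under Parallels is my open question, not something I've confirmed.

import FoundationModels
import Playgrounds

#Playground {
    // Ask the framework why the on-device model might not respond.
    // Apple Intelligence must be enabled and supported on this machine.
    let model = SystemLanguageModel.default
    switch model.availability {
    case .available:
        print("Model is available")
    case .unavailable(let reason):
        print("Model unavailable: \(reason)")
    }
}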
Thank you for your help in advance.
I've been following along with App Shortcuts development but cannot get Siri to run my intent. The intent works on its own in Shortcuts, along with a couple of others that aren't in the AppShortcutsProvider structure.
I keep getting the following two errors, but I can't figure out why from the documentation or other forum posts.
No ConnectionContext found for 12909953344
Attempted to fetch App Shortcuts, but couldn't find the AppShortcutsProvider.
Here are the relevant snippets of code -
(1) The AppIntent definition
struct SetBrightnessIntent: AppIntent {
    static var title = LocalizedStringResource("Set Brightness")
    static var description = IntentDescription("Set Glass Display Brightness")

    @Parameter(title: "Level")
    var level: Int?

    static var parameterSummary: some ParameterSummary {
        Summary("Set Brightness to \(\.$level)%")
    }

    func perform() async throws -> some IntentResult {
        guard let level = level else {
            throw $level.needsValueError("Please provide a brightness value")
        }
        if level > 100 || level <= 0 {
            throw $level.needsValueError("Brightness must be between 1 and 100")
        }
        // do stuff with level
        return .result()
    }
}
(2) The AppShortcutsProvider (defined in my iOS app target, there are no other targets)
struct MyAppShortcuts: AppShortcutsProvider {
    static var shortcutTileColor: ShortcutTileColor = .grayBlue

    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: SetBrightnessIntent(),
            phrases: [
                "set \(.applicationName) brightness to \(\.$level)",
                "set \(.applicationName) brightness to \(\.$level) percent"
            ],
            shortTitle: LocalizedStringResource("Set Glass Brightness"),
            systemImageName: "sun.max"
        )
    }
}
Does anything here look wrong? Is there some magical key that I need to specify in Info.plist to get Siri to recognize the AppShortcutsProvider?
On Xcode 16.2 and iOS 18.2 (non-beta).
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, then using FoundationModels to clean up the formatting to add paragraph breaks and such. I have this code to do that cleanup:
private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}
The content length is about 29,000 characters. And I get this error:
InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend..
Is 4096 a reference to a max input length? Or is this a bug?
This is running on an M1 iPad Air, with iPadOS 26 Seed 1.
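If the 4096 is a hard context limit, the workaround I'm sketching is to split the transcript into chunks that each fit, clean up each chunk in its own session, and join the results. A rough sketch, assuming roughly four characters per token; it reuses the cleanupText function above:

private func cleanupLongText(_ text: String) async throws -> String {
    // Assumption: ~4 characters per token, so ~3,000 characters per chunk
    // leaves room for the response inside a 4,096-token context window.
    let maxChunkLength = 3_000
    var chunks: [String] = []
    var current = ""
    for sentence in text.split(separator: ". ", omittingEmptySubsequences: true) {
        if current.count + sentence.count > maxChunkLength {
            chunks.append(current)
            current = ""
        }
        current += String(sentence) + ". "
    }
    if !current.isEmpty { chunks.append(current) }

    var cleaned: [String] = []
    for chunk in chunks {
        // cleanupText creates a fresh session, so no history accumulates.
        if let result = try await cleanupText(chunk) {
            cleaned.append(result)
        }
    }
    return cleaned.joined(separator: "\n")
}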
Has anyone been able to run TensorFlow > 2.15 with tensorflow-metal 1.1.0 on an M3? I've tried several times without success. It seems like development on tensorflow-metal has paused?
My iOS app supports iOS 18, and I’m using an encrypted CoreML model secured with a key generated from Xcode.
Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:
coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID
To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while.
This is a terrible experience for users and obviously not a sustainable solution.
I want to understand:
Why is this happening?
Is there a known expiration or invalidation policy for CoreML encryption keys?
How can I prevent this issue permanently?
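In the meantime, this is the stopgap I'm considering so a failed key fetch degrades gracefully instead of breaking the feature outright. It's only a sketch, not a fix; MyEncryptedModel stands in for the generated model class:

import CoreML

func loadEncryptedModel() async -> MLModel? {
    let config = MLModelConfiguration()
    for attempt in 1...3 {
        do {
            // The decryption key is fetched from Apple's servers on first
            // load, so transient network failures may succeed on retry.
            return try await MLModel.load(
                contentsOf: MyEncryptedModel.urlOfModelInThisBundle, // placeholder
                configuration: config
            )
        } catch {
            print("Model load attempt \(attempt) failed: \(error)")
            try? await Task.sleep(nanoseconds: UInt64(attempt) * 1_000_000_000)
        }
    }
    return nil // Surface a friendly error to the user here.
}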
Any insights or official guidance would be really appreciated.
Testing Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing but performance is rough. Taking 20+ seconds just to get recipe name and description. Same content from Claude API: 4 seconds.
I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. The streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes.
Anyone else seeing similar performance? Is this expected for beta, or are there optimization techniques I'm missing?
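For anyone comparing notes, this is the shape I'm using to at least get partial content on screen early. A minimal sketch; the Recipe type and prompt are placeholders of mine, and I'm assuming the prewarm() and streamResponse(to:generating:) APIs shown in the WWDC session:

import FoundationModels

@Generable
struct Recipe { // placeholder type for illustration
    var name: String
    var description: String
}

func streamRecipe() async throws {
    let session = LanguageModelSession()
    session.prewarm() // reportedly reduces time to first token

    // Partial values arrive as they are generated, so the UI can show
    // the name while the description is still being produced.
    let stream = session.streamResponse(
        to: "Generate a healthy dinner recipe",
        generating: Recipe.self
    )
    for try await partial in stream {
        print(partial) // Recipe.PartiallyGenerated; fields fill in over time
    }
}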
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi, the most recent version of tensorflow-metal is only available for macOS 12.0 and Python up to version 3.11. Is there any chance it could be updated with wheels for macOS 15 and Python 3.12 (the default version supported by TensorFlow 2.17+)? I'd note that even downgrading to Python 3.11 would not be sufficient, as the wheels only work on macOS 12.
Thanks.
Hello,
I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:
VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")
It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge?
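For the time being I wrapped the call so the app still runs in the simulator. A sketch; the fallback behavior is whatever makes sense for your app:

import Vision

// Guard for the simulator, where the request apparently can't create its
// Core ML ("espresso") context; on a physical device the real request runs.
func classify(_ image: CGImage) async throws -> [ClassificationObservation] {
    #if targetEnvironment(simulator)
    print("ClassifyImageRequest is unavailable in the simulator; returning no results")
    return []
    #else
    let request = ClassifyImageRequest()
    return try await request.perform(on: image)
    #endif
}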
Thanks
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Swift Student Challenge
Swift
Swift Playground
Vision
I'm excited about the development possibilities presented by Apple Intelligence and have begun imagining retrieval-augmented generation (RAG) use cases. Writing Tools suggests that this is possible, but I have not seen any direct statements by Apple regarding the use of AFMs for RAG applications. Have any references to APIs or sample code for RAG applications been published?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I am using a contact tool to help fetch contacts from my address book, but the model isn't invoking my tool's call method. I even tried with a simple tool and the outcome is the same: my tool is never invoked.
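For reference, this is the minimal shape I'd expect to work, pieced together from the WWDC session. A sketch with my own placeholder names; the tool has to be passed to the session initializer, and the instructions should give the model a reason to call it:

import FoundationModels

struct ContactTool: Tool {
    let name = "findContact"
    let description = "Finds a contact's details by name from the address book"

    @Generable
    struct Arguments {
        @Guide(description: "The name of the contact to look up")
        var contactName: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // A real implementation would query CNContactStore here.
        ToolOutput("No contact named \(arguments.contactName) was found.")
    }
}

let session = LanguageModelSession(
    tools: [ContactTool()],
    instructions: "Use the findContact tool whenever the user asks about a contact."
)
let answer = try await session.respond(to: "What's John's phone number?")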
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I've run into an issue with a small Foundation Models test using Generable: I get a strange error with this type, though simpler ones work fine.
Is this because the Generable is recursive, with a property of type [HTMLDiv]?
The error message is:
FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))
The code is:
import FoundationModels
import Playgrounds
@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}
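In case it helps anyone hitting the same error: my reading is that the schema builder can't resolve the self-reference in children: [HTMLDiv]. A workaround sketch that bounds the nesting depth explicitly instead of recursing (my own guess, not a confirmed fix):

@Generable
struct HTMLLeafDiv {
    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil
}

@Generable
struct HTMLRootDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    // One level of non-recursive children instead of [HTMLDiv].
    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLLeafDiv] = []
}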
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I am watching a few WWDC sessions on the Foundation Models framework and its usage, and it looks pretty cool.
I was wondering if it is possible to perform RAG over the user's documents on the device, and eventually on iCloud.
Let's say I have a lot of Pages documents about me, and I want the foundation model to access the information in those documents to answer questions about me.
How can this be done?
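I haven't found a published RAG API, but a naive version seems assemblable by hand: chunk the documents, rank chunks by embedding similarity to the question, and paste the best ones into the prompt. A rough sketch of that idea using NLEmbedding (my own approach, not an official pattern):

import NaturalLanguage
import FoundationModels

// Naive retrieval: rank document chunks by sentence-embedding distance
// to the question, then hand the best matches to the model as context.
func answer(question: String, documentChunks: [String]) async throws -> String {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else {
        return "Sentence embedding model unavailable"
    }
    // Smaller distance = more similar (cosine distance by default).
    let topChunks = documentChunks
        .map { ($0, embedding.distance(between: question, and: $0)) }
        .sorted { $0.1 < $1.1 }
        .prefix(3)
        .map { $0.0 }

    let session = LanguageModelSession(
        instructions: "Answer using only the provided context."
    )
    let prompt = """
    Context:
    \(topChunks.joined(separator: "\n---\n"))

    Question: \(question)
    """
    return try await session.respond(to: prompt).content
}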
Thanks
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Can someone tell me how to reinstall 18.2? I've been stuck on the waitlist for more than 48 hours now. Then, silly me, I deleted Image Playground thinking I could reinstall it. Not the case: it's no longer on my device, nor can I find it in the App Store. Does anyone have an answer, please?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi,
I'm working with the Vision framework to detect barcodes. I tested both EAN-13 and Data Matrix detection, and both work fine except for the QuadrilateralProviding values in the returned BarcodeObservation. The topLeft, topRight, bottomRight, and bottomLeft coordinates are rotated 90° counterclockwise (the physical bottom-left of the Data Matrix, the corner of the "L", is returned as the topLeft point of the observation). The same behavior occurs with EAN-13 barcodes.
Has anyone else experienced this orientation issue? Is it normal behavior, or should we expect a fix in an upcoming release of the Vision framework?
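The workaround I'm testing, in case the handler is defaulting to .up while the frames are actually rotated (my guess, and I may have the perform signature slightly off):

import Vision

// Supply the frame's real orientation so the QuadrilateralProviding
// corners come back in the expected order. (.right is a placeholder
// for whatever the capture pipeline actually produces.)
func detectBarcodes(in pixelBuffer: CVPixelBuffer) async throws -> [BarcodeObservation] {
    var request = DetectBarcodesRequest()
    request.symbologies = [.ean13, .dataMatrix]
    return try await request.perform(on: pixelBuffer, orientation: .right)
}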
In the WWDC24 session "What's New in Create ML", at 6:03 the presenter introduced TimeSeriesClassifier as a new component of Create ML Components. Where are the documentation and code examples for this feature? My app captures accelerometer time-series data that I want to classify.
Thank you so much!
Hi all, I'm tuning my app's prediction speed with a Core ML model. I watched and tried the methods in the videos "Improve Core ML integration with async prediction" and "Optimize your Core ML usage". I also used Instruments to find the bottleneck keeping my predictions from being faster.
Below is the Instruments result for my app; its prediction duration is 10.29 ms.
And below is the performance report, which shows an average prediction time of 5.55 ms, about half of my app's prediction time!
Below is part of my Instruments trace. I think the predictions should be considered quite frequent. Could they be faster?
How can I reach the same prediction speed as the performance report? The prediction speed on a MacBook Pro M2 is nearly the same as on a MacBook Air M1!
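For context, this is roughly the async setup I landed on after the video, simplified (MyModel and the helper are placeholders of mine):

import CoreML

// Simplified async prediction setup (placeholder names). Loading with
// .all lets Core ML choose between CPU, GPU, and Neural Engine.
func makePredictions(inputs: [any MLFeatureProvider]) async throws -> [any MLFeatureProvider] {
    let config = MLModelConfiguration()
    config.computeUnits = .all
    let model = try await MLModel.load(
        contentsOf: MyModel.urlOfModelInThisBundle, // generated class property
        configuration: config
    )
    var outputs: [any MLFeatureProvider] = []
    for input in inputs {
        // The async API allows overlapping predictions from concurrent
        // tasks; sequential awaits like this one won't overlap.
        outputs.append(try await model.prediction(from: input, options: MLPredictionOptions()))
    }
    return outputs
}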
I have a model that uses the Core ML delegate, and I'm getting the following warning whenever I set the model to nil. My understanding is that Core ML creates a cache in the app's storage but has trouble clearing it. As a result, the app's storage usage increases every time the model is loaded.
This StackOverflow post explains the problem in detail: App Storage Size Increases with CoreML usage
This is a critical issue because the cache will eventually fill up the phone’s storage:
doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/mobile/Containers/Data/Application/22DDB13E-DABA-4195-846F-F884135F37FE/tmp/F38A9824-3944-420C-BD32-78CE598BE22D-10125-00000586EFDFD7D6.mlmodelc/ : sourceURL= (null) : key={"isegment":0,"inputs":{"0_0":{"shape":[256,256,1,3,1]}},"outputs":{"142_0":{"shape":[16,16,1,222,1]},"138_0":{"shape":[16,16,1,111,1]}}} : identifierSource=0 : cacheURLIdentifier=E0CD0F44FB0417936057FC6375770CFDCCC8C698592ED412DDC9C81E96256872_C9D6E5E73302943871DC2C610588FEBFCB1B1D730C63CA5CED15D2CD5A0AC0DA : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 } : state=3 : programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 : attr={
ANEFModelDescription = {
ANEFModelInput16KAlignmentArray = (
);
ANEFModelOutput16KAlignmentArray = (
);
ANEFModelProcedures = (
{
ANEFModelInputSymbolIndexArray = (
0
);
ANEFModelOutputSymbolIndexArray = (
0,
1
);
ANEFModelProcedureID = 0;
}
);
kANEFModelInputSymbolsArrayKey = (
"0_0"
);
kANEFModelOutputSymbolsArrayKey = (
"138_0@output",
"142_0@output"
);
kANEFModelProcedureNameToIDMapKey = {
net = 0;
};
};
NetworkStatusList = (
{
LiveInputList = (
{
BatchStride = 393216;
Batches = 1;
Channels = 3;
Depth = 1;
DepthStride = 393216;
Height = 256;
Interleave = 1;
Name = "0_0";
PlaneCount = 3;
PlaneStride = 131072;
RowStride = 512;
Symbol = "0_0";
Type = Float16;
Width = 256;
}
);
LiveOutputList = (
{
BatchStride = 113664;
Batches = 1;
Channels = 111;
Depth = 1;
DepthStride = 113664;
Height = 16;
Interleave = 1;
Name = "138_0@output";
PlaneCount = 111;
PlaneStride = 1024;
RowStride = 64;
Symbol = "138_0@output";
Type = Float16;
Width = 16;
},
{
BatchStride = 227328;
Batches = 1;
Channels = 222;
Depth = 1;
DepthStride = 227328;
Height = 16;
Interleave = 1;
Name = "142_0@output";
PlaneCount = 222;
PlaneStride = 1024;
RowStride = 64;
Symbol = "142_0@output";
Type = Float16;
Width = 16;
}
);
Name = net;
}
);
} : perfStatsMask=0} was not loaded by the client.
I noticed that the ChatGPT is listed as an "Extension" in the Apple Intelligence settings on iOS 18.2 beta. Does this mean developers will be able to create their own extensions? Or will this be limited to larger companies to incorporate their own models into Apple Intelligence?
I'm trying to disable Writing Tools for a specific TextField using .writingToolsBehavior(.disabled), but when running the app on my iPhone 16 Pro with Apple Intelligence enabled, I can still use Writing Tools on the text box. I also see no difference with .writingToolsBehavior(.limited).
Is there something I'm doing wrong or is this a bug?
Sample code below:
import SwiftUI

struct ContentView: View {
    @State var text = ""

    var body: some View {
        VStack {
            TextField("Enter Text", text: $text)
                .writingToolsBehavior(.disabled)
        }
        .padding()
    }
}

#Preview {
    ContentView()
}