Hello,
I am testing the sample project provided here: Bringing advanced speech-to-text capabilities to your app.
On both macOS 26.0 beta and iOS 26.0 beta, the app crashes immediately on launch with a dyld "Symbol not found" error related to FoundationModels.framework.
It looks like this may be related to the sample being tested primarily on newer Apple Silicon devices, as I am seeing consistent crashes on an Intel MacBook and on an older iPhone.
I would appreciate any insight, confirmation, or guidance on whether this is a known limitation or if there is a workaround. Is it planned to be resolved soon?
Environment
macOS:
Device: MacBook Pro (Intel)
Processor: 2 GHz Quad-Core Intel Core i5
Graphics: Intel Iris Plus Graphics 1536 MB
Memory: 16 GB 3733 MHz LPDDR4X
OS: macOS Tahoe Version 26.0 Beta (25A5338b)
iOS:
Device: iPhone 11
Model Number: MHDD3HN/A
OS: iOS 26.0
Xcode:
Version: 26.0 beta 3 (17A5276g)
Crash (macOS)
Abort signal received. Excerpt from crash dump:
dyld`__abort_with_payload:
0x7ff80e3ad4a0 <+0>: movl $0x2000209, %eax
0x7ff80e3ad4a5 <+5>: movq %rcx, %r10
0x7ff80e3ad4a8 <+8>: syscall
-> 0x7ff80e3ad4aa <+10>: jae 0x7ff80e3ad4b4
Console:
dyld[9819]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /Users/userx/Library/Developer/Xcode/DerivedData/SwiftTranscriptionSampleApp-*/Build/Products/Debug/SwiftTranscriptionSampleApp.app/Contents/MacOS/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Crash (iOS)
Abort signal received. Excerpt from crash dump:
dyld`__abort_with_payload:
0x18f22b4b0 <+0>: mov x16, #0x209
0x18f22b4b4 <+4>: svc #0x80
-> 0x18f22b4b8 <+8>: b.lo 0x18f22b4d8
Console
dyld[2080]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /private/var/containers/Bundle/Application/.../SwiftTranscriptionSampleApp.app/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/FoundationModels
Question
Is this crash expected on Intel Macs and older iPhone models with the beta SDKs?
Is there an official statement on whether macOS 26.x releases support Intel, or whether Intel support is only expected to last until macOS 26.1?
Any suggested workarounds for testing this sample project on current hardware?
Is this a known limitation for the 26.0 beta, and if so, should we expect a fix in 26.0 or only in subsequent releases?
Attaching screenshots for reference.
Thank you in advance.
Dear Apple Foundation Models Development Team,
I am a developer integrating Apple Foundation Models (AFM) into my app and encountered the exceededContextWindowSize error when exceeding the 4096-token limit.
Proposal:
I suggest Apple develop a tool to estimate the token count of a prompt before sending it to the model. This tool could be integrated into the FoundationModels framework for ease of use.
Benefits:
A token estimation tool would help developers manage the context window limit and optimize performance. I hope Apple considers this proposal soon.
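In the meantime, I'm using a rough client-side heuristic to flag prompts that are likely to exceed the window. A minimal sketch, assuming an approximate characters-per-token ratio (the ratio is my guess, not an Apple figure):

    // Hedged heuristic sketch: roughly estimate the token count before calling
    // the model. The ~4 characters-per-token ratio is an assumption and real
    // tokenization will differ.
    func estimatedTokenCount(for prompt: String) -> Int {
        let charactersPerToken = 4.0
        return Int((Double(prompt.count) / charactersPerToken).rounded(.up))
    }

    // Usage: trim, summarize, or split when the estimate approaches the 4096-token window.
    let prompt = "…your prompt text…"
    if estimatedTokenCount(for: prompt) > 3_500 {
        // shorten the prompt before calling respond(to:)
    }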
Thank you!
Topic: Machine Learning & AI / SubTopic: Foundation Models
Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements? (A runtime-gating fallback is sketched after this list.)
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
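For context, the fallback I can implement today is a runtime check rather than an install-time restriction. A minimal sketch using the documented SystemLanguageModel availability check (ContentView and UnsupportedDeviceView are hypothetical placeholder views from my app):

    import FoundationModels
    import SwiftUI

    // Hedged sketch: gate the UI at runtime instead of (or in addition to)
    // an install-time capability restriction.
    struct RootView: View {
        private let model = SystemLanguageModel.default

        var body: some View {
            switch model.availability {
            case .available:
                ContentView()                         // the real app experience
            case .unavailable(let reason):
                UnsupportedDeviceView(reason: reason) // hypothetical fallback screen
            }
        }
    }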
Thanks in advance for your help.
I have encountered, a few times now, a situation where the answer gets "stuck" (I am now on beta 6).
This is an example.
Topic: Machine Learning & AI / SubTopic: Foundation Models
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool, which for these locations, finds their latitude/longitude on a map and populates that in the response.
However, I cannot get the language model session to see/use my tool.
I have code like this passing the tool to my prompt:
class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)

        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }

        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)

        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}
And then the tool that looks something like this:
import Foundation
import FoundationModels
import MapKit
struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )

        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate

        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }

        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}
But trying a bunch of different prompts has not triggered the tool. Instead, what appear to be totally random locations are filled into my resulting model, and at no point does a breakpoint hit my tool code.
Has anybody successfully gotten a tool to be called?
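While waiting for answers, the debugging step I'm using (hedged: it only shows whether a tool call was attempted at all, not why it wasn't) is to print the session's transcript after the stream finishes:

    // Hedged debugging sketch: inspect the conversation history after streaming
    // to see whether a findLatLon tool call ever appears in it.
    let transcript = session.transcript
    print(transcript)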
About the Core ML model encryption mentioned in: https://developer.apple.com/documentation/coreml/encrypting-a-model-in-your-app
When I encrypt the model, it loads perfectly on Apple silicon (M-series) machines. On the other hand, when I test the executable on an Intel MacBook, I get this error:
Error Domain=com.apple.CoreML Code=9 "Operation not supported on this platform." UserInfo={NSLocalizedDescription=Operation not supported on this platform.}
The Intel test machine is a 2019 MacBook Air with:
CPU: Intel i5-8210Y
OS: macOS 14.7.6 (23H626)
Apple T2 Security Chip
The encrypted model does load on M2 and M4 MacBook Air machines.
If the model is NOT encrypted, it also loads on the Intel test machine.
I did not find anything in the Core ML documentation that indicates whether model encryption/decryption supports Intel chips.
May I check whether decryption indeed does NOT support Intel chips?
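In case it helps others, the workaround I'm testing is a hedged fallback: catch the load failure and load an unencrypted copy of the model instead. EncryptedModel and PlainModel below are hypothetical generated model class names, not part of the sample:

    import CoreML

    // Hedged fallback sketch: try the encrypted model first; if loading fails
    // (e.g. the Code=9 error on the Intel machine), load an unencrypted copy.
    func loadModel() async throws -> MLModel {
        do {
            return try await EncryptedModel.load().model
        } catch {
            print("Encrypted model failed to load: \(error); falling back to unencrypted copy")
            return try await PlainModel.load().model
        }
    }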
Hi all,
I noticed on Friday that, on the new beta 5, LanguageModelSession.respond() from FoundationModels neither resolves nor throws most of the time when run in the simulator. The SwiftUI test app below was working perfectly with Xcode 26 beta 4 and the iOS 26 beta 4 simulator.
import SwiftUI
import FoundationModels

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onAppear {
            Task {
                do {
                    let session = LanguageModelSession()
                    let response = try await session.respond(to: "are cats better than dogs ???")
                    print(response.content)
                } catch {
                    print("error")
                }
            }
        }
    }
}
After updating to Xcode 26 beta 5 and the iOS 26 beta 5 simulator, the code now often hangs.
Occasionally it will work if I toggle Apple Intelligence on and off in Settings, but it’s unreliable.
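For debugging, I've been running a variant that logs the model's availability right before the call, to distinguish a genuine hang from an unavailable or still-downloading model (a small hedged sketch, same setup as above):

    // Hedged diagnostic sketch: print availability before respond(to:) so a
    // hang can be told apart from an unavailable model.
    Task {
        print("Availability: \(SystemLanguageModel.default.availability)")
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: "are cats better than dogs ???")
            print(response.content)
        } catch {
            print("respond threw: \(error)")
        }
    }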
Topic: Machine Learning & AI / SubTopic: Foundation Models
I can no longer achieve 100% ANE usage since upgrading to macOS 26 Beta 5; I used to be able to reach 100%. Has Apple activated throttling or power-saving features in the new betas? Is there any new rate limiting on the API? I can hardly get above 3 W or 40%.
I have an M4 Pro Mac mini (64 GB) with the High Power energy setting, on macOS 26 Beta 5.
Topic: Machine Learning & AI / SubTopic: Foundation Models
I'm running macOS 26 Beta 5. I noticed that I can no longer achieve 100% usage on the ANE as I could before with the Apple Foundation Models on-device model. Has Apple activated some kind of throttling or power limiting of the ANE? I cannot get above 3 W or 40% usage since upgrading. I'm on the High Power energy mode. Is there an API rate limit being applied?
I have an M4 Pro Mac mini with 64 GB of memory.
Topic: Machine Learning & AI / SubTopic: Foundation Models
Hey,
I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw!
I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI API, I do the following:
Find the tool in my [any Tool] array via the tool name I get from the model
if let tool = tools.first(where: { $0.name == functionCall.name }) {
    // ...
}
Parse the arguments of the tool call via GeneratedContent(json:)
let generatedContent = try GeneratedContent(json: functionCall.arguments)
Pass the tool and arguments to a function that calls tool.call(arguments: arguments) and returns the tool's output type
private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
    let arguments = try T.Arguments.init(generatedContent)
    return try await tool.call(arguments: arguments)
}
Up to this point, everything is working as expected. However, the tool's output type is any PromptRepresentable and I have no idea how to turn that into something that I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent but there is no fitting initializer.
Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type, I think. That would be unfortunate, because it's implemented so elegantly.
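The closest workaround I've found is a hedged one that only covers outputs that are GeneratedContent; anything else falls back to a plain description, which is an assumption about my own tools rather than a general solution:

    import FoundationModels

    // Hedged workaround sketch: handle GeneratedContent outputs explicitly and
    // fall back to a plain string for other PromptRepresentable values.
    private func encodeToolOutput(_ output: any PromptRepresentable) -> String {
        if let content = output as? GeneratedContent {
            // Assumption: GeneratedContent can be serialized back to JSON text,
            // mirroring the GeneratedContent(json:) initializer used above.
            return content.jsonString
        }
        return String(describing: output)
    }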
Thanks!
Topic: Machine Learning & AI / SubTopic: Foundation Models
I'm using Xcode 26 Beta 5 and get errors on any generation I try, however harmless, when wrapped in the #Playground macro.
#Playground {
    let session = LanguageModelSession()
    let topic = "pandas"
    let prompt = "Write a safe and respectful story about \(topic)."
    let response = try await session.respond(to: prompt)
}
Not seeing any issues on simulator or device. Anyone else seeing this or have any ideas?
Thanks for any help!
Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)
Topic: Machine Learning & AI / SubTopic: Foundation Models
Whenever I try to initialize a LanguageModelSession (let session = LanguageModelSession()), my app crashes with EXC_BAD_ACCESS.
SystemLanguageModel.default.availability returns available.
I tried running the two sample projects I found that use Foundation Models, FoundationModelsTripPlanner and SwiftTranscriptionSampleApp, and they both also crash—immediately on launch.
I commented out the Foundation Models logic from the SwiftTranscriptionSampleApp and ran it again, and it no longer crashed.
I'm on macOS 26 Beta 4 on an M1 Pro device. I'm based in Austria (EU), if that matters.
My sample app has been working with the following code:
func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}
However, in 26 beta 5 ToolOutput is no longer available; please advise what has changed.
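The migration I'm currently trying is based on the assumption (from other posts here, where call(arguments:) returns GeneratedContent directly) that an associated Output type replaces ToolOutput in beta 5:

    // Hedged migration attempt: assumes beta 5 replaces ToolOutput with an
    // associated Output type, so the tool returns GeneratedContent directly.
    func call(arguments: Arguments) async throws -> GeneratedContent {
        var temp: Int
        switch arguments.city {
        case .singapore: temp = Int.random(in: 30..<40)
        case .china: temp = Int.random(in: 10..<30)
        }
        return GeneratedContent(temp)
    }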
Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased/downloaded from the App Store by folks with capable devices? I would've thought there would be a Required Capabilities value that the App Store would hook into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier as that seems to target all M1 and above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set Minimum Deployment to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what's Foundation Model / Apple Intelligence capable.
While I understand that the majority of apps will want to selectively add Apple Intelligence features and so remain usable on devices that don't support it, the app experience I'm building doesn't make sense without Foundation Models, and I'd rather not have a large number of users download the app only to be told "Sorry, your device isn't Apple Intelligence capable."
Is it possible to expose a custom VirtIO device to a Linux guest running inside a VM — likely using QEMU backed by Hypervisor.framework? The guest would see this device as something like /dev/npu0, and it would use a kernel driver + userspace library to submit inference requests.
On the macOS host, these requests would be executed using CoreML, MPSGraph, or BNNS. The results would be passed back to the guest via IPC.
Does macOS allow this kind of "fake" NPU/GPU device?
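On the host side, the inference piece of such a bridge would reduce to an ordinary CoreML prediction call fed by whatever IPC transport the guest driver uses. A minimal hedged sketch of just that host piece (the model path and feature names are placeholders, and the VirtIO/IPC transport is out of scope here):

    import CoreML

    // Hedged sketch: host-side inference for a hypothetical guest request.
    // The compiled model path is a placeholder, not a real device protocol.
    func runHostInference(inputs: [String: MLMultiArray]) throws -> [String: MLMultiArray] {
        let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc") // hypothetical
        let config = MLModelConfiguration()
        config.computeUnits = .all // let CoreML pick ANE/GPU/CPU on the host
        let model = try MLModel(contentsOf: modelURL, configuration: config)

        let provider = try MLDictionaryFeatureProvider(dictionary: inputs)
        let result = try model.prediction(from: provider)

        var outputs: [String: MLMultiArray] = [:]
        for name in result.featureNames {
            if let value = result.featureValue(for: name)?.multiArrayValue {
                outputs[name] = value
            }
        }
        return outputs
    }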
I'm working on localizing my prompts to support multiple languages, and in some cases my prompts have string-interpolated Generable objects, for example:
"Given the following workout routine: \(routine), suggest one additional exercise to complement it."
In the strings dictionary, I'm only able to use String, Int, or Double parameters via %@ and %lld.
Has anyone found a way to accomplish this?
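The workaround I'm currently testing (hedged: it assumes I can render the Generable routine to plain text first, e.g. via a description string) is to keep only a %@ placeholder in the localized format and interpolate the pre-rendered text:

    import Foundation

    // Hedged sketch: "routineText" is a plain String rendered from the Generable
    // object beforehand; the localization key and format string are hypothetical.
    func localizedRoutinePrompt(routineText: String) -> String {
        let format = NSLocalizedString(
            "routine_prompt", // e.g. "Given the following workout routine: %@, suggest one additional exercise to complement it."
            comment: "Prompt asking the model to extend a workout routine"
        )
        return String(format: format, routineText)
    }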
Hi,
I’m developing an app targeting iOS 26, using the new FoundationModels framework to perform on-device LLM inference. I’m currently testing memory usage.
Does the memory used by FoundationModels—including model weights, KV cache, and any inference-related buffers—count toward my app’s Jetsam memory limit, or is any of it managed separately by the system?
I may need to run two concurrent inferences, each with a 4096-token context window. Is this explicitly supported or allowed by FoundationModels on iOS 26? Would this significantly increase the risk of memory-based termination?
Thanks in advance for any clarification.
Thanks.
Topic: Machine Learning & AI / SubTopic: Foundation Models
Hey Devs,
I'm trying to create my own real-time text detection like this Apple project: https://developer.apple.com/documentation/vision/extracting-phone-numbers-from-text-in-images
I want to use the new iOS 18 RecognizeTextRequest instead of the old VNRecognizeTextRequest in my SwiftUI project.
This is my delegate code with the camera setup. I removed the region of interest for debugging, but I'm trying to scan English words in books. The idea is to eventually get one word in the ROI, but I can't even get proper words yet, so I'm testing without the ROI in case my math is wrong.
@Observable
class CameraManager: NSObject, AVCapturePhotoCaptureDelegate {
    ...

    override init() {
        super.init()
        setUpVisionRequest()
    }

    private func setUpVisionRequest() {
        textRequest = RecognizeTextRequest(.revision3)
    }

    ...

    func setup() -> Bool {
        captureSession.beginConfiguration()

        guard
            let captureDevice = AVCaptureDevice.default(
                .builtInWideAngleCamera, for: .video, position: .back)
        else {
            return false
        }
        self.captureDevice = captureDevice

        guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice)
        else {
            return false
        }

        /// Check whether the session can add input.
        guard captureSession.canAddInput(deviceInput) else {
            print("Unable to add device input to the capture session.")
            return false
        }

        /// Add the input and output to session
        captureSession.addInput(deviceInput)

        /// Configure the video data output
        videoDataOutput.setSampleBufferDelegate(
            self, queue: videoDataOutputQueue)

        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            videoDataOutput.connection(with: .video)?
                .preferredVideoStabilizationMode = .off
        } else {
            return false
        }

        // Set zoom and autofocus to help focus on very small text
        do {
            try captureDevice.lockForConfiguration()
            captureDevice.videoZoomFactor = 2
            captureDevice.autoFocusRangeRestriction = .near
            captureDevice.unlockForConfiguration()
        } catch {
            print("Could not set zoom level due to error: \(error)")
            return false
        }

        captureSession.commitConfiguration()

        // potential issue with background vs dispatchqueue ??
        Task(priority: .background) {
            captureSession.startRunning()
        }

        return true
    }
}

// Issue here ???
extension CameraManager: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(
        _ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
        from connection: AVCaptureConnection
    ) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        Task {
            textRequest.recognitionLevel = .fast
            textRequest.recognitionLanguages = [Locale.Language(identifier: "en-US")]

            do {
                let observations = try await textRequest.perform(on: pixelBuffer)
                for observation in observations {
                    let recognizedText = observation.topCandidates(1).first
                    print("recognized text \(recognizedText)")
                }
            } catch {
                print("Recognition error: \(error.localizedDescription)")
            }
        }
    }
}
The results I get look like this (full page of English text from a book):
recognized text Optional(RecognizedText(string: e bnUI W4, confidence: 0.5))
recognized text Optional(RecognizedText(string: ?'U, confidence: 0.3))
recognized text Optional(RecognizedText(string: traQt4, confidence: 0.3))
recognized text Optional(RecognizedText(string: li, confidence: 0.3))
recognized text Optional(RecognizedText(string: 15,1,#, confidence: 0.3))
recognized text Optional(RecognizedText(string: jllÈ, confidence: 0.3))
recognized text Optional(RecognizedText(string: vtrll, confidence: 0.3))
recognized text Optional(RecognizedText(string: 5,1,: 11, confidence: 0.5))
recognized text Optional(RecognizedText(string: 1141, confidence: 0.3))
recognized text Optional(RecognizedText(string: jllll ljiiilij41, confidence: 0.3))
recognized text Optional(RecognizedText(string: 2f4, confidence: 0.3))
recognized text Optional(RecognizedText(string: ktril, confidence: 0.3))
recognized text Optional(RecognizedText(string: ¥LLI, confidence: 0.3))
recognized text Optional(RecognizedText(string: 11[Itl,, confidence: 0.3))
recognized text Optional(RecognizedText(string: 'rtlÈ131, confidence: 0.3))
Even with the ROI set to a specific rectangle normalized for Vision, I get the same results, with single characters coming back as gibberish.
Any help would be amazing thank you.
Am I using the buffer right?
Am I using the new perform(on: CVPixelBuffer) right?
Maybe I didn't set up my camera properly? I can provide more code if needed.
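Two things I plan to try next (hedged guesses, not confirmed fixes): delivering upright frames by setting the connection's rotation angle, and switching to .accurate recognition for dense book text.

    // In setup(), after adding the video data output: deliver frames in
    // portrait/reading orientation so Vision sees upright text.
    if let connection = videoDataOutput.connection(with: .video) {
        if connection.isVideoRotationAngleSupported(90) {
            connection.videoRotationAngle = 90
        }
    }

    // In the delegate: prefer accurate recognition for dense book pages,
    // then drop low-confidence candidates before using them.
    textRequest.recognitionLevel = .accurate
    textRequest.recognitionLanguages = [Locale.Language(identifier: "en-US")]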
Context
I trained a LoRA adapter for Apple’s on-device language model using the Foundation Models Adapter Training Toolkit v0.2.0 on macOS 26 beta 4. Although training completes successfully, loading the resulting .fmadapter package fails with:
Adapter is not compatible with the current system base model.
What I’ve Observed,
Hard-coded Signature: In export/constants.py, the toolkit sets,
BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Metadata Injection: The export_fmadapter.py script writes this value into the adapter’s metadata:
self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE
Compatibility Check: At runtime, the Foundation Models framework compares the adapter’s baseModelSignature against the OS’s system model signature, and reports compatibleAdapterNotFound if they don’t match—without revealing the expected signature.
Questions
Signature Generation - What exactly does the toolkit hash to derive BASE_SIGNATURE? Is it a straight SHA-1 of base-model.pt, or is there an additional transformation?
Recomputing for Beta 4 - Is there a way to locally compute the correct signature for the macOS 26 beta 4 system model?
Toolkit Updates - Will Apple release Adapter Training Toolkit v0.3.0 with an updated BASE_SIGNATURE for beta 4, or is there an alternative workaround to generate it myself?
Any guidance on how the Foundation Models framework derives and verifies the base model signature—or how to regenerate it for beta 4—would be greatly appreciated.
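For question 1, treating "straight SHA-1 of base-model.pt" purely as a hypothesis, comparing a locally computed hash against BASE_SIGNATURE is easy to script; a small CryptoKit sketch:

    import CryptoKit
    import Foundation

    // Hedged experiment: hash base-model.pt and compare against BASE_SIGNATURE.
    // Whether the toolkit hashes the raw file (or something else entirely) is unconfirmed.
    func sha1Hex(of fileURL: URL) throws -> String {
        let data = try Data(contentsOf: fileURL)
        return Insecure.SHA1.hash(data: data)
            .map { String(format: "%02x", $0) }
            .joined()
    }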
I have a question. In China, long-pressing a picture in the Photos album can segment out the subject. Is this model a local (on-device) model? Is there any documentation about it? Can developers use it?
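For reference, the closest public API appears to be Vision's foreground instance mask request (iOS 17 and later); whether it is the same model behind the Photos long-press behavior is an assumption. A minimal sketch:

    import Vision

    // Hedged sketch: generate a subject mask similar to the long-press
    // "lift subject" behavior; sameness with the Photos model is unconfirmed.
    func subjectMask(for cgImage: CGImage) throws -> CVPixelBuffer? {
        let request = VNGenerateForegroundInstanceMaskRequest()
        let handler = VNImageRequestHandler(cgImage: cgImage)
        try handler.perform([request])
        guard let observation = request.results?.first else { return nil }
        return try observation.generateScaledMaskForImage(
            forInstances: observation.allInstances,
            from: handler
        )
    }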