Hello,
I am interested in using jax-metal to train ML models using Apple Silicon. I understand this is experimental.
After installing jax-metal according to https://developer.apple.com/metal/jax/, my Python code fails with the following error:
JaxRuntimeError: UNKNOWN: -:0:0: error: unknown attribute code: 22
-:0:0: note: in bytecode version 6 produced by: StableHLO_v1.12.1
My issue is identical to the one reported here: https://github.com/jax-ml/jax/issues/26968#issuecomment-2733120325, and is fixed by pinning to jax-metal 0.1.1, jax 0.5.0, and jaxlib 0.5.0.
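For anyone else hitting this, the workaround above can be applied with a single pin (the versions are exactly the ones noted in that issue):
pip install jax-metal==0.1.1 jax==0.5.0 jaxlib==0.5.0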
Thank you!
Also submitted as feedback (ID: FB20612561).
Tensorflow-metal fails on tensorflow versions above 2.18.1, but works fine on tensorflow 2.18.1
In a new python 3.12 virtual environment:
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
Prints error:
Traceback (most recent call last):
File "", line 1, in
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/init.py", line 438, in
_ll.load_library(_plugin_dir)
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
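The workaround I'm using for now, per the title, is to pin back to the last TensorFlow version that works for me and leave tensorflow-metal unpinned:
pip install tensorflow==2.18.1 tensorflow-metal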
Topic: Machine Learning & AI
SubTopic: General
Tags: Developer Tools, Metal, Machine Learning, tensorflow-metal
Is the Foundation Models framework mature enough to take input from the Apple Vision framework and generate responses? Something similar to what Google's Gemini does, although at a much smaller scale and for a very specific niche.
I'm on Tahoe 26.1 / M3 MacBook Air. I'm using VNDetectFaceRectanglesRequest as correctly as I can, as in the minimal command-line program attached below. For some reason, I always get:
MLE5Engine is disabled through the configuration
printed. I couldn't find any notes in the developer docs saying that VNDetectFaceRectanglesRequest cannot use the Apple Neural Engine. I'm assuming something is wrong with my code, but I wasn't able to find anything in the documentation pointing to what it might be, and I couldn't find the error message above online either. I would appreciate your help a lot; thank you in advance.
The code below accesses video from AVCaptureDevice.DeviceType.builtInWideAngleCamera. Currently it directly chooses the 0th format, which has the largest resolution (Full HD on my M3 MBA) and the "420v" pixel format (4:2:0 chroma subsampling, video range).
After accessing video, it performs a VNDetectFaceRectanglesRequest. It prints "VNDetectFaceRectanglesRequest completion Handler called" many times, then prints the error message above, then continues printing "VNDetectFaceRectanglesRequest completion Handler called" until the user quits it.
To run it in Xcode: File > New Project > macOS Command Line Tool. Paste in the code below, then click the root file > Targets > Signing & Capabilities > Hardened Runtime > Resource Access > Camera.
A possible explanation could be that either Apple's internal Core ML code for this function runs on GPU/CPU only, or it doesn't accept the 420v format supplied by the MacBook Air camera.
import AVKit
import Vision

var videoDataOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
var detectionRequests: [VNDetectFaceRectanglesRequest]?
var videoDataOutputQueue: DispatchQueue = DispatchQueue(label: "queue")

class XYZ: /*NSViewController or NSObject*/NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    func viewDidLoad() {
        //super.viewDidLoad()
        let session = AVCaptureSession()
        let inputDevice = try! self.configureFrontCamera(for: session)
        self.configureVideoDataOutput(for: inputDevice.device, resolution: inputDevice.resolution, captureSession: session)
        self.prepareVisionRequest()
        session.startRunning()
    }

    fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
        let deviceFormat = device.formats[0]
        print(deviceFormat)
        let dims = CMVideoFormatDescriptionGetDimensions(deviceFormat.formatDescription)
        let resolution = CGSize(width: CGFloat(dims.width), height: CGFloat(dims.height))
        return (deviceFormat, resolution)
    }

    fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: .video, position: AVCaptureDevice.Position.unspecified)
        let device = deviceDiscoverySession.devices.first!
        let deviceInput = try! AVCaptureDeviceInput(device: device)
        captureSession.addInput(deviceInput)
        let highestResolution = self.highestResolution420Format(for: device)!
        try! device.lockForConfiguration()
        device.activeFormat = highestResolution.format
        device.unlockForConfiguration()
        return (device, highestResolution.resolution)
    }

    fileprivate func configureVideoDataOutput(for inputDevice: AVCaptureDevice, resolution: CGSize, captureSession: AVCaptureSession) {
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        captureSession.addOutput(videoDataOutput)
    }

    fileprivate func prepareVisionRequest() {
        let faceDetectionRequest: VNDetectFaceRectanglesRequest = VNDetectFaceRectanglesRequest(completionHandler: { (request, error) in
            print("VNDetectFaceRectanglesRequest completion Handler called")
        })
        // Start with detection
        detectionRequests = [faceDetectionRequest]
    }

    // MARK: AVCaptureVideoDataOutputSampleBufferDelegate
    // Handle delegate method callback on receiving a sample buffer.
    public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var requestHandlerOptions: [VNImageOption: AnyObject] = [:]
        let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
        if cameraIntrinsicData != nil {
            requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
        }
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        // No tracking object detected, so perform initial detection
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: CGImagePropertyOrientation.up, options: requestHandlerOptions)
        try! imageRequestHandler.perform(detectionRequests!)
    }
}

let X = XYZ()
X.viewDidLoad()
sleep(9999999)
Hello folks! Taking a look at https://developer.apple.com/documentation/foundationmodels, it's not clear how to use other models there.
Does anyone know whether it's possible to use a model trained elsewhere (imported) with the Foundation Models framework?
Thanks!
Hello!
I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error as well (given below). Would appreciate some more clarity around this. Thank you.
Code Block:
from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)
Error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
3 metadata = Metadata(
4 author="3P developer",
5 description="An adapter that writes play scripts.",
6 )
8 export_fmadapter(
9 output_dir="./",
10 adapter_name="myPlaywritingAdapter",
(...) 13 draft_checkpoint="draft-model-final.pt",
14 )
File /workspace/export/export_fmadapter.py:11
8 from typing import Any
10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
13 logger = logging.getLogger(__name__)
16 class MetadataKeys(enum.StrEnum):
File /workspace/export/export_utils.py:15
13 import torch
14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
17 from coremltools.optimize._utils import LutParams
ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
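One check I can run (a diagnostic sketch, not a confirmed fix) is to verify that the installed coremltools actually ships the compiled submodule the exporter imports; if the second line below fails on its own, the problem is the coremltools install rather than the adapter toolkit:
python -c "import coremltools; print(coremltools.__version__, coremltools.__file__)"
python -c "from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter"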
The WWDC25 session "Explore large language models on Apple silicon with MLX" talks about using your own data to fine-tune a large language model, but it doesn't explain what kind of data can be used; it just shows the command to run and how to point it at the data folder. Can I use PDFs, Word documents, or Markdown files to train the model? Are there any code examples on GitHub that demonstrate how to do this?
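From what I can tell so far (an assumption based on the mlx-examples LoRA samples, not anything stated in the video), the tooling expects plain-text JSONL rather than PDFs or Word documents, so documents would need to be converted to text first. Something like a train.jsonl with one sample per line:
{"text": "An example passage extracted from one of my documents, converted to plain text."}
and then pointing the fine-tuning command at the folder holding train.jsonl / valid.jsonl, e.g. mlx_lm.lora --model <model> --train --data ./data (the model name and flags here are placeholders; the exact invocation is whatever the video and mlx-examples show).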
I just updated to the iOS 26 beta (23A5336a) to test an app I am developing.
I'm running an MLModel loaded from a .mlmodelc file.
On the current iOS version, 18.6.2, the model runs as expected with no issues.
However, on iOS 26 I now get an error when trying to run inference on the model, passing a camera frame into it.
Below is the error I see when I attempt to run an inference.
At the bottom it says "Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model". Does this indicate I need to convert my model or something? I don't understand, since it runs normally on iOS 18.
Any help getting this to run again would be greatly appreciated.
Thank you,
processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 :
[Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
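One thing I plan to try as a diagnostic (not a fix) is loading the same compiled model with the Neural Engine excluded, to see whether the failure is specific to the ANE path on iOS 26. A minimal sketch, with the model name taken from the path in the log above:
import CoreML

// Diagnostic only: load the compiled model with the Neural Engine excluded,
// so the ANE code path shown in the log is skipped entirely.
func loadModelWithoutANE() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU  // CPU and GPU only, no ANE

    // Model name assumed from the log; adjust to however the app locates it.
    let url = Bundle.main.url(forResource: "faceParsing", withExtension: "mlmodelc")!
    return try MLModel(contentsOf: url, configuration: config)
}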
Hi all, I am interested in unlocking unique applications with the new foundation models. I have a few questions regarding the availability of the following features:
Image input: the June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), but I can't find any information about using images as the input/prompt for the foundation models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input into a multimodal on-device LLM (VLM) instead, for image-understanding features like "Which player is holding the ball in the image?".
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Vision, Machine Learning, Core ML, Apple Intelligence
No matter what, the LanguageModelSession always returns very lengthy, verbose responses. I set the maximumResponseTokens option to various small numbers, but it doesn't appear to have any effect. I've even used an instructions format asking it to keep responses between 3 and 8 words, but it returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
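For context, this is roughly how I'm passing the option (a simplified sketch; the prompt and the token limit are placeholders):
import FoundationModels

// Minimal sketch: cap the output length per request via GenerationOptions.
func shortReply() async throws -> String {
    let session = LanguageModelSession()
    let options = GenerationOptions(maximumResponseTokens: 30)
    let response = try await session.respond(
        to: "Summarize today's weather in a few words.",
        options: options
    )
    return response.content
}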
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool, which for these locations, finds their latitude/longitude on a map and populates that in the response.
However, I cannot get the language model session to see/use my tool.
I have code like this passing the tool to my prompt:
class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}
And then the tool that looks something like this:
import Foundation
import FoundationModels
import MapKit

struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )
        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate
        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}
But trying a bunch of different prompts has not triggered the tool; instead, what appear to be totally random locations get filled into my resulting model, and at no point does a breakpoint in my tool code get hit.
Has anybody successfully gotten a tool to be called?
That was My Fresh Basket in Spokane, Washington; they are using an expired Wi-Fi certificate domain through godaddy.com that expired April 30, 2020. I have complete information on it if anybody needs me to forward it or wants to examine it themselves, but be wary when you connect to the Wi-Fi at My Fresh Basket in Spokane, Washington.
Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0, but they change the model or guardrails in 26.1 and it breaks your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, as in all of the server-based LLM provider APIs? If not, this sounds risky to build on.
When using Foundation Models, is it possible to ask the model to produce output in a specific language, apart from giving an instruction like "Provide answers in <language>."? (I tried that and it kind of worked, but it seems fragile.)
I haven't noticed an API to do so and have a use-case where the output should be in a user-selectable language that is not the current system language.
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)
Error:
Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest
Playground to reproduce:
#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
    } catch {
        let error = error
    }
}
I have a MacBook Pro M3 Pro with 18GB of RAM and was following the instructions to fine-tune the foundation model given here: https://developer.apple.com/apple-intelligence/foundation-models-adapter/
However, while following the code sample in the example Jupyter notebook, my Mac hangs on the second code cell. Specifically:
from examples.generate import generate_content, GenerationConfiguration
from examples.data import Message

output = generate_content(
    [[
        Message.from_system("A conversation between a user and a helpful assistant. Taking the role as a play writer assistant for a kids' play."),
        Message.from_user("Write a script about penguins.")
    ]],
    GenerationConfiguration(temperature=0.0, max_new_tokens=128)
)
output[0].response
After some debugging, I was getting the following error:
RuntimeError: MPS backend out of memory (MPS allocated: 22.64 GB, other allocations: 5.78 MB, max allowed: 22.64 GB). Tried to allocate 52.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
So is my machine not capable enough to adapter-train Apple's Foundation Model? And if so, what is the recommended spec, and could it be documented somewhere? Thanks!
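The error text itself points at one knob I could try: launching with the MPS allocation limit disabled. I understand this only works around the memory pressure and may destabilize the machine, so I'm treating it as a diagnostic rather than a fix (assuming the notebook server is started from a shell):
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 jupyter notebook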
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea of what it means or what to attempt to fix.
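In the meantime I'm catching the typed error to at least get a readable case name instead of the numeric code. A minimal sketch, assuming the FoundationModels.LanguageModelSession.GenerationError type named in the message:
import FoundationModels

// Minimal sketch: surface the typed error case instead of the opaque "error 4".
func askModel(_ prompt: String) async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        // The case printed here is more actionable than the numeric code
        // in the localized description.
        print("Generation failed:", error)
    } catch {
        print("Other error:", error)
    }
}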
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Machine Learning, Create ML, Apple Intelligence
Download the Foundation Models Adapter Training Toolkit
Hi, after I clicked the download button, I was redirected to this page, https://developer.apple.com, and the toolkit did not download.
Are there any details available on how Xcode 26 connects to third party model providers? For example, can Xcode only use OpenAI compatible API endpoints?
I'm seeing this error a lot in my console log of my iPhone 15 Pro (Apple Intelligence enabled):
com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1
Are there entitlements / permissions I need to enable in Xcode that I forgot to do?
Code example
Here's how I'm initializing the language model session:
private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
        my instructions
        """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}