Hello everyone. I have a visual convolutional model and a video that has been decoded into many frames. When I run inference on each frame in a loop, the speed is a bit slow, so I started 4 threads, each running inference simultaneously. However, the overall speed is the same as serial inference, and each individual forward pass is actually slower. I used the mactop tool to check GPU utilization, and it was only around 20%. Is this normal? How can I speed this up?
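If this is a Core ML model, one thing that often helps more than adding threads is batching the decoded frames into a single prediction call, so the GPU can pipeline the work instead of handling one small dispatch per frame. A minimal sketch, assuming an MLModel and per-frame inputs already wrapped as MLFeatureProvider values (the names here are illustrative, not from the post above):

import CoreML

// Run inference on many decoded frames in one batched call.
// `frameInputs` is a hypothetical array of per-frame feature providers.
func predictFrames(model: MLModel, frameInputs: [MLFeatureProvider]) throws -> [MLFeatureProvider] {
    // Wrap all frames in a single batch provider.
    let batch = MLArrayBatchProvider(array: frameInputs)

    // One batched call lets Core ML schedule the frames together instead of
    // issuing one small GPU dispatch per frame from several threads.
    let results = try model.predictions(fromBatch: batch)

    // Unpack the batch back into an array of per-frame outputs.
    return (0..<results.count).map { results.features(at: $0) }
}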
For example:
I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM model and have the model rewrite the items so they start with an action verb.
I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed.
I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model.
I've tried numbering them in order and adding an originalSortOrder field to my generable type, but it doesn't work reliably.
Any suggestions?
I could do them one at a time, but I also have a use case where I'm asking for them to be organized in sections, and while I've instructed the model not to rename anything, it still happens. It's just all very nondeterministic.
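For reference, here is roughly what the numbering approach described above looks like as a @Generable output type, with the GUIDs kept in a local lookup table rather than in the prompt. The names are illustrative, and this is a sketch of the idea rather than a confirmed fix for the reliability issue:

import FoundationModels

@Generable
struct RewrittenToDo {
    @Guide(description: "The index of the original to-do item this rewrite corresponds to")
    var originalIndex: Int

    @Guide(description: "The to-do item rewritten to start with an action verb")
    var text: String
}

// The prompt lists items as "0: Buy milk", "1: Call dentist", ... and
// `originalIndex` is mapped back to the GUID array after generation, so the
// GUIDs never reach the model.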
Seeing this error from time to time:
Context(debugDescription: "Content contains 4089 tokens, which exceeds the maximum allowed context size of 4096.", underlyingErrors: [])
Of course, 4089 is less than 4096, so what is this telling me, and how do I work around it? Is the limit actually lower than 4096?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Overview
I'm experiencing a critical issue where TensorFlow-metal and PyArrow seem to be incompatible when installed together in the same environment. Whenever both packages are present, TensorFlow crashes and the kernel dies during execution.
Environment Details
macOS Version: 15.3.2
Mac Model: MacBook Pro Max M3
Python Version: 3.11
TensorFlow Version: 2.19
PyArrow Version: 19.0.0
Issue Description:
When both TensorFlow-metal and PyArrow are installed in the same Python environment, any attempt to use TensorFlow results in immediate kernel crashes. The issue appears to be a compatibility problem between these two packages rather than a problem with either package individually.
Steps to Reproduce
Create a new Python environment:
conda create -n tf-metal python=3.11
Install TensorFlow-metal:
pip install tensorflow tensorflow-metal
Install PyArrow:
pip install pyarrow
Run the following minimal example:
import numpy as np
import tensorflow as tf

# Create a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.summary()  # This works fine

# Generate some dummy data
X = np.random.random((100, 2))
y = np.random.random((100, 1))

# The crash happens exactly at this line
model.fit(X, y, epochs=5, batch_size=32)  # CRASH: Kernel dies here
Result: Kernel crashes with no error message
What I've Tried
Reinstalling both packages in different orders
Using different versions of both packages
Creating isolated environments
Checking system logs for additional error information
The only workaround I've found is to use separate environments for each package, which isn't practical for my workflow as I need both libraries for my data processing and machine learning pipeline.
Questions
Has anyone else encountered this specific compatibility issue?
Are there known workarounds that allow both packages to coexist?
Is this a known issue that's being addressed in upcoming releases?
Any insights, suggestions, or assistance would be greatly appreciated. I'm happy to provide any additional information that might help diagnose this problem. Thank you in advance for your help!
Topic:
Machine Learning & AI
SubTopic:
Core ML
With respond() methods, the foundation model works well enough. With streamResponse() methods, the responses are very repetitive, verbose, and messy.
My app that uses the foundation model consumes more than 500 MB of memory on an iPad Pro when running from Xcode. Devices supporting Apple Intelligence have at least 8 GB of memory. Should Apple use a bigger model (3 to 4 GB of memory) for better streamed responses?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Based on the documentation, it appears that MLTensor can be used to perform tensor operations on the ANE (Apple Neural Engine) by wrapping the tensor operations in withMLTensorComputePolicy with an MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU, and CPU).
However, when using the Instruments app, it appears that the tensor operations never get executed on the Neural Engine.
It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a Core ML model file).
Based on this example, I've written some simple code to try it:
import Foundation
import CoreML

print("Starting...")

let semaphore = DispatchSemaphore(value: 0)

Task {
    await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
        let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
        let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
        let v3 = v1.matmul(v2)
        await v3.shapedArray(of: Float.self) // is 70.0

        let m1 = MLTensor(shape: [2, 3], scalars: [
            1, 2, 3,
            4, 5, 6
        ], scalarType: Float.self)
        let m2 = MLTensor(shape: [3, 2], scalars: [
            7, 8,
            9, 10,
            11, 12
        ], scalarType: Float.self)
        let m3 = m1.matmul(m2)
        let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

        // Supports broadcasting
        let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
        let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
        let m6 = m4.matmul(m5)

        print("Done")
        return result
    }
    semaphore.signal()
}
semaphore.wait()
Here's what I get on the Instruments app:
Notice how the Neural Engine line shows no usage.
I've run this test on an M1 Max MacBook Pro.
As described in the title, the model I have built works completely on the iPhone 15 (A16 Bionic); on the other hand, it does not run on the iPhone 16 (A18), failing with the following error message.
E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED.
E5RT: MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED (11)
Loading the model consumes 1.5 to 1.6 GB of RAM, after which consumption drops to less than 100 MB on both the iPhone 15 and the iPhone 16. After that, only on the iPhone 16, the above error appears in the Xcode log, memory consumption surges to 5 to 6 GB, and the system kills the app. It works well only on the iPhone 15.
This model is built with Core ML Tools. So far I have tried deployment targets from iOS 16 to 18 and the compute units CPU_AND_NE and ALL, but none of these has resolved the issue. What kind of fix should I try? The conversion settings are:
minimum_deployment_target = ct.target.iOS18
compute_units = ct.ComputeUnit.ALL
compute_precision = ct.precision.FLOAT16
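As a diagnostic rather than a fix, it may help to confirm whether the failure is specific to the ANE compiler on the A18 by loading the same compiled model with the Neural Engine excluded at runtime. A minimal sketch; the model URL is a placeholder:

import CoreML

// Load the compiled model with the Neural Engine excluded, so only the CPU
// and GPU are used. If this loads and runs on the iPhone 16, the problem is
// likely confined to ANE compilation of this particular model.
func loadWithoutNeuralEngine(modelURL: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuAndGPU
    return try MLModel(contentsOf: modelURL, configuration: configuration)
}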
Hey,
I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw!
I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI api, I do the following:
Find the tool in my [any Tool] array via the tool name I get from the model
if let tool = tools.first(where: { $0.name == functionCall.name }) {
// ...
}
Parse the arguments of the tool call via GeneratedContent(json:)
let generatedContent = try GeneratedContent(json: functionCall.arguments)
Pass the tool and arguments to a function that calls tool.call(arguments: arguments) and returns the tool's output type
private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
    let arguments = try T.Arguments.init(generatedContent)
    return try await tool.call(arguments: arguments)
}
Up to this point, everything is working as expected. However, the tool's output type is any PromptRepresentable and I have no idea how to turn that into something that I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent but there is no fitting initializer.
Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type, I think. That would be unfortunate, because it's implemented so elegantly.
Thanks!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Dear Apple Developer Team,
I am writing to request the addition of GS1 DataBar Stacked (both regular and expanded variants) to the barcode symbologies supported by the Vision framework (VNBarcodeSymbology) and VisionKit's DataScannerViewController.
Currently, Vision supports several GS1 DataBar formats, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, GS1 DataBar Stacked is widely used in industries such as retail, pharmaceuticals, and logistics, where space constraints prevent the use of the standard GS1 DataBar format. Many businesses rely on this symbology to encode GTINs and other product data, but Apple's barcode scanning API does not explicitly support it.
Why This Feature Matters:
Essential for Small Packaging: GS1 DataBar Stacked is commonly used on small product labels where a standard linear barcode does not fit.
Widespread Industry Adoption: Many point-of-sale (POS) systems and inventory management tools require this symbology.
Improves iOS Adoption for Enterprise Use: Adding support would make Apple’s Vision framework a more viable solution for businesses that currently rely on third-party barcode scanning SDKs.
Feature Request:
Please add GS1 DataBar Stacked and GS1 DataBar Expanded Stacked to the recognized symbologies in:
VNBarcodeSymbology (for Vision framework)
DataScannerViewController (for VisionKit)
This addition would enhance the versatility of Apple’s barcode scanning tools and reduce the need for third-party libraries.
I appreciate your consideration of this request and would be happy to provide more details or test implementations if needed.
Thank you for your time and support!
Best regards
Hey everyone
I'm Manish Mehta, field CTO at Centific. I recently read Apple's white paper, The Illusion of Thinking, and it got me thinking about the current state of AI reasoning. Who here has read it?
The paper highlights how LLMs often rely on pattern recognition rather than genuine understanding. When faced with complex tasks, their performance can degrade significantly.
I was just thinking that to move beyond this problem, we need to explore approaches that combine Deeper Reasoning Architectures for true cognitive capability with Deep Human Partnership to guide AI toward better judgment and understanding.
The first part means fundamentally rewiring AI to reason. This involves advancing deeper architectures like World Models, which can build internal simulations to understand real-world scenarios, and Neurosymbolic systems, which combine neural networks with symbolic reasoning for deeper self-verification.
Additionally, we need to look at deep human partnership and scalable oversight. An AI cannot learn certain things from data alone; it lacks real-world judgment. Among other things, deep domain-expert human partners are needed to instill this wisdom, validate the AI's entire reasoning process, build its ethical guardrails, and act as skilled adversaries to find hidden flaws before they can cause harm.
What do you all think? Is this focus on a deeper partnership between advanced AI reasoning and deep human judgment the right path forward?
Agree? Disagree?
Thanks
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Download the Foundation Models Adaptor Training Toolkit
Hi, after I clicked the download button, I was redirected to this page, https://developer.apple.com, and the toolkit did not download.
I'm new to Swift and was hoping Playgrounds would support loading adaptors. When I tried, I got a permissions error; I'm thinking it's because the adaptor isn't in the project, and Playgrounds don't like going outside the project?
A tutorial and some sample code would be helpful.
Also, some benchmarks on how long training is expected to take would be helpful. Selfishly, I'm on an M2 Mac mini.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Is the face and body detection service in the Vision framework a local model or a cloud model? Is there a performance report?
https://developer.apple.com/documentation/vision
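For context, a minimal sketch of the Vision requests presumably being asked about here (face rectangle and human body pose detection); this only illustrates the API surface, not where the processing happens or how fast it is:

import Vision

// Run face-rectangle and body-pose detection on a single image.
func detectFacesAndBodies(in cgImage: CGImage) throws {
    let faceRequest = VNDetectFaceRectanglesRequest()
    let bodyRequest = VNDetectHumanBodyPoseRequest()

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([faceRequest, bodyRequest])

    print("Faces found:", faceRequest.results?.count ?? 0)
    print("Bodies found:", bodyRequest.results?.count ?? 0)
}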
Foundation Models framework worked perfectly on macOS 26 Beta 2, but starting from Beta 3 and continuing through Beta 6 (latest), I get dyld symbol errors even with the exact code from Apple's documentation.
Environment:
macOS 26.0 Beta 6 (25A5351b)
Xcode 26 Beta 6
M4 Max MacBook Pro
Apple Intelligence enabled and downloaded
Error Details:
dyld[Process]: Symbol not found:
_$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /path/to/app.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Code Used (Exact from Documentation):
import FoundationModels
// This worked on Beta 2, crashes on Beta 3+
let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Hello")
What I've Verified:
FoundationModels.framework exists in /System/Library/Frameworks/
Framework is properly linked in Xcode project
Apple Intelligence is enabled and working
Same code works in older beta versions
Issue persists even with completely fresh Xcode projects
Analysis:
The dyld error suggests the LanguageModelSession(model:) constructor is missing. The symbol shows it's looking for a constructor with parameters (model:guardrails:tools:instructions:), but the documentation still shows the simple (model:) constructor.
Questions:
Has the LanguageModelSession API changed since Beta 2?
Should we now use the constructor with guardrails/tools/instructions parameters?
Is this a known issue with recent betas?
Are there updated code samples for the current API?
Additional Context:
This affects both basic SystemLanguageModel usage AND custom adapter loading. The same dyld symbol errors occur when trying to create SystemLanguageModel(adapter: adapter) as well.
Any guidance on the correct API usage for current betas would be greatly appreciated. The documentation appears to be out of sync with the actual framework implementation.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I've spent way too long today trying to convert an object-detection TensorFlow 2 model to a Core ML object-detection model (with bounding boxes, labels, and probability scores).
The 'SSD MobileNet v2 320x320' is here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
To convert it, I've been following all sorts of posts and ChatGPT advice, including:
https://apple.github.io/coremltools/docs-guides/source/tensorflow-2.html#convert-a-tensorflow-concrete-function
https://developer.apple.com/videos/play/wwdc2020/10153/?time=402
I keep hitting the same errors though, mostly around:
NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got <ConcreteFunction signature_wrapper(input_tensor) at 0x366B87790>
I've had varying success, including missing output labels/predictions.
But I simply want to create the CoreML model with all the right inputs and outputs (including correct names) as detailed in the docs here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md
It goes without saying that I don't have much (any) experience with this stuff, including Python, so the whole thing's been a bit of a headache.
If anyone is able to help that would be great.
FWIW I'm not attached to any one specific model, but what I do need at minimum is a CoreML model that can detect objects (has to at least include lights and lamps) within a live video image, detecting where in the image the object is.
The simplest script I have looks like this:
import coremltools as ct
import tensorflow as tf

model = tf.saved_model.load("~/tf_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

mlmodel = ct.convert(
    concrete_func,
    source="tensorflow",
    inputs=[ct.TensorType(shape=(1, 320, 320, 3))]
)
mlmodel.save("YourModel.mlpackage", save_format="mlpackage")
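Separately from the conversion itself, once a detector is saved as an .mlpackage, the live-video requirement described above (finding where lights and lamps are in each frame) is usually driven through Vision. A hedged sketch, assuming the converted model exposes the standard object-detection outputs (confidence and coordinates) so Vision can return recognized-object observations:

import Vision
import CoreML

// Run the converted detector on one video frame (a CVPixelBuffer) and print
// the label and normalized bounding box of each detection.
func detectObjects(in pixelBuffer: CVPixelBuffer, model: MLModel) throws {
    let vnModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            let best = observation.labels.first   // highest-confidence label
            print(best?.identifier ?? "unknown",
                  best?.confidence ?? 0,
                  observation.boundingBox)        // normalized image coordinates
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}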
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test the app on a physical device, the model returns the same prediction 100% of the time. I even changed the names around and it still resulted in the same case. I have listed my labels in cases, and it's just stuck on the same case (case 1).
My code is below:
https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift
I couldn't add the code here as it was too long, so I hope the GitHub link is fine!
When calling NLTagger.requestAssets with some languages, it hangs indefinitely, both in the simulator and on a device. This happens consistently for some languages, like Greek. An example call is NLTagger.requestAssets(for: .greek, tagScheme: .lemma). Other languages, like French, return immediately. I captured some logs from Console and found what looks like repeated attempts to download the asset. I would expect the call to eventually terminate, either loading the asset or failing with an error.
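For reference, a minimal reproduction of the behavior described above; per the report, .french completes promptly while .greek never seems to call back:

import Foundation
import NaturalLanguage

for language in [NLLanguage.french, NLLanguage.greek] {
    NLTagger.requestAssets(for: language, tagScheme: .lemma) { result in
        // result is an NLTagger.AssetsResult (.available / .notAvailable / .error)
        print("\(language.rawValue): \(result)")
    }
}

// Keep the process alive so the completion handlers have a chance to fire.
RunLoop.main.run()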
When @Generable is applied to a Swift struct declared within another struct, and that nested struct is used as the type of one of the properties of another @Generable type, which is in turn used as the output format of a Foundation Models session, the Foundation Model can get stuck in a loop trying to create an infinitely nested response, until the context-window-exceeded error is triggered.
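A hedged sketch of the nesting described above, with hypothetical type names, for anyone trying to reproduce it:

import FoundationModels

// A @Generable struct declared inside another struct...
struct Outer {
    @Generable
    struct Detail {
        var note: String
    }
}

// ...used as the type of a property of the @Generable type that serves as the
// session's output format. Per the report above, using `Report` as the
// response format can trigger the looping behavior.
@Generable
struct Report {
    var title: String
    var detail: Outer.Detail
}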
I have filed feedback FB19987191 with a demo project. Is this expected behavior?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Can't import data in Create ML word tagging project
The training data is 100% correct, I guarantee it:
I mean, look, this one has one entry in it.
[
  {
    "tokens": [
      "a", "august", "gruters"
    ],
    "labels": [
      "BUILDER", "BUILDER", "BUILDER"
    ]
  }
]
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hi all, I am interested in unlocking unique applications with the new foundational models. I have a few questions regarding the availability of the following features:
Image Input: The update in June 2025 mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates); however, I can't seem to find any information about using images as the input/prompt for the foundational models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input into a multimodal on-device LLM (VLM) instead, for features like "Which player is holding the ball in the image?", etc. (image understanding).
Cloud Foundational Model - when will this be available?
Thanks!
Clement :)
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Vision
Machine Learning
Core ML
Apple Intelligence