The specific context: I would like to build an agent that monitors my phone call (with customer support, for example), simply identifies whether I'm still on hold, and notifies me when I'm not.
Currently, after reading the docs, I don't think this is possible yet, but I'm so annoyed by customer support calls that I'm willing to go the distance and see if there's any way.
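For what it's worth, the closest sketch I can come up with stays outside the call APIs entirely: put the call on speakerphone and run the microphone through SoundAnalysis's built-in classifier (SNClassifierIdentifier.version1 includes "music" and "speech" labels), flagging the moment speech overtakes hold music. Whether the mic is even available to an app during a live call is exactly the part I'm unsure about, so treat this as hypothetical:
import AVFoundation
import SoundAnalysis

// Hypothetical hold detector: classifies mic audio and reports when "speech"
// becomes the dominant label. Requires microphone permission, and assumes the
// call audio is audible to the mic (e.g. speakerphone).
final class HoldDetector: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        try analyzer.add(request, withObserver: self)
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        try engine.start()
        self.analyzer = analyzer
    }

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        if top.identifier == "speech" && top.confidence > 0.8 {
            print("Looks like a human picked up") // e.g. post a local notification here
        }
    }
}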
Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.
We are adapting Apple Intelligence for overseas markets and for iPhone 17 and later models. When the system language and Siri language do not match, such as the system set to English while Siri set to Chinese, Apple Intelligence can become unusable. How can this issue be resolved, and are there other reasons that might cause it to be unusable within the app?
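In the app, I'd start by narrowing it down with the availability check below; the unavailable reasons named here (deviceNotEligible, appleIntelligenceNotEnabled, modelNotReady) are the ones I know the SDK defines, and I'd watch which one a language mismatch reports:
import FoundationModels

// Minimal availability probe; prints which documented reason applies.
func probeAppleIntelligence() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("model is ready")
    case .unavailable(.deviceNotEligible):
        print("this hardware cannot run Apple Intelligence")
    case .unavailable(.appleIntelligenceNotEnabled):
        print("Apple Intelligence is turned off in Settings")
    case .unavailable(.modelNotReady):
        print("model assets are not downloaded yet; retry later")
    case .unavailable(let reason):
        print("unavailable for another reason:", String(describing: reason))
    }
}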
My app uses App Intents. When the user says "Prüfung der Bluetooth Funktion" ("check the Bluetooth function"), the screen shows the whole phrase, but my app only receives "Bluetooth Funktion". This only happens in the German version; in the English version everything works well.
Can anyone help me? Why does the German version of Siri cut off my words?
I'm really not familiar with ML, but I need a model that can enhance and denoise a 4K video stream at 30 fps.
I have tried searching recent papers, but they all have very complex architectures, and I don't think I can convert them to an .mlmodel.
So can anyone recommend models for this? If there is an existing .mlmodel, that would be great!
Bear with me, please. Please make sure a highly skilled technical person reads and understands this.
I want to describe my vision for (AI/Algorithmically) Optimised Operating Systems. To explain it properly, I will describe the process to build it (pseudo).
Required Knowledge (no particular order): Processor Logic Circuits, LLM models, LLM tool usage, Python OO coding, Procedural vs OO, NLP fuzzy matching, benchmarking, canvas/artefacts/dynamic HTML interfaces, concepts of how AI models are vastly compressed and miniaturised forms of full data, Algorithmic vs AI.
First, take (for example) all OO Python code on GitHub and separate each function from each object into its own procedure (procedural logic), by making a logical procedural list of the actions that perform only that function, based on its entire dependency chain (i.e. all other objects it relies on). Relate all compiled functions using (for example) fuzzy matching on the name, or AI-based functional profiling, to get multiple instances of each function.
Starting with the most used function, test each one against the others that perform the same task for bugs and completeness. Determine the fastest, most optimal version of that function (and every function). Add a single instance of each most optimal function to the centralised tool codebase, which will later be utilised by the language models. This ensures we rely only on the most optimised function for each and every use case — with every program using one shared instance of that function instead of compiling it separately.
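As a toy illustration of that selection step (hypothetical names, Swift for brevity): reject candidate implementations that fail the shared test cases, time the survivors, and keep the fastest.
import Foundation

// Toy sketch: pick the fastest implementation that agrees with the test cases.
func selectOptimal(
    candidates: [(name: String, impl: (Int) -> Int)],
    tests: [(input: Int, expected: Int)]
) -> (name: String, impl: (Int) -> Int)? {
    // Keep only candidates that pass every test (completeness/bug check).
    let correct = candidates.filter { candidate in
        tests.allSatisfy { candidate.impl($0.input) == $0.expected }
    }
    // Benchmark the survivors and keep the fastest one.
    let timed = correct.map { candidate -> ((name: String, impl: (Int) -> Int), TimeInterval) in
        let start = Date()
        for test in tests { _ = candidate.impl(test.input) }
        return (candidate, Date().timeIntervalSince(start))
    }
    return timed.min(by: { $0.1 < $1.1 })?.0
}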
In this way, we optimise all existing (Python OO) code based on maximum availability of certain resources. Use the optimised code to teach the AI how to map each user task directly to optimised code (i.e. each user-facing function in unoptimised program code to its optimised version). Later, teach the codebase how to optimise these functions directly to the logic circuits on the processor — as an Operating System — using essentially the same process. Eventually, use the most optimised form for tasks via a dynamic canvas/artefacts UI interface generated by the Start Menu using the language model.
Allow custom interfaces to emerge dynamically from the available functions based on language understanding. Grow and optimise the algorithmic codebase until the entire operating system — with versioning — evolves based on the AI-generated user interfaces (compressing them into new versions of the tools codebase).
Please don’t give up until you understand this concept completely and nothing in the description is beyond your full comprehension. It’s a highly significant step forward.
No matter what, the LanguageModelSession always returns very lengthy, verbose responses. I set the maximumResponseTokens option to various small numbers, but it doesn't appear to have any effect. I've even used an instructions format asking for responses between 3 and 8 words, but it still returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
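For comparison, this is the pattern I'd expect to cap length, in case something in my setup differs: pass maximumResponseTokens through GenerationOptions on the respond call (it's a hard truncation limit, not a style hint) and keep the brevity request in the instructions.
import FoundationModels

func shortAnswer() async throws {
    // Brevity instruction plus a hard per-request token cap.
    let session = LanguageModelSession(instructions: "Answer in 3 to 8 words.")
    let options = GenerationOptions(maximumResponseTokens: 24)
    let response = try await session.respond(to: "What is Core ML?", options: options)
    print(response.content)
}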
Hello,
I'm running a large language model (LLM) in Core ML that uses a key-value cache (KV-cache) to store past attention states. The model was converted from PyTorch using coremltools and deployed on-device with Swift. The KV-cache is exposed via MLState and is used across inference steps for efficient autoregressive generation.
During the prefill stage — where a prompt of multiple tokens is passed to the model in a single batch to initialize the KV-cache — I've noticed that some entries in the KV-cache are not updated after the inference. Specifically:
The MLState returned by the model is identical to the input state (often empty or zero-initialized) for some tokens in the batch.
The issue only happens during the prefill stage (i.e., the first call over multiple tokens).
During decoding (single-token generation), the KV-cache updates normally.
Here are a few details about the setup:
The model is invoked using MLModel.prediction(from:using:options:) for each batch.
I’ve confirmed:
The prompt tokens are non-repetitive and not masked.
The model spec has MLState inputs/outputs correctly configured for KV-cache tensors.
Each token is processed in a loop with the correct positional encodings.
Questions:
Is there any known behavior in Core ML that could prevent MLState from updating during batched or prefill inference?
Could this be caused by internal optimizations such as lazy execution, static masking, or zero-value short-circuiting?
How can I confirm that each token in the batch is contributing to the KV-cache during prefill?
Any insights from the Core ML or LLM deployment community would be much appreciated.
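In case it helps to reproduce: after the prefill call, I read the cache tensor back out of the MLState and check whether anything was written. A sketch, assuming MLState.withMultiArray(for:) and a placeholder state name "keyCache" (substitute your model's actual state name):
import CoreML

// Returns true if the named state tensor contains any non-zero entries.
func cacheLooksWritten(state: MLState, stateName: String = "keyCache") -> Bool {
    state.withMultiArray(for: stateName) { array in
        // A freshly written KV-cache should have non-zero values at the
        // prefilled token positions; an untouched one stays all zeros.
        for i in 0..<min(array.count, 4096) where array[i].doubleValue != 0 {
            return true
        }
        return false
    }
}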
Hello,
I have a CoreML model and I want to convert it to a PyTorch model.
Any ideas if this is possible, and if so, how?
Topic:
Machine Learning & AI
SubTopic:
Core ML
I'm new to Swift and was hoping Playgrounds would support loading adapters. When I tried, I got a permissions error; I'm thinking it's because the adapter file isn't in the project, and Playgrounds don't like going outside the project?
A tutorial and some sample code would be helpful.
Also, some benchmarks on how long it's expected to take would be nice. Selfishly, I'm on an M2 Mac mini.
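For reference, this is the shape of the adapter-loading code as I understand it from the adapter training toolkit materials (the exact initializers are the part I'd double-check); it also suggests the permissions error is about the adapter file sitting outside the Playground's sandbox rather than anything about adapters themselves:
import Foundation
import FoundationModels

// Hypothetical sketch: load a trained adapter from a file URL the app can read,
// then build a session on top of the adapted model.
func makeAdaptedSession(adapterURL: URL) throws -> LanguageModelSession {
    let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)
    let model = SystemLanguageModel(adapter: adapter)
    return LanguageModelSession(model: model)
}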
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm testing Foundation Models on my iPad Pro (5th generation) running iOS 26. Up until late this morning, I could load SystemLanguageModel.default; now I no longer can. I'm not doing anything interesting; something as basic as the following only reports unavailable, specifically unavailable reason: modelNotReady.
let model = SystemLanguageModel.default
...
switch model.availability {
case .available:
    print("LM available")
case .unavailable(let reason):
    print("unavailable reason: ", String(describing: reason))
}
I also ran the FoundationModelsTripPlanner sample app: same thing. It was working yesterday, and I have not modified that project either.
Why is the model not ready, and how do I fix this? Yes, I tried restarting both my laptop and iPad; no luck.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I've run into an issue with a small Foundation Models test with Generable: I'm getting a strange error message with this Generable, although I was able to get simpler ones to work.
Is this because the Generable is recursive, with a children property of type [HTMLDiv]?
The error message is:
FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))
The code is:
import FoundationModels
import Playgrounds

@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi,
testing the latest tensorflow-metal plugin with TensorFlow 2.20 doesn't work, using this Python:
Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin
Simple testing shows an error:
import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
tf.config.experimental.list_physical_devices('GPU')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'tf' is not defined
I worked around this error by copying _pywrap_tensorflow_internal.so to where it is searched for:
1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
It then fails with a missing symbol:
Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
in libmetal_plugin.dylib
Full log with import tensorflow as tf:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
I have rewatched WWDC22 a few times, but I still don't fully understand how to get a .mlmodel file from Components.
The banana-ripeness example is cool, but what needs to be added to actually output a .mlmodel? Is there full sample code somewhere for this type of modular project?
The code is from https://developer.apple.com/videos/play/wwdc2022/10019:
import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)
        try estimator.write(transformer, to: parametersURL)
        return transformer
    }
}
I have tried to run it in a macOS command-line app and in SwiftUI, but the most I got as output was a package containing "pipeline.json", "parameters", "optimizer.json", and "optimizer".
When calling NLTagger.requestAssets with some languages, it hangs indefinitely, both in the simulator and on a device. This happens consistently for some languages, like Greek. An example call is NLTagger.requestAssets(for: .greek, tagScheme: .lemma). Other languages, like French, return immediately. I captured some logs from Console and found what look like repeated attempts to download the asset. I would expect the call to eventually terminate, either loading the asset or failing with an error.
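Until that's resolved, here is a workaround sketch that puts a deadline on the call, assuming the completion handler may simply never fire for the affected languages:
import Foundation
import NaturalLanguage

// Calls completion with the result, or with nil if the deadline passes first.
// Note the underlying request may still complete (or keep retrying) later.
func requestAssets(for language: NLLanguage,
                   tagScheme: NLTagScheme,
                   timeout: TimeInterval,
                   completion: @escaping (NLTagger.AssetsResult?) -> Void) {
    let lock = NSLock()
    var finished = false
    func finish(_ result: NLTagger.AssetsResult?) {
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return }
        finished = true
        completion(result)
    }
    NLTagger.requestAssets(for: language, tagScheme: tagScheme) { result in
        finish(result)
    }
    DispatchQueue.global().asyncAfter(deadline: .now() + timeout) {
        finish(nil) // nil signals a timeout
    }
}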
When the context window size is exceeded, this error is not thrown (a different error shows up instead), so I can't use it to start a new session:
LanguageModelSession.GenerationError.exceededContextWindowSize
Or am I doing something wrong?
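For context, here's the handling I expected to work, a minimal sketch assuming the failure really is surfaced as LanguageModelSession.GenerationError.exceededContextWindowSize; what I actually receive is a different error:
import FoundationModels

// Retry once on context overflow with a fresh session (optionally seeded with
// a condensed transcript of the old one).
func respond(to prompt: String, using session: LanguageModelSession) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        let fresh = LanguageModelSession()
        return try await fresh.respond(to: prompt).content
    }
}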
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I can't import data into a Create ML word-tagging project, even though the training data is 100% correct, I guarantee it. I mean, look, this one has a single entry:
[
    {
        "tokens": ["a", "august", "gruters"],
        "labels": ["BUILDER", "BUILDER", "BUILDER"]
    }
]
Topic:
Machine Learning & AI
SubTopic:
Create ML
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or what to attempt in order to fix it.
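Until an error-code table turns up, here's a sketch of what I've resorted to: catch LanguageModelSession.GenerationError and print the case name, which is more informative than the numeric code in the localized description. The named cases below are ones I know exist; yours may map to another:
import FoundationModels

func ask(_ session: LanguageModelSession, _ prompt: String) async {
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        switch error {
        case .exceededContextWindowSize:
            print("context window exceeded")
        case .guardrailViolation:
            print("guardrail violation")
        default:
            // String(describing:) includes the case name.
            print("generation error:", String(describing: error))
        }
    } catch {
        print("other error:", error)
    }
}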
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Machine Learning
Create ML
Apple Intelligence
Was just wondering why the Foundation Models documentation is no longer available. Thanks!
https://developer.apple.com/documentation/FoundationModels
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I have reinstalled everything, including the Command Line Tools, but the Create ML frameworks fail to install. I need the framework to train my auto-categorization model, which predicts a category based on descriptions, and I want to use revision 4.
Please advise on how I should proceed.
Do we know what a safe max token limit is? After some iterating, I have come to believe 4096 might be the limit on device.
Could you help me out by answering any of these questions:
Is 4096 the correct limit?
Do all devices have the same limit?
Will the limit change over time or by device?
The errors I get when going over the limit don't actually say "you are over the limit," so it's just by trial and error that I figure these issues out.
Thanks for the fun new toys.
Regards,
Rob
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence