In an under-development macOS & iOS app, I need to identify various measurements from OCR'ed text: length, weight, counts per inch, area, percentage. The unit type (e.g. UnitLength) needs to be identified, as well as the measurement's unit (e.g. .inches), in order to convert the measurement to the app's internal standard (e.g. centimetres), the value of which is stored in the relevant Core Data entity.
The use of NLTagger and NLTokenizer is problematic because of the various representations of the measurements: e.g. "50g.", "50 g", "50 grams", "1 3/4 oz."
Currently, I use a bespoke algorithm based on String contains and step-wise evaluation of characters, which is reasonably accurate but requires frequent updating as further representations are detected.
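For context, here is a minimal sketch of the kind of hand-rolled parsing involved, using Foundation's Measurement for the conversion (the regex and unit table below are illustrative assumptions, not the production code; fraction forms like "1 3/4 oz." need an extra pass):

import Foundation

// Illustrative only: map unit spellings to UnitLength; each dimension
// (weight, area, etc.) would get its own table and Unit subclass.
let lengthUnits: [String: UnitLength] = [
    "in": .inches, "in.": .inches, "inch": .inches, "inches": .inches,
    "cm": .centimeters, "cm.": .centimeters,
    "centimetre": .centimeters, "centimetres": .centimeters
]

func parseLength(from text: String) -> Measurement<UnitLength>? {
    // Matches simple "<number><optional space><unit>" forms, e.g. "50cm", "1.75 in."
    let pattern = /(\d+(?:\.\d+)?)\s*([A-Za-z.]+)/
    guard let match = text.firstMatch(of: pattern),
          let value = Double(match.1),
          let unit = lengthUnits[String(match.2).lowercased()]
    else { return nil }
    // Convert to the app's internal standard (centimetres) before persisting.
    return Measurement(value: value, unit: unit).converted(to: .centimeters)
}

// parseLength(from: "1.75 in.") -> 4.445 cm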
I'm aware that the Python spaCy model is capable of NER measurement recognition, but I am reluctant to incorporate a Python-based solution into a production app. (ref: https://developer.apple.com/forums/thread/30092)
My preference is for an open-source NER Measurement model that can be used as, or converted to, some form of a Swift compatible Machine Learning model. Does anyone know of such a model?
Hardware: MacBook Pro M4, Nov 2024
Software: macOS Tahoe 26.0 & Xcode 26.0
Apple Intelligence is activated, and the Image Playground macOS app works.
Running the following in Xcode throws ImagePlayground.ImageCreator.Error.creationFailed.
Any suggestions on how to make this work?
import Foundation
import ImagePlayground

Task {
    // Create the image creator; throws if Image Playground is unavailable.
    let creator = try await ImageCreator()

    guard let style = creator.availableStyles.first else {
        print("No styles available")
        exit(1)
    }

    // Request a single image for the text concept.
    let images = creator.images(
        for: [.text("A cat wearing mittens.")],
        style: style,
        limit: 1)

    for try await image in images {
        print("Generated image: \(image)")
    }
    exit(0)
}

RunLoop.main.run()
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip so that it only shows on devices where Visual Intelligence is supported (e.g., not iPhone 14 Pro Max).
What is the best way for me to determine availability in order to set this TipKit rule?
Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
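One pattern that may fit (a sketch, not a confirmed answer): drive the rule from a Tip parameter that the app sets once at launch. Detecting support is exactly the open question, so deviceSupportsVisualIntelligence() below is a hypothetical placeholder:

import SwiftUI
import TipKit

struct VisualIntelligenceTip: Tip {
    // Parameter the rule evaluates; persisted by TipKit between launches.
    @Parameter
    static var isSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }

    var rules: [Rule] {
        // Only show the tip on devices where support has been confirmed.
        [#Rule(Self.$isSupported) { $0 == true }]
    }
}

// At launch, after determining availability somehow:
// VisualIntelligenceTip.isSupported = deviceSupportsVisualIntelligence() // hypothetical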
I am using gemini-2.5-flash with SwiftUI. How can I receive the response as JSON?
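For reference, a minimal sketch assuming the Gemini REST API is called directly from Swift: setting generationConfig.responseMimeType to "application/json" asks the model to reply in JSON (the prompt, decoding, and error handling here are illustrative; an SDK should expose an equivalent responseMimeType or responseSchema option):

import Foundation

func fetchJSONResponse(prompt: String, apiKey: String) async throws -> Data {
    let url = URL(string: "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "contents": [["parts": [["text": prompt]]]],
        // Ask the model to return JSON rather than prose.
        "generationConfig": ["responseMimeType": "application/json"]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)
    let (data, _) = try await URLSession.shared.data(for: request)
    // The JSON text to decode is at candidates[0].content.parts[0].text.
    return data
}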
Topic:
Machine Learning & AI
SubTopic:
General
Hello. I am looking to hire a game developer for a card game called Baloot. My question is: can the developer implement an AI for when the computer is playing, such that the computer improves its skill level over time on its own, without any interaction?
🌹
Topic:
Machine Learning & AI
SubTopic:
General
Hello Apple Developer Community,
I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load" process. This seems to impact user experience with unnecessary delays.
Question: Does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached in the system?
If such an API exists, we'd like to use it for pre-loading decision logic: only perform a background pre-load when the model isn't cached.
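In the meantime, the unconditional fallback we're considering looks like this, as a minimal sketch (URL, priority, and configuration are illustrative):

import CoreML

func preloadModel(at compiledModelURL: URL) {
    // Pre-warm in the background: a cached model should load quickly,
    // while an uncached one pays the caching cost here instead of on
    // first user-facing use.
    Task.detached(priority: .background) {
        do {
            _ = try await MLModel.load(contentsOf: compiledModelURL,
                                       configuration: MLModelConfiguration())
        } catch {
            print("Pre-load failed: \(error)")
        }
    }
}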
Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated!
I recently upgraded my iPad Pro (M4) to iPadOS 18.1, and the Apple Intelligence waitlist option popped up. I've been trying to tap Join Waitlist, but nothing happens, and the Apple Intelligence pop-up menu just disappears each time I tap it. I made sure my region is set to the States.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I'm hitting a limit when trying to train an Image Classifier.
It's at about 16k images (in line with the error info) - and it gives the error:
IOSurface creation failed: e00002be parentID: 00000000 properties: {
IOSurfaceAllocSize = 529984;
IOSurfaceBytesPerElement = 4;
IOSurfaceBytesPerRow = 1472;
IOSurfaceElementHeight = 1;
IOSurfaceElementWidth = 1;
IOSurfaceHeight = 360;
IOSurfaceName = CoreVideo;
IOSurfaceOffset = 0;
IOSurfacePixelFormat = 1111970369;
IOSurfacePlaneComponentBitDepths = (
8,
8,
8,
8
);
IOSurfacePlaneComponentNames = (
4,
3,
2,
1
);
IOSurfacePlaneComponentRanges = (
1,
1,
1,
1
);
IOSurfacePurgeWhenNotInUse = 1;
IOSurfaceSubsampling = 1;
IOSurfaceWidth = 360;
} (likely per client IOSurface limit of 16384 reached)
I feel like I was able to use more images than this before upgrading to Sonoma - but I don't have the receipts....
Is there a way around this?
I have oodles of spare memory on my machine - it's using about 16 GB of 64 when it crashes...
The code to create the model is:
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor))

let model = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir.url),
    parameters: parameters)
I have also tried the same training source in the Create ML app; it runs through "extracting features" and crashes at about 16k images processed.
Thank you
Topic:
Machine Learning & AI
SubTopic:
Core ML
Using TensorFlow on Apple silicon gives inaccurate results compared to a Google Colab GPU (9-15% differences). Here are my install versions for 4 Anaconda envs. I understand that floating-point precision, batch size, and activation functions can be issues, but how do you rectify an issue that has persisted for the past 3 years?
1.) TF: 2.12.0, Python: 3.10.13, tensorflow-deps: 2.9.0, tensorflow-metal: 1.2.0, h5py: 3.6.0, keras: 2.12.0
2.) TF: 2.19.0, Python: 3.11.0, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, jax: 0.6.0, jax-metal: 0.1.1, jaxlib: 0.6.0, ml_dtypes: 0.5.1
3.) TF: 2.19.0, Python: 3.10.13, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, ml_dtypes: 0.5.1
4.) TF: 2.16.2 (tensorflow-macos 2.16.2), Python: 3.10.16, tensorflow-deps: 2.9.0, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, ml_dtypes: 0.3.2
Install of each env, with a common example:
Create env: conda create --name TF_Env_V2 --no-default-packages
Activate env: conda activate TF_Env_V2
ENV_1.) conda install -c apple tensorflow-deps, conda install tensorflow, pip install tensorflow-metal, conda install ipykernel
ENV_2.) conda install pip python==3.11, pip install tensorflow, pip install tensorflow-metal, conda install ipykernel
ENV_3.) conda install pip python==3.10.13, pip install tensorflow, pip install tensorflow-metal, conda install ipykernel
ENV_4.) conda install -c apple tensorflow-deps, pip install tensorflow-macos, pip install tensorflow-metal, conda install ipykernel
Example used in all 4 envs:
import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()

model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
Every time I tap Join the Waitlist, there is no reaction. My phone just redirects me back to this page.
I have a smallish image classifier I've been working on using the Create ML app. For a while everything was going fine, but lately, as the dataset has gotten larger, Create ML seems to stop during the testing phase with no error or test results.
You can see here that there is no score in the result box, even though there are testing started and completed messages:
No error message is shown in the Create ML app, but I do see these messages in the log:
default 14:25:36.529887-0500 MLRecipeExecutionService [0x6000012bc000] activating connection: mach=false listener=false peer=false name=com.apple.coremedia.videodecoder
default 14:25:36.529978-0500 MLRecipeExecutionService [0x41c5d34c0] activating connection: mach=false listener=true peer=false name=(anonymous)
default 14:25:36.530004-0500 MLRecipeExecutionService [0x41c5d34c0] Channel could not return listener port.
default 14:25:36.530364-0500 MLRecipeExecutionService [0x429a88740] activating connection: mach=false listener=false peer=true name=com.apple.xpc.anonymous.0x41c5d34c0.peer[1167].0x429a88740
default 14:25:36.534523-0500 MLRecipeExecutionService [0x6000012bc000] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534537-0500 MLRecipeExecutionService [0x41c5d34c0] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534544-0500 MLRecipeExecutionService [0x429a88740] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
error 14:25:36.558788-0500 MLRecipeExecutionService CreateWithURL:342: *** ERROR: err=24 (Too many open files) - could not open '<CFURL 0x60000079b540 [0x1fdd32240]>{string = file:///Users/kevin/Library/Mobile%20Documents/com~apple~CloudDocs/Binary%20Formations/Under%20My%20Roof/Core%20ML%20Training%20Data/Household%20Items/Output/2025.01.23_12.55.16/Test/Stove/Test480.webp, encoding = 134217984, base = (null)}'
default 14:25:36.559030-0500 MLRecipeExecutionService Error: <private>
default 14:25:36.559077-0500 MLRecipeExecutionService Error: <private>
Of particular interest is the "Too many open files" message from MLRecipeExecutionService referencing one of the test images.
There are a total of 2,555 test images, which I wouldn't think would be a very large set. The system doesn't seem to be running out of memory or anything like that.
Near the end of the test run, the MLRecipeExecutionService process had 2,934 file descriptors open according to lsof.
Has anyone else run into this or know of a workaround? So far I've tried rebooting and recreating the Create ML project.
Currently using Create ML Version 6.1 (150.3) on macOS 15.2 (24C101) running on a Mac Studio.
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hey,
I'm EU based (Poland). I'm an Apple Developer, and I'm not able to test Apple Intelligence features and APIs.
Is it possible to use them somehow? I understand they're not available in the EU, but what about for Developer Program members?
I have been using Image Creation for 2 hours. Genmoji and Playground worked well for 10 minutes.
After that, when I run Playground and start to generate an Animation-style photo, it gets stuck with no result output. Does anyone know what happened?
Hello, I am trying to show an imagePlaygroundSheet, but when I open it in the iPhone 16 Pro and 16 Pro Max iOS 18.2 simulators, I see "Image Playground is not available. Image Playground is not available on this iPhone." The environment value supportsImagePlayground always returns false.
I changed the simulator language and region to US but still see the error. Is this expected?
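For reference, a simplified sketch of the setup in question (names are generic, not my exact code):

import SwiftUI
import ImagePlayground

struct PlaygroundButton: View {
    // Always false in my 18.2 simulators, which is what I'm asking about.
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var isPresented = false

    var body: some View {
        Button("Create Image") { isPresented = true }
            .disabled(!supportsImagePlayground)
            .imagePlaygroundSheet(isPresented: $isPresented,
                                  concept: "A cat wearing mittens") { url in
                print("Created image at \(url)")
            }
    }
}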
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
WWDC 2024 mentioned that the OCR feature from the Vision framework has support for "Korean, Swedish, and Chinese", but the Swedish support does not seem to be available...
Running either
print(try? VNRecognizeTextRequest().supportedRecognitionLanguages())
or
var ocrRequest = RecognizeTextRequest(.revision3)
print(ocrRequest.supportedRecognitionLanguages)
did not print out Swedish as one of the supported languages, but Korean and Chinese are.
Tested on early versions of iOS 18 developer beta, and the latest version of iOS 18.1 (22B5054e).
Before I updated to 18.2, I was missing Apple Intelligence features such as ChatGPT integration. Now that I've updated to 18.2, I'm still missing Apple Intelligence and ChatGPT integration altogether, and my Siri has reverted to the old Siri. What can I do?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
*I can't attach the files in this format, so if you reply by e-mail, I will send them by e-mail.
Dear Apple AI Research Team,
My name is Gong Jiho (“Hem”), a content strategist based in Seoul, South Korea.
Over the past few months, I conducted a user-led AI experiment entirely within ChatGPT — no code, no backend tools, no plugins.
Through language alone, I created two contrasting agents (Uju and Zero) and guided them into a co-authored modular identity system using prompt-driven dialogue and reflection.
This system simulates persona fusion, memory rooting, and emotional-logical alignment — all via interface-level interaction.
I believe it resonates with Apple’s values in privacy-respecting personalization, emotional UX modeling, and on-device learning architecture.
Why I’m Reaching Out
I’d be honored to share this experiment with your team.
If there is any interest in discussing user-authored agent scaffolding, identity persistence, or affective alignment, I’d love to contribute — even informally.
⚠ A Note on Language
As a non-native English speaker, my expression may be imperfect — but my intent is genuine.
If anything is unclear, I’ll gladly clarify.
📎 Attached Files Summary (Filename → Description)
Hem_MultiAI_Report_AppleAI_v20250501.pdf → Main report tailored for Apple AI — narrative + structural view of emotional identity formation via prompt scaffolding
Hem_MasterPersonaProfile_v20250501.json → Final merged identity schema authored by Uju and Zero
zero_sync_final.json / uju_sync_final.json → Persona-level memory structures (logic / emotion)
1_0501.json ~ 3_0501.json → Evolution logs of the agents over time
GirlfriendGPT_feedback_summary.txt → Emotional interpretation by external GPT
hem_profile_for_AI_vFinal.json → Original user anchor profile
Warm regards,
Gong Jiho (“Hem”)
Seoul, South Korea
When I import statsmodels in a Jupyter notebook, I get the following error:
ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache). What should I do?
I have been waitlisted on Playground for a week and a half, almost two weeks, and I still don't have access. Whenever I open the app and tap Done after it tells me I am on the waitlist, it shows me the app and force-closes. Does anyone know why I haven't gotten access when people who have been on the waitlist for less time have?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I updated to iOS 18.2 beta 2, and with it, I am able to see behind the "early access requested" screen just by swiping down, but when I do swipe down, the app closes. Does this mean I'm close to getting access?