I know that I can do face detection with Core ML, but I'm wondering whether there is any way to identify the same person across two images, the way the Photos app does.
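While waiting for a definitive answer: Vision has no public person-identification API that I know of, but here is a rough sketch of one possible stand-in (not what Photos actually does; the function names are my own placeholders). It detects a face rectangle in each image, crops it, and compares Vision feature prints of the two crops; a smaller distance suggests more similar faces.

import Vision
import CoreGraphics

// Hedged sketch: detect a face in each image, crop it, and compare Vision
// feature prints of the crops. This measures visual similarity of the crops,
// not identity, so treat it as an approximation only.
func faceFeaturePrint(in image: CGImage) throws -> VNFeaturePrintObservation? {
    // 1. Find the first face rectangle.
    let faceRequest = VNDetectFaceRectanglesRequest()
    try VNImageRequestHandler(cgImage: image).perform([faceRequest])
    guard let face = faceRequest.results?.first else { return nil }

    // 2. Crop the face region (the bounding box is normalized with a bottom-left origin).
    var rect = VNImageRectForNormalizedRect(face.boundingBox, image.width, image.height)
    rect.origin.y = CGFloat(image.height) - rect.origin.y - rect.height
    guard let faceCrop = image.cropping(to: rect) else { return nil }

    // 3. Compute a feature print for the cropped face.
    let printRequest = VNGenerateImageFeaturePrintRequest()
    try VNImageRequestHandler(cgImage: faceCrop).perform([printRequest])
    return printRequest.results?.first
}

func faceDistance(between first: CGImage, and second: CGImage) throws -> Float? {
    guard let a = try faceFeaturePrint(in: first),
          let b = try faceFeaturePrint(in: second) else { return nil }
    var distance: Float = 0
    try a.computeDistance(&distance, to: b)
    return distance
}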
for (int i = 0; i < 1000; i++) {
    double st_tmp = CFAbsoluteTimeGetCurrent();
    retBuffer = [self.enhancer enhance:pixelBuffer error:&error];
    double et_tmp = CFAbsoluteTimeGetCurrent();
    NSLog(@"[enhance once] %f ms ", (et_tmp - st_tmp) * 1000);
}
When I run a CoreML model using the above code, I notice that the runtime gradually decreases at the beginning.
output:
[enhance once] 14.965057 ms
[enhance once] 12.727022 ms
[enhance once] 12.818098 ms
[enhance once] 11.829972 ms
[enhance once] 11.461020 ms
[enhance once] 10.949016 ms
[enhance once] 10.712981 ms
[enhance once] 10.367990 ms
[enhance once] 10.077000 ms
[enhance once] 9.699941 ms
[enhance once] 9.370089 ms
[enhance once] 8.634090 ms
[enhance once] 7.659078 ms
[enhance once] 7.061005 ms
[enhance once] 6.729007 ms
[enhance once] 6.603003 ms
[enhance once] 6.427050 ms
[enhance once] 6.376028 ms
[enhance once] 6.509066 ms
[enhance once] 6.452084 ms
[enhance once] 6.549001 ms
[enhance once] 6.616950 ms
[enhance once] 6.471038 ms
[enhance once] 6.462932 ms
[enhance once] 6.443977 ms
[enhance once] 6.683946 ms
[enhance once] 6.538987 ms
[enhance once] 6.628990 ms
...
In most deep learning inference frameworks there is a warmup process, but typically only the first inference is slower. Why does Core ML's runtime keep decreasing over the first dozen or so iterations? Is there a way to confine the extra cost to the first inference, keeping the rest consistent?
I use the CoreML model in the (void)display_pixels:(IJKOverlay *)overlay function.
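A common workaround in other frameworks is to pay the warmup cost explicitly, right after loading the model, and I'm assuming the gradual speedup here comes from caches and scheduling settling in the same way. A minimal Swift sketch of that idea is below; the iteration count is arbitrary, and an image-input model would need a dummy CVPixelBuffer instead of an MLMultiArray.

import CoreML

// Hedged sketch: run a few throwaway predictions right after loading the model
// so that the real inferences start at steady-state latency. The dummy input is
// built from the model's first multi-array input description.
func warmUp(model: MLModel, iterations: Int = 10) throws {
    guard let (name, description) = model.modelDescription.inputDescriptionsByName.first,
          let constraint = description.multiArrayConstraint else {
        return // image inputs would need a dummy CVPixelBuffer instead
    }
    let dummy = try MLMultiArray(shape: constraint.shape, dataType: constraint.dataType)
    let input = try MLDictionaryFeatureProvider(dictionary: [name: dummy])
    for _ in 0..<iterations {
        _ = try model.prediction(from: input)
    }
}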
Hi, I have been noticing some strange issues with using Core ML models in my app. I am using the whisper.cpp implementation, which has a Core ML option that speeds up transcription compared with Metal.
However, every time I use it, the app size shown in iPhone Settings -> General -> Storage increases, specifically the "Documents and Data" part; the bundle size stays consistent. The app's size seems to increase by the same size as the Core ML model, and after a few reloads it can grow to over 3-4 GB!
I thought that maybe the Core ML model (which is in the bundle) is being saved to a file, but I can't see where. I have tried using Instruments and Xcode, plus lots of printing out of the cache and temp directories, deleting the caches, etc., but with no effect.
I have downloaded the iPhone app's container from Xcode and inspected it. There are some files stored in the cache, but only a few KB; even though the value in Settings -> Storage shows a few GB, the container is only a few MB.
Please can someone help or give me some guidance on how to figure out why "Documents and Data" keeps increasing? Where could this storage be that is not in the container downloaded from Xcode?
This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both show the same behaviour when using Core ML.
Thanks in advance for any help; I am totally baffled by this behaviour.
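In case it helps narrow things down, here is a small Swift sketch (the choice of directories is just an assumption about where the space might be going) that totals the file sizes under the sandbox's Documents, Caches, Application Support, and tmp folders, so it can be printed before and after a transcription run to see which folder actually grows:

import Foundation

// Hedged debugging sketch: sum the sizes of regular files under a directory tree.
func directorySize(_ url: URL) -> Int64 {
    let keys: [URLResourceKey] = [.totalFileAllocatedSizeKey, .isRegularFileKey]
    guard let enumerator = FileManager.default.enumerator(at: url, includingPropertiesForKeys: keys) else {
        return 0
    }
    var total: Int64 = 0
    for case let fileURL as URL in enumerator {
        guard let values = try? fileURL.resourceValues(forKeys: Set(keys)), values.isRegularFile == true else {
            continue
        }
        total += Int64(values.totalFileAllocatedSize ?? 0)
    }
    return total
}

// Print the size of each candidate directory; call this before and after a run.
func logSandboxSizes() {
    let fm = FileManager.default
    let candidates: [(String, URL?)] = [
        ("Documents", fm.urls(for: .documentDirectory, in: .userDomainMask).first),
        ("Caches", fm.urls(for: .cachesDirectory, in: .userDomainMask).first),
        ("Application Support", fm.urls(for: .applicationSupportDirectory, in: .userDomainMask).first),
        ("tmp", fm.temporaryDirectory),
    ]
    for (label, url) in candidates {
        guard let url else { continue }
        print(label, ByteCountFormatter.string(fromByteCount: directorySize(url), countStyle: .file))
    }
}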
I was watching the WWDC20 session on Model Deployment, but I found that there's no existing documentation backing up this session.
Is the model deployment dashboard still available in 2024?
I followed the instructions at
https://developer.apple.com/metal/jax/
and got
Successfully installed importlib-metadata-7.1.0 jax-0.4.28 jax-metal-0.0.7 jaxlib-0.4.28 opt-einsum-3.3.0 scipy-1.13.0 six-1.16.0 zipp-3.18.2
but the test failed:
python -c 'import jax; print(jax.numpy.arange(10))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/__init__.py", line 37, in <module>
    import jax.core as _core
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/core.py", line 18, in <module>
    from jax._src.core import (
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/core.py", line 39, in <module>
    from jax._src import dtypes
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/dtypes.py", line 33, in <module>
    from jax._src import config
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/config.py", line 27, in <module>
    from jax._src import lib
  File "/Users/erivas/jax-metal/lib/python3.9/site-packages/jax/_src/lib/__init__.py", line 84, in <module>
    cpu_feature_guard.check_cpu_features()
RuntimeError: This version of jaxlib was built using AVX instructions, which your CPU and/or operating system do not support. You may be able work around this issue by building jaxlib from source.
Hi
I have only recently started working on ML on my Mac M1 Pro; previously I was working on a Windows platform. I am having difficulties getting my machine set up right so that it's ready for the super fast training I was hoping for when I got it. Please help me with this and let me know if and where I am going wrong.
I tried training a custom dataset using the YOLOv8 model, and I want to train for 100 epochs. The same dataset and hyperparameters take about 2.5 hours on a T4 GPU on Google Colab, whereas I was only at around 60 epochs after 24 hours on my M1 Pro.
I have Homebrew, Miniconda, and the PyTorch nightly for Mac installed, and I set the device to mps when training the YOLO model. This feels really slow. What should I be doing differently?
Thank you
Lakshmi
Hi, this is the third time I'm trying to post this on the forum; the Apple moderators keep ignoring it.
I'm a deep learning expert with a specialization in image processing.
I want to know why I have hundreds of AI models on my Mac that are indexing everything on my computer while it is idle, using programs like neuralhash that I can't find any information about.
I can understand if they are being used to enhance the user experience in Spotlight, Siri, Photos, and other applications, but I couldn't find the necessary information on the web.
Usually, (spyware) software like this uses such models to classify files in an X/Y coordinate system. This feels like a more advanced version of Stuxnet.
find / -type f -name "*.weights" > ai_models.txt
find / -type f -name "*labels*.txt" > ai_model_labels.txt
Some of the classes from the files:
file_name: SCL_v0.3.1_9c7zcipfrc_558001-labels-v3.txt
document_boarding_pass
document_check_or_checkbook
document_currency_or_bill
document_driving_license
document_office_badge
document_passport
document_receipt
document_social_security_number
hier_curation
hier_document
hier_negative
curation_meme
file_name: SceneNet5_detection_labels-v8d.txt
CVML_UNKNOWN_999999
aircraft
automobile
bicycle
bird
bottle
bus
canine
consumer_electronics
feline
fruit
furniture
headgear
kite
fish
computer_monitor
motorcycle
musical_instrument
document
people
food
sign
watersport
train
ungulates
watercraft
flower
appliance
sports_equipment
tool
I made a model using PyTorch and then converted it into an mlmodel file. Next, I downloaded the sample at https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture, which worked! But when I swapped in the model I made, the camera worked, yet no predictions were shown. Please help!
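Not an official answer, but a sketch of how the sample's request could be adapted to a custom model while logging whatever observations come back (MyModel stands in for your generated model class). The live-capture sample only draws VNRecognizedObjectObservation results, so a converted model that lacks the detection/NMS decoding stage, or that returns only low-confidence boxes, will look like "no predictions" even though the request succeeds.

import Vision
import CoreML

// Hedged sketch: wrap a custom Core ML model for the live-capture sample and
// log every observation, to see whether detections are missing, low-confidence,
// or coming back as an unexpected observation type. "MyModel" is a placeholder.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try MyModel(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        for observation in request.results ?? [] {
            if let detection = observation as? VNRecognizedObjectObservation {
                print(detection.labels.first?.identifier ?? "?", detection.confidence, detection.boundingBox)
            } else {
                // Anything else here usually means the converted model is missing
                // the decoding/NMS layers the sample's drawing code expects.
                print("Unexpected observation type:", type(of: observation))
            }
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}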
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and then converted it to Core ML using coremltools version 6.2.
The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU) that demonstrates the outputs of the PyTorch model (on the left) and the CoreML model (on the right).
Could you please confirm if this drop in accuracy is expected and suggest any possible solutions to address this issue? Please note that all preprocessing and post-processing techniques remain consistent between the models.
P.S. While converting, I also got the following warning:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
P.P.S. When we initialize the CoreML model on iOS 17.0, we get this error:
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
Hi all, I couldn't get random.PRNGKey to generate a random seed. I'm wondering whether anyone has run into a similar issue and figured it out. Here is my current config: jax-metal==0.0.3, jaxlib==0.4.10, jax==0.4.11.
I am using an Apple M1 Pro.
Does the new Image Playground API allow programmatically generating images? Can the app generate and use them without the API's UI or would that require using another generative image model?
I've been going through the documentation. I can't seem to find the docs that cover all the new AI features.
I have a Shortcuts action, exposed via an App Intent, that I want only active subscribers to be able to use.
I have a shared class that handles all the subscription-related things. But for some reason my code only works if the app is still active in the background. Once the app is quit and the user performs the Shortcut, the not-subscribed error is thrown, even though the user is subscribed.
How can I ensure that my subscription check is done correctly if the app isn't open in the background?
My Code
App Intent excerpt:
@MainActor
func perform() async throws -> some IntentResult & ReturnsValue<MeterIntentEntity> {
    // Validate that the user is subscribed.
    // Cancels action with error message if not subscribed.
    if SubscriptionManager.shared.userIsSubscribed == false {
        throw IntentError.notSubscribed
    }

    // More Code …

    // Finish and pass created value as result.
    return .result(value: something)
}
Subscription Manager excerpt:
class SubscriptionManager: ObservableObject {
    // A singleton for our entire app to use
    static let shared = SubscriptionManager()

    let productIds = ["my_sub1", "my_sub2"]

    @Published private(set) var availableSubscriptions: [Product]
    @Published private(set) var purchasedSubscriptions: [Product] = []

    public var userIsSubscribed: Bool {
        return !self.purchasedSubscriptions.isEmpty
    }

    init() {
        // Initialize empty products, and then do a product request asynchronously to fill them in.
        availableSubscriptions = []
        Task {
            await updatePurchasedProducts()
        }
    }

    @MainActor
    func updatePurchasedProducts() async {
        for await result in Transaction.currentEntitlements {
            do {
                let transaction = try checkVerified(result)
                if let subscription = availableSubscriptions.first(where: { $0.id == transaction.productID }) {
                    purchasedSubscriptions.append(subscription)
                }
            } catch {
                Logger.subscription.error("Error loading user's purchased products.")
            }
        }
    }
}
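One thing that might be worth trying (just a sketch, reusing the names from the excerpts above): when the Shortcut runs without the app already alive, perform() executes in a freshly launched process, so the Task started in SubscriptionManager's init() may not have finished before userIsSubscribed is read. Awaiting the entitlement refresh inside the intent itself avoids racing that initialization:

@MainActor
func perform() async throws -> some IntentResult & ReturnsValue<MeterIntentEntity> {
    // Refresh entitlements now rather than relying on the value cached when the
    // singleton was created; in a cold launch that work may not have finished yet.
    await SubscriptionManager.shared.updatePurchasedProducts()

    guard SubscriptionManager.shared.userIsSubscribed else {
        throw IntentError.notSubscribed
    }

    // More Code …
    return .result(value: something)
}

Separately, updatePurchasedProducts() only appends a subscription when it finds a match in availableSubscriptions, which starts out empty in the excerpt; if the product request hasn't completed in the new process, it may be safer to match transaction.productID against productIds directly.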
From https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/:
Powered by Apple Intelligence, Siri becomes more deeply integrated into the system experience. With richer language-understanding capabilities, Siri is more natural, more contextually relevant, and more personal, with the ability to simplify and accelerate everyday tasks.
From https://developer.apple.com/apple-intelligence/:
Siri is more natural, more personal, and more deeply integrated into the system. Apple Intelligence provides Siri with enhanced action capabilities, and developers can take advantage of pre-defined and pre-trained App Intents across a range of domains to not only give Siri the ability to take actions in your app, but to make your app’s actions more discoverable in places like Spotlight, the Shortcuts app, Control Center, and more. SiriKit adopters will benefit from Siri’s enhanced conversational capabilities with no additional work. And with App Entities, Siri can understand content from your app and provide users with information from your app from anywhere in the system.
Based on this, as well as the video at https://developer.apple.com/videos/play/wwdc2024/10133/ , my understanding is that in order for Siri to be able to execute tasks in applications, those applications must implement the Siri Intents API.
Can someone at Apple please clarify: will it be possible for Siri or some other aspect of Apple Intelligence / Core ML / Create ML to take actions in applications which do not support these APIs (e.g. web apps, Citrix apps, legacy apps)?
Thank you!
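For context on the question above, here is a minimal sketch of what "supporting these APIs" looks like on the app side today (the type and parameter names are illustrative placeholders, not from any Apple sample): an App Intent the app itself declares, which Siri, Shortcuts, and Spotlight can then invoke. Apps that declare nothing like this expose no actions through that route.

import AppIntents

// Hedged illustration: a minimal App Intent an app would have to declare for
// Siri/Shortcuts to invoke one of its actions. All names here are placeholders.
struct OpenReportIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Report"

    @Parameter(title: "Report Name")
    var name: String

    func perform() async throws -> some IntentResult {
        // App-specific code that opens the named report would go here.
        return .result()
    }
}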
I am developing an iOS app that supports INPlayMediaIntent.
We are trying to increase the recognition rate of content names, which are song titles, using AppIntentVocabulary.
As a sample, some extracts are shown below.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>IntentPhrases</key>
<array>
<dict>
<key>IntentName</key>
<string>INPlayMediaIntent</string>
<key>IntentExamples</key>
<array>
<string>Mezamashi Appで湖畔の朝を再生</string>
<string>湖畔の朝をMezamashi Appで再生して</string>
</array>
</dict>
</array>
<key>ParameterVocabularies</key>
<array>
<dict>
<key>ParameterNames</key>
<array>
<string>INPlayMediaIntent.playlistTitle</string>
</array>
<key>ParameterVocabulary</key>
<array>
<dict>
<key>VocabularyItemIdentifier</key>
<string>ID1</string>
<key>VocabularyItemSynonyms</key>
<array>
<dict>
<key>VocabularyItemPronunciation</key>
<string>aogamagaeru</string>
<key>VocabularyItemPhrase</key>
<string>青ガマガエル</string>
</dict>
</array>
</dict>
<dict>
<key>VocabularyItemIdentifier</key>
<string>ID2</string>
<key>VocabularyItemSynonyms</key>
<array>
<dict>
<key>VocabularyItemPronunciation</key>
<string>kohon no asa</string>
<key>VocabularyItemPhrase</key>
<string>湖畔の朝</string>
</dict>
</array>
</dict>
<dict>
<key>VocabularyItemIdentifier</key>
<string>ID3</string>
<key>VocabularyItemSynonyms</key>
<array>
<dict>
<key>VocabularyItemPronunciation</key>
<string>kumageratachi no uta</string>
<key>VocabularyItemPhrase</key>
<string>クマゲラたちの歌</string>
</dict>
</array>
</dict>
</array>
</dict>
</array>
</dict>
</plist>
When running on the iOS 17.5 simulator in Xcode 15.4, the results are as follows.
mediaName = VocabularyItemIdentifier
mediaIdentifier = nil
<INMediaSearch: 0x6000026212c0> {
    reference = 0;
    mediaType = 0;
    sortOrder = 0;
    albumName = <null>;
    mediaName = ID1;
    genreNames = (
    );
    artistName = <null>;
    moodNames = (
    );
    releaseDate = <null>;
    mediaIdentifier = <null>;
}
However, when running on an iOS 17.5 device, the following applies.
mediaName = VocabularyItemPhrase
mediaIdentifier = VocabularyItemIdentifier
<INMediaSearch: 0x301efd9e0> {
    reference = 0;
    mediaType = 5;
    sortOrder = 0;
    albumName = <null>;
    mediaName = 青ガマガエル;
    genreNames = (
    );
    artistName = <null>;
    moodNames = (
    );
    releaseDate = <null>;
    mediaIdentifier = ID1;
}
The results are not stable; for example, sometimes everything else returns null.
I have tried everything, but it is just taking a long time.
Does anyone have any advice on this?
I am trying to make a VoIP CarPlay app using Siri.
let assistant = CPAssistantCellConfiguration(position: .top, visibility: .always, assistantAction: .startCall)
let siriTemplate = CPListTemplate(title: "Siri", sections: [sectionItems, loadingSection], assistantCellConfiguration: assistant)
siriTemplate.tabSystemItem = .recents
siriTemplate.showsTabBadge = false
Using the above code gives me the error
"Error: Intent of type INStartCallIntent is not supported for this app category"
on app launch.
I have INStartCallIntent in my app's Info.plist, I have all the entitlements, and I have "business" as the app category.
I can find zero help online about this. What does this error really mean, and how can I fix it?
https://developer.apple.com/videos/play/wwdc2024/10159/
This video references demo_utils, but I did not see any source code attached to the video. Does anyone have access to it?
iOS 18 adds a specific macro for exposing your search app intent, app entities, etc., to Siri, but how are you meant to adopt it on your existing objects without removing them entirely for users on earlier iOS versions?
For example, I get the following error:
AssistantIntent(schema:) is only available in iOS 18 or newer. Add @available attribute to enclosing struct.
I don't want to do that, since I still want to support iOS 17 users with my existing shortcuts. Do I need to duplicate my entire shortcuts model to add the new macro?
Are there going to be any sessions on the Image Playground API for iOS?
"Explore machine learning on Apple platforms" mentions the writing features and points to related sessions, but it only mentions Image Playground without pointing to any sessions.
The What’s New in Create ML session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned these new models, capabilities, and tools for iOS 18. So far, all I can find is API documentation; I don't see any other WWDC24 session covering these new time-series forecasting Create ML features.
Is there more substance or documentation on how to use these with Create ML? Maybe I am looking in the wrong place, but I am fairly new to ML.
Is there any food truck / donut shop demo or sample code like in the video?
It would be of great interest to get ahead of the curve on this for business applications that could take advantage of it with inventory and ordering data.