I keep getting this error again and again, even after reinstalling.
Traceback (most recent call last):
File "", line 1, in
File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/init.py", line 439, in
_ll.load_library(_plugin_dir)
File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
Machine Learning
Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.
Posts under the Machine Learning tag (61 posts).
All errors in TranslationError return the same error code, making it difficult to differentiate between them. How can this issue be resolved?
Apple Intelligence is now available in macOS 15.1 and iOS 18.1 (beta). However, it is currently not supported on visionOS, even though visionOS runs on M2 silicon with 16 GB of unified memory. I want to enable Apple Intelligence for my visionOS app.
Dear Apple Team,
I have a suggestion to enhance the Apple Watch user experience.
A new feature could provide personalized
recommendations based on weather conditions and the user’s mood.
For example, during hot weather, it could suggest drinking something cold,
or if the user is feeling down, it could offer ways to boost their mood.
This kind of feature could make the Apple Watch not just a health and fitness
tracker but also a more functional personal assistant.
“Improve communication with Apple Watch”
Feature #1: Noise detection and location
suggestions.
Imagine having your Apple Watch detect
ambient noise levels and suggest a quieter location for your call.
Feature #2: Context-aware call response options.
If you can't answer a call,
your Apple Watch could offer pre-set responses to communicate your status and
reduce missed call anxiety.
For example, if you're in a busy restaurant, your Apple Watch could suggest
moving to a quieter spot nearby for a better conversation.
Or if you’re in a movie theater, your Apple Watch could send an automatic
“I’m at the movies” text to the caller.
“Improve user experience and app management”
Automated Sleep Notifications:
The ability for the Apple Watch to automatically turn off notifications or change the watch face
when the user is sleeping would provide a more seamless experience.
For instance, when the watch detects that the user is in sleep mode, it could enable Do Not
Disturb to silence calls and alerts.
Caller Notification:
In addition, it would be great if the Apple Watch
could inform callers that the user is currently sleeping.
This could help manage expectations for those attempting to reach the user at night.
App Management to Conserve Battery:
Implementing a feature that detects
battery-draining apps while the user is asleep could further extend battery life.
The watch and the iPhone could close or pause apps that are using significant
power while the user is not active.
I believe these features could provide valuable advancements in enhancing the
Apple Watch's usability for those who prioritize a restful night's sleep.
Thank you for considering my suggestions.
Best regards,
Mahmut Ötgen
Istanbul, Turkey
Hello,
My app works well on iOS 17 and previous iOS 18 betas, but it crashes on the latest iOS 18 beta 5 when calling the model's predictionFromFeatures.
The crash's call stack is:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unrecognized ANE execution priority MLANEExecutionPriority_Unspecified'
Last Exception Backtrace:
0 CoreFoundation 0x000000019bd6408c __exceptionPreprocess + 164
1 libobjc.A.dylib 0x000000019906b2e4 objc_exception_throw + 88
2 CoreFoundation 0x000000019be5f648 -[NSException initWithCoder:]
3 CoreML 0x00000001b7507340 -[MLE5ExecutionStream _setANEExecutionPriorityWithOptions:] + 248
4 CoreML 0x00000001b7508374 -[MLE5ExecutionStream _prepareForInputFeatures:options:error:] + 248
5 CoreML 0x00000001b7507ddc -[MLE5ExecutionStream executeForInputFeatures:options:error:] + 68
6 CoreML 0x00000001b74ce5c4 -[MLE5Engine _predictionFromFeatures:stream:options:error:] + 80
7 CoreML 0x00000001b74ce7fc -[MLE5Engine _predictionFromFeatures:options:error:] + 208
8 CoreML 0x00000001b74cf110 -[MLE5Engine _predictionFromFeatures:usingState:options:error:] + 400
9 CoreML 0x00000001b74cf270 -[MLE5Engine predictionFromFeatures:options:error:] + 96
10 CoreML 0x00000001b74ab264 -[MLDelegateModel _predictionFromFeatures:usingState:options:error:] + 684
11 CoreML 0x00000001b70991bc -[MLDelegateModel predictionFromFeatures:options:error:] + 124
My model is an .mlpackage file. The source code is below:
//model
MLModel *_model;
......
// model init
MLModelConfiguration* config = [[MLModelConfiguration alloc]init];
config.computeUnits = MLComputeUnitsCPUAndNeuralEngine;
_model = [MLModel modelWithContentsOfURL:compileUrl configuration:config error:&error];
.....
// model prediction
MLPredictionOptions *option = [[MLPredictionOptions alloc]init];
id<MLFeatureProvider> outFeatures = [_model predictionFromFeatures:_modelInput options:option error:&error];
Is there anything wrong? Any advice would be appreciated.
Dear Apple Development Team,
I’m writing to express my concerns and request a feature enhancement regarding the ChatGPT app for iOS. Currently, the app's audio functionality does not work when the app is in the background. This limitation significantly affects the user experience, particularly for those of us who rely on the app for ongoing, interactive voice conversations.
Given that many apps, particularly media and streaming services, are allowed to continue audio playback when minimized, it’s frustrating that the ChatGPT app cannot do the same. This restriction interrupts the flow of conversation, forcing users to stay within the app to maintain an audio connection.
For users who multitask on their iPhones, being able to switch between apps while continuing to listen or interact with ChatGPT is essential. The ability to reference notes, browse the web, or even respond to messages while maintaining an ongoing conversation with ChatGPT would greatly enhance the app’s usability and align it with other background-capable apps.
I understand that Apple prioritizes resource management and device performance, but I believe there’s a strong case for allowing apps like ChatGPT to operate with background audio. Given its growing importance as a tool for productivity, learning, and communication, adding this capability would provide significant value to users.
I hope you will consider this feedback for future updates to iOS, or provide guidance on any existing APIs that could be leveraged to enable such functionality.
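For reference on what such an API path could look like: background audio on iOS is normally enabled by adding the audio entry to UIBackgroundModes in the app's Info.plist and configuring an AVAudioSession with a playback-capable category. The following is a minimal hedged sketch of that setup (a generic example, not ChatGPT's actual implementation):
import AVFoundation
// Requires the "audio" background mode in Info.plist (UIBackgroundModes).
// Hypothetical helper for illustration only.
func configureBackgroundAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .playAndRecord with .voiceChat suits two-way voice conversations;
    // .playback is enough for listen-only audio.
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
    try session.setActive(true)
}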
Thank you for your time and consideration.
Best regards,
luke
yes I used gpt to write this.
Help!
I installed Krita from the App Store and it works. Then I installed the ai_diffusion plugin and got:
xcrun: error: cannot be used within an App Sandbox.
Can anybody help me?
Thanks.
AttributeError
Python 3.10.7: /Applications/krita.app/Contents/MacOS/krita
Sat Aug 3 18:15:59 2024
A problem occurred in a Python script. Here is the sequence of
function calls leading up to the error, in the order they occurred.
/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py in update_settings(self=<ai_diffusion.ui.region.ActiveRegionWidget object>, key='prompt_translation', value=None)
345 self._layout_language_button()
346 elif key == "prompt_translation":
347 self._update_language()
348
349 async def _replace_with_translation(self, client: Client):
self = <ai_diffusion.ui.region.ActiveRegionWidget object>
self._update_language = >
/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py in _update_language(self=<ai_diffusion.ui.region.ActiveRegionWidget object>)
381 enabled = self._root._model.translation_enabled
382 lang = settings.prompt_translation if enabled else "en"
383 self._language_button.setText(lang.upper())
384 if enabled:
385 text = self._lang_help_enabled
self = <ai_diffusion.ui.region.ActiveRegionWidget object>
self._language_button = <PyQt5.QtWidgets.QToolButton object>
self._language_button.setText =
lang = None
lang.upper undefined
AttributeError: 'NoneType' object has no attribute 'upper'
args = ("'NoneType' object has no attribute 'upper'",)
name = 'upper'
obj = None
The above is a description of an error in a Python program. Here is
the original traceback:
Traceback (most recent call last):
File "/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py", line 347, in update_settings
self._update_language()
File "/Users/alejandropereira/Library/Containers/org.kde.krita/Data/Library/Application Support/krita/pykrita/ai_diffusion/ui/region.py", line 383, in _update_language
self._language_button.setText(lang.upper())
AttributeError: 'NoneType' object has no attribute 'upper'
I have created and trained a Hand Pose classifier model and am trying to test it. I noticed in the WWDC2021 session "Classify hand poses and actions with Create ML" that the preview window has a prediction result that gives you the prediction based on the live preview or imported images. Mine does not have that. When I try to import pictures or do the live test, there is no result; it's just the wireframe view, and under it there is nothing.
How do I fix this, please?
Thanks.
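As a workaround while the preview pane isn't showing predictions, the classifier can be exercised programmatically: extract the hand pose keypoints with Vision and feed them to the Create ML model. A minimal sketch, assuming the generated model class is named HandPoseClassifier and its input is the keypoints multi-array (the class and input names will differ in your project):
import Vision
import CoreML
import CoreGraphics
// Hypothetical test harness; "HandPoseClassifier" and the "poses" input name
// are placeholders for your generated model class.
func classifyHandPose(in image: CGImage) throws -> String? {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    // The Create ML hand pose classifier consumes keypoints as an MLMultiArray.
    let keypoints = try observation.keypointsMultiArray()
    let model = try HandPoseClassifier(configuration: MLModelConfiguration())
    let output = try model.prediction(poses: keypoints)
    return output.label
}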
Hello,
I'm currently working on TinyML (ML on the edge) using the Google Colab platform. Having exhausted my free compute units, I'm being prompted to pay. I've been considering leveraging the GPU capabilities of my M1 iPad and Intel-based Mac. Both devices have Thunderbolt ports capable of connections up to 40 Gb/s. Since I'm primarily using a classification model, extensive GPU usage isn't necessary.
I’m looking for assistance or guidance on utilizing the iPad’s processor as an eGPU on my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
I have made a text classifier model, but I want to train it on device too.
When text is classified incorrectly, the user can update the model on device.
Code:
//
// SpamClassifierHelper.swift
// LearningML
//
// Created by Himan Dhawan on 7/1/24.
//
import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage
enum TextClassifier : String {
case spam = "spam"
case notASpam = "ham"
}
class SpamClassifierModel {
// MARK: - Private Type Properties
/// The updated Spam Classifier model.
private static var updatedSpamClassifier: SpamClassifier?
/// The default Spam Classifier model.
private static var defaultSpamClassifier: SpamClassifier {
do {
return try SpamClassifier(configuration: .init())
} catch {
fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
}
}
// The Spam Classifier model currently in use.
static var liveModel: SpamClassifier {
updatedSpamClassifier ?? defaultSpamClassifier
}
/// The location of the app's Application Support directory for the user.
private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
in: .userDomainMask).first!
class var urlOfModelInThisBundle : URL {
let bundle = Bundle(for: self)
return bundle.url(forResource: "SpamClassifier", withExtension:"mlmodelc")!
}
/// The default Spam Classifier model's file URL.
private static let defaultModelURL = urlOfModelInThisBundle
/// The permanent location of the updated Spam Classifier model.
private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")
/// The temporary location of the updated Spam Classifier model.
private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")
// MARK: - Public Type Methods
static func predictLabelFor(_ value: String) throws -> (prediction: String?, confidence: String) {
let spam = try NLModel(mlModel: liveModel.model)
let result = spam.predictedLabel(for: value)
let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
return (result,String(format: "%.2f", confidence * 100))
}
static func updateModel(newEntryText : String, spam : TextClassifier) throws {
guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
fatalError("Could not find model in bundle")
}
// Create a feature provider for the new text sample ("text" carries the message, "label" carries the class).
let featureProvider = try MLDictionaryFeatureProvider(dictionary: ["text": MLFeatureValue(string: newEntryText), "label": MLFeatureValue(string: spam.rawValue)])
let batchProvider = MLArrayBatchProvider(array: [featureProvider])
let updateTask = try MLUpdateTask(forModelAt: modelURL, trainingData: batchProvider, configuration: nil, completionHandler: { context in
let updatedModel = context.model
let fileManager = FileManager.default
do {
// Create a directory for the updated model.
try fileManager.createDirectory(at: tempUpdatedModelURL,
withIntermediateDirectories: true,
attributes: nil)
// Save the updated model to temporary filename.
try updatedModel.write(to: tempUpdatedModelURL)
// Replace any previously updated model with this one.
_ = try fileManager.replaceItemAt(updatedModelURL,
withItemAt: tempUpdatedModelURL)
loadUpdatedModel()
print("Updated model saved to:\n\t\(updatedModelURL)")
} catch let error {
print("Could not save updated model to the file system: \(error)")
return
}
})
updateTask.resume()
}
/// Loads the updated Spam Classifier, if available.
/// - Tag: LoadUpdatedModel
private static func loadUpdatedModel() {
guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
// The updated model is not present at its designated path.
return
}
// Create an instance of the updated model.
guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
return
}
// Use this updated model to make predictions in the future.
updatedSpamClassifier = model
}
}
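For completeness, a minimal usage sketch of the class above, calling the prediction and update entry points it defines (the sample message is made up):
do {
    // Classify a message; predictLabelFor returns the label and a confidence percentage.
    let (prediction, confidence) = try SpamClassifierModel.predictLabelFor("You won a free prize, click here!")
    print("Predicted: \(prediction ?? "unknown") (\(confidence)%)")
    // If the user marks the prediction as wrong, feed the correction back on device.
    try SpamClassifierModel.updateModel(newEntryText: "You won a free prize, click here!", spam: .spam)
} catch {
    print("Classification or update failed: \(error)")
}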
I'm looking for a solution to take a picture or point the camera at a piece of clothing and match that image with an image the user has stored in my app.
I'm storing the data in a Core Data database as a Binary Data object. Since the user also takes the pictures they store in the database I think I cannot use pre-trained Core ML models.
I would like the matching to be done on device if possible, instead of going to an external service. An external service would probably describe the item based on what the AI sees, but then I couldn't match the item against the images stored in the app.
Does anyone know if this is possible with frameworks such as Vision or VisionKit?
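Vision's image feature prints are one on-device way to do this kind of matching: compute a feature print for the camera image and for each stored image, then rank the stored images by distance. A minimal sketch, assuming the Core Data binary blobs can be decoded into CGImages:
import Vision
import CoreGraphics
// Compute an on-device feature print for an image.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}
// Smaller distance means more similar images.
func similarityDistance(_ a: CGImage, _ b: CGImage) throws -> Float? {
    guard let printA = try featurePrint(for: a),
          let printB = try featurePrint(for: b) else { return nil }
    var distance = Float(0)
    try printA.computeDistance(&distance, to: printB)
    return distance
}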
Hi all,
I'm trying to build scam detection for Message Filter powered by Core ML. I find the ML predictions reliable, and a solution for text frauds and scams is sorely needed.
I was able to create a trained MLModel and deploy it in the app. It works in my container app, but when I try to use and initialize the model in the Message Filter extension, I get an error:
initialization of text classifier model with model data failed
I have tried putting the model in the container app, in the extension, and even in a shared framework for both, but to no avail. Every time I invoke the code to initialize my model from the extension, I am met with the same error.
Here's my code for initializing the model
do {
let model = try Ace_v24_6(configuration: .init())
let output = try model.prediction(text: text)
guard !output.label.isEmpty else {
return nil
}
return MessagePrediction(rawValue: output.label)
} catch {
return nil
}
My question is: Is it impossible to use CoreML in MessageFilters?
Cheers
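For reference, this is roughly how the classifier from the post would be wired into the filter extension's query handler via the IdentityLookup framework. It is a hedged sketch: Ace_v24_6 and MessagePrediction are the names from the post, the "spam" label value is an assumption, and it does not by itself explain the initialization failure:
import IdentityLookup
import CoreML
final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {
    // Load the model lazily inside the extension process (assumption: lazy loading
    // keeps the extension's launch footprint small).
    private lazy var classifier = try? Ace_v24_6(configuration: MLModelConfiguration())
    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        let response = ILMessageFilterQueryResponse()
        response.action = .none
        if let text = queryRequest.messageBody,
           let output = try? classifier?.prediction(text: text),
           output.label == "spam" {  // assumed label value
            response.action = .junk
        }
        completion(response)
    }
}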
Hey, I have been trying out the Xcode 16 beta's predictive code completion for the last couple of days. I went to disable it through the settings, in the Components section; I did this to go along with a tutorial and didn't want it helping me out.
But now that I want it back, I can't find a way to enable it again.
I tried reinstalling both the beta and the regular Xcode, but it didn't show up again.
Does anyone know how to get this back? Thanks!
I created a model that classifies certain objects using yolov8. I noticed that the model is not working properly in my application. While the model works fine in Xcode preview, in the application it either returns the same result with 99% accuracy for each classification or does not provide any result.
In the Xcode preview it looks correct (Predictions screenshot omitted). Here is the capture code:
extension CameraVC : AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
guard let data = photo.fileDataRepresentation() else {
return
}
guard let image = UIImage(data: data) else {
return
}
guard let cgImage = image.cgImage else {
fatalError("Unable to create CIImage")
}
let handler = VNImageRequestHandler(cgImage: cgImage,orientation: CGImagePropertyOrientation(image.imageOrientation))
DispatchQueue.global(qos: .userInitiated).async {
do {
try handler.perform([self.viewModel.detectionRequest])
} catch {
fatalError("Failed to perform detection: \(error)")
}
}
lazy var detectionRequest: VNCoreMLRequest = {
do {
let model = try VNCoreMLModel(for: bestv720().model)
let request = VNCoreMLRequest(model: model) { [weak self] request, error in
self?.processDetections(for: request, error: error)
}
request.imageCropAndScaleOption = .centerCrop
return request
} catch {
fatalError("Failed to load Vision ML model: \(error)")
}
}()
This is where I print the recognized objects:
func processDetections(for request: VNRequest, error: Error?) {
DispatchQueue.main.async {
guard let results = request.results as? [VNRecognizedObjectObservation] else {
return
}
var label = ""
var all_results = []
var all_confidence = []
var true_results = []
var true_confidence = []
for result in results {
for i in 0...results.count{
all_results.append(result.labels[i].identifier)
all_confidence.append(result.labels[i].confidence)
for confidence in all_confidence {
if confidence as! Float > 0.7 {
true_results.append(result.labels[i].identifier)
true_confidence.append(confidence)
}
}
}
label = result.labels[0].identifier
}
print("True Results " , true_results)
print("True Confidence ", true_confidence)
self.output?.updateView(label:label)
}
}
I converted the model like this:
from ultralytics import YOLO
model = YOLO(model_path)
model.export(format='coreml', nms=True, imgsz=[720,1280])
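One thing worth checking, given the export uses a rectangular imgsz of [720, 1280]: the Vision request above uses .centerCrop, which crops and rescales camera frames differently from how Ultralytics letterboxes images during training. Assuming that preprocessing mismatch is the culprit (a guess, not a confirmed fix), trying a different crop/scale option is a cheap experiment:
import Vision
// Sketch: rebuild the detection request with a crop/scale option closer to the
// export-time preprocessing. bestv720 is the generated model class from the post;
// .scaleFill (stretch the full frame into the model input) is the option to try.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let model = try VNCoreMLModel(for: bestv720().model)
    let request = VNCoreMLRequest(model: model) { request, error in
        // Handle results as in processDetections(for:error:).
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}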
App Intents are generated through the "Extract app intents metadata" Swift compile step, which makes them difficult to remove. Is there any API that can delete an App Intent at runtime instead of waiting for the next version release?
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model that predicts 17 joints in COCO format, then converted it to Core ML using coremltools version 6.2.
The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU), which demonstrates the outputs of the PyTorch model (left) and the Core ML model (right).
Could you please confirm if this drop in accuracy is expected and suggest any possible solutions to address this issue? Please note that all preprocessing and post-processing techniques remain consistent between the models.
P.S. While converting, I also got the following warning:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
P.P.S. When we initialize the Core ML model on iOS 17.0, we get these errors:
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
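On the iOS 17 validation failures: messages like "Invalid Pool kernel width ... must be [1-8] or 20" usually come from one particular compute backend rejecting a pooling layer. As a hedged diagnostic (not a fix), loading the converted model with restricted compute units can show whether the failures are tied to the Neural Engine or GPU path:
import Foundation
import CoreML
// Diagnostic sketch: load the compiled model on CPU only to check whether the
// pool-kernel validation failures are specific to the ANE/GPU backends.
func loadModelCPUOnly(at compiledModelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly  // also try .cpuAndGPU to narrow it down
    return try MLModel(contentsOf: compiledModelURL, configuration: config)
}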
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
I made a model using PyTorch and then converted it into an .mlmodel file. Next, I downloaded the sample project (https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture), which worked. But when I swapped in the model I made, the camera worked but no predictions were shown. Please help!
Hi, this is the third time I'm trying to post this on the forum; the Apple moderators keep ignoring it.
I'm a deep learning expert with a specialization in image processing.
I want to know why I have hundreds of AI models on my Mac that are indexing everything on my computer while it is idle, using programs like neuralhash that I can't find any information about.
I can understand if they are being used to enhance the user experience on Spotlight, Siri, Photos, and other applications, but I couldn't find the necessary information on the web.
Usually, (spyware) software like this uses them to classify files in an X/Y coordinate system. This feels like a more advanced version of Stuxnet.
find / -type f -name "*.weights" > ai_models.txt
find / -type f -name "*labels*.txt" > ai_model_labels.txt
Some of the classes from the files:
file_name: SCL_v0.3.1_9c7zcipfrc_558001-labels-v3.txt
document_boarding_pass
document_check_or_checkbook
document_currency_or_bill
document_driving_license
document_office_badge
document_passport
document_receipt
document_social_security_number
hier_curation
hier_document
hier_negative
curation_meme
file_name: SceneNet5_detection_labels-v8d.txt
CVML_UNKNOWN_999999
aircraft
automobile
bicycle
bird
bottle
bus
canine
consumer_electronics
feline
fruit
furniture
headgear
kite
fish
computer_monitor
motorcycle
musical_instrument
document
people
food
sign
watersport
train
ungulates
watercraft
flower
appliance
sports_equipment
tool
I'm trying to create an app that uses artificial intelligence technology.
One of the models provided on this website (https://developer.apple.com/machine-learning/models/) will be used.
Are there any copyright or legal issues if I create an app using the model provided by this website and distribute it to the App Store?