Hi there,
I am trying to create a Message Filter app that uses a trained text classification model to predict scam texts (scams are common in my country and are constantly evolving).
However, when I try to use the MLModel in the MessageFilterExtension class, I'm getting
initialization of text classifier model with model data failed
Here's how I initialize my MLModel, which was created using Create ML.
do {
    let model = try MyModel(configuration: .init())
    let output = try model.prediction(text: text)
    guard !output.label.isEmpty else {
        return nil
    }
    return MessagePrediction(rawValue: output.label)
} catch {
    return nil
}
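For context, here is a minimal sketch of how that snippet is meant to be called from the extension's query handler, assuming the standard ILMessageFilterQueryHandling setup (the helper and the scam-label check are placeholders):

import CoreML
import IdentityLookup

final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {

    // Wraps the do/catch from the snippet above; "MyModel" is the generated class
    // and the "scam" label value is a placeholder for whatever the classifier emits.
    private func isScam(_ text: String) -> Bool {
        do {
            let model = try MyModel(configuration: .init())
            let output = try model.prediction(text: text)
            return output.label == "scam"
        } catch {
            return false
        }
    }

    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        let response = ILMessageFilterQueryResponse()
        response.action = .none

        // Classify the incoming message body and mark it as junk if it looks like a scam.
        if let body = queryRequest.messageBody, isScam(body) {
            response.action = .junk
        }
        completion(response)
    }
}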
Is it impossible to use CoreML in Message Filter extensions?
Thank you
I noticed the code snippets are missing from the WWDC 2024 session 10210 video, "Bring your app’s core features to users with App Intents" (https://developer.apple.com/videos/play/wwdc2024/10210/). It would be useful if those could be added. I also noticed the transcript is missing from the web version, although it is in the Developer app, which is odd.
Hi all,
I'm trying to build a scam detection feature in Message Filter powered by Core ML. I find the model's predictions reliable, and a solution for text frauds and scams is sorely needed.
I was able to create a trained MLModel and deploy it in the app. It works in my container app, but when I try to use and initialise the model in the Message Filter extension, I get an error:
initialization of text classifier model with model data failed
I have tried putting the model in the container app, in the extension, and even in a shared framework for the container and extension, but to no avail. Every time I invoke the code to initialize my model from the extension, I am met with the same error.
Here's my code for initializing the model:
do {
    let model = try Ace_v24_6(configuration: .init())
    let output = try model.prediction(text: text)
    guard !output.label.isEmpty else {
        return nil
    }
    return MessagePrediction(rawValue: output.label)
} catch {
    return nil
}
My question is: is it impossible to use Core ML in Message Filter extensions?
Cheers
I have a couple of models that I want to migrate to .mlpackage, but I cannot find the resources for this session:
https://developer.apple.com/videos/play/wwdc2024/10159/
At 21:10, the video talks about modifications and optimizations, but I cannot even see the dependencies of the demo in the video.
Thanks
I'm trying to use a custom system image for my App Shortcut:
AppShortcut(
    intent: AppOpenIntent(),
    phrases: ["Open \(.applicationName)"],
    shortTitle: "Open App",
    systemImageName: "custom.image"
)
I've added custom.image to Images.xcassets for my project, but the shortcuts icon is always blank in the Shortcuts app. Is it possible to use custom system images for app shortcuts?
The session "Deploy machine learning and AI models on-device with Core ML" says the performance report shows which unit each op runs on and why an op cannot run on the Neural Engine.
I tested my model and the report shows a gray checkmark at the Neural Engine, indicating it can run on the Neural Engine. However, it's not executing on the Neural Engine but on the CPU. Why is this happening?
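For reference, here is a minimal sketch of explicitly requesting the Neural Engine at load time, in case the requested compute units are constraining where Core ML schedules the model ("MyModel" is a placeholder for the generated class):

import CoreML

// Load the model while explicitly asking for the CPU and Neural Engine;
// ops that the Neural Engine cannot run still fall back to the CPU.
func loadModelPreferringNeuralEngine() throws -> MyModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // or .all
    return try MyModel(configuration: config)
}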
I'm looking for a solution to take a picture or point the camera at a piece of clothing and match that image with an image the user has stored in my app.
I'm storing the data in a Core Data database as a Binary Data object. Since the user also takes the pictures they store in the database, I think I cannot use pre-trained Core ML models.
I would like the matching to be done on device, if possible, instead of going to an external service. An external service would probably just describe the item based on what the AI sees, and then I could not match the item with the stored images in the app.
Does anyone know if this is possible with frameworks such as Vision or VisionKit?
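For what it's worth, here is a minimal sketch of the kind of on-device matching I have in mind using Vision feature prints, which do not require training on the user's photos (a smaller distance means more similar images):

import CoreGraphics
import Vision

// Compute a feature print for an image; prints can later be compared by distance.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results?.first
}

// Smaller distance means the two images are more similar.
func distance(between a: VNFeaturePrintObservation,
              and b: VNFeaturePrintObservation) throws -> Float {
    var value: Float = 0
    try a.computeDistance(&value, to: b)
    return value
}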
Hi everyone,
I'm curious about the capabilities of ARKit's object tracking feature. Specifically, I'd like to know:
Is there a size limit for the objects that can be tracked?
Can ARKit differentiate between two objects with the same shape but different models (e.g., different colors)?
Are objects with single colors and generic shapes (like squares or circles) effectively trackable?
Any insights or examples from your experiences would be greatly appreciated!
Thanks in advance.
Hi,
I want to create a real-time sports analytics app that takes camera input and records basketball stats. I want to use pose estimation and object classification to record things such as dribbles, when the ball leaves someone's hands, etc.
Is it possible to have a model in Core ML that performs pose estimation on people but also does simple object detection on other classes (e.g., ball, hoop)?
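For reference, here is a minimal sketch of the combination I have in mind, assuming Vision's built-in body-pose request runs alongside a custom Create ML object detector via VNCoreMLRequest ("BallHoopDetector" is a placeholder):

import CoreML
import Vision

// Run body-pose estimation and a custom object detector on the same frame.
func analyze(_ pixelBuffer: CVPixelBuffer) throws {
    let poseRequest = VNDetectHumanBodyPoseRequest()

    // "BallHoopDetector" stands in for an object-detection model trained with Create ML.
    let detector = try BallHoopDetector(configuration: MLModelConfiguration())
    let visionModel = try VNCoreMLModel(for: detector.model)
    let objectRequest = VNCoreMLRequest(model: visionModel)

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([poseRequest, objectRequest])

    let poses = poseRequest.results ?? []
    let detections = objectRequest.results as? [VNRecognizedObjectObservation] ?? []

    // Combine the two, e.g. check whether the ball's bounding box is near the wrist joints.
    _ = (poses, detections)
}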
Thanks
I have a DataFrame with one column of features of type MLShapedArray and one column of annotations of type Int.
How can I convert them to the correct input type for the TimeSeriesClassifier?
I want to try a Core ML model that accepts image input at any resolution.
So I wrote the model following the Core ML Tools "Set the Range for Each Dimension" sample code, modified as below:
# Trace the model with random input.
example_input = torch.rand(1, 3, 50, 50)
traced_model = torch.jit.trace(model.eval(), example_input)

# Set the input_shape to use RangeDim for each dimension.
input_shape = ct.Shape(shape=(1,
                              3,
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45),
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45)))

scale = 1 / (0.226 * 255.0)
bias = [-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225]

# Convert the model with input_shape.
mlmodel = ct.convert(traced_model,
                     inputs=[ct.ImageType(shape=input_shape, name="input", scale=scale, bias=bias)],
                     outputs=[ct.TensorType(name="output")],
                     convert_to="mlprogram",
                     )

# Save the Core ML model
mlmodel.save("image_resize_model.mlpackage")
It converts OK, but when I run a prediction with an image I get the error below:
You will not be able to run predict() on this Core ML model. Underlying exception message was: {
NSLocalizedDescription = "Failed to build the model execution plan using a model architecture file '/private/var/folders/8z/vtz02xrj781dxvz1v750skz40000gp/T/model-small.mlmodelc/model.mil' with error code: -7.";
}
Where did I go wrong?
I just downgraded from iOS 18 to iOS 17.5.1 and Siri is not working anymore; I cannot even set it up.
I have already reset the phone and it is still not working. :(
I was trying the latest coremltools 8.0b1 beta on macOS 15 Beta, intending to try the new stateful models API in Core ML.
But the conversion would always fail with the error:
/AppleInternal/Library/BuildRoots/<snip>/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:162: failed assertion `Error: the minimum deployment target for macOS is 14.0.0'
Here's a minimal repro, which works fine with both the stable version of coremltools (7.2) and the beta version (8.0b1) on macOS Sonoma 14.5, but fails with both versions of coremltools on macOS 15.0 Beta and Xcode 16.0 Beta. This means the problem most likely isn't in coremltools, but in the native compilation toolchain.
from collections import OrderedDict

import coremltools as ct
import numpy as np
import torch
import torch.nn as nn


class ResidualAttentionBlock(nn.Module):
    def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head)
        self.ln_1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            OrderedDict(
                [
                    ("c_fc", nn.Linear(d_model, d_model * 4)),
                    ("gelu", nn.GELU()),
                    ("c_proj", nn.Linear(d_model * 4, d_model)),
                ]
            )
        )
        self.ln_2 = nn.LayerNorm(d_model)
        self.attn_mask = attn_mask

    def attention(self, x: torch.Tensor):
        self.attn_mask = (
            self.attn_mask.to(dtype=x.dtype, device=x.device)
            if self.attn_mask is not None
            else None
        )
        return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]

    def forward(self, x: torch.Tensor):
        x = x + self.attention(self.ln_1(x))
        x = x + self.mlp(self.ln_2(x))
        return x


class Transformer(nn.Module):
    def __init__(
        self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None
    ):
        super().__init__()
        self.width = width
        self.layers = layers
        self.resblocks = nn.Sequential(
            *[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]
        )

    def forward(self, x: torch.Tensor):
        return self.resblocks(x)


transformer = Transformer(width=512, layers=12, heads=8)
emb_tokens = torch.rand((1, 512))

ct_model = ct.convert(
    torch.jit.trace(transformer.eval(), emb_tokens),
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.macOS14,
    inputs=[ct.TensorType(name="embIn", shape=[1, 512])],
    outputs=[ct.TensorType(name="embOutput", dtype=np.float32)],
)
Adding the openAppWhenRun property to an AppIntent for a ControlWidgetButton causes the following error when the control is tapped in Control Center:
Unknown NSError The operation couldn’t be completed. (LNActionExecutorErrorDomain error 2018.)
Here’s the full ControlWidget and AppIntent code that causes the error:
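A minimal sketch of the kind of setup described, assuming a StaticControlConfiguration with a ControlWidgetButton whose intent opts into openAppWhenRun (all names are placeholders, not the original snippet):

import AppIntents
import SwiftUI
import WidgetKit

// Placeholder control: a single button in Control Center that runs an intent.
struct OpenAppControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.open-app") {
            ControlWidgetButton(action: OpenAppIntent()) {
                Label("Open App", systemImage: "arrow.up.forward.app")
            }
        }
    }
}

struct OpenAppIntent: AppIntent {
    static var title: LocalizedStringResource = "Open App"
    // Tapping the control should foreground the app after the intent runs.
    static var openAppWhenRun: Bool = true

    func perform() async throws -> some IntentResult {
        .result()
    }
}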
Should controls be able to open apps after the AppIntent runs, or is this a bug?
I have several CoreML models that I've set up to run in sequence where one of the outputs from each model is passed as one of the inputs to the next.
For the most part, there is very little overhead in between each sub-model "chunk".
However, a couple of the models (e.g., the first two above) spend a noticeable amount of time in "Prepare Neural Engine Request". From Instruments, it seems like this time is spent doing some sort of model loading.
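For reference, here is a minimal sketch of the chaining pattern, assuming generated model classes; each stage is loaded once and the instances are reused across calls (class and feature names are placeholders):

import CoreML

// Load each stage once and reuse the instances; one stage's output feeds the next.
// "StageAModel"/"StageBModel" and the "input"/"intermediate"/"output" feature names are placeholders.
final class ModelPipeline {
    private let stageA: StageAModel
    private let stageB: StageBModel

    init() throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all
        stageA = try StageAModel(configuration: config)
        stageB = try StageBModel(configuration: config)
    }

    func run(_ input: MLMultiArray) throws -> MLMultiArray {
        // The intermediate feature from stage A is passed straight into stage B.
        let a = try stageA.prediction(input: input)
        return try stageB.prediction(input: a.intermediate).output
    }
}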
Given that I'm calling these models in sequence and in a fixed order, is there some way to reduce or amortize this cost? Thanks!
I have made a text classifier model, but I want to train it on device too.
When a text is classified incorrectly, the user can update the model on device.
Code:
//
//  SpamClassifierHelper.swift
//  LearningML
//
//  Created by Himan Dhawan on 7/1/24.
//

import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage

enum TextClassifier: String {
    case spam = "spam"
    case notASpam = "ham"
}

class SpamClassifierModel {

    // MARK: - Private Type Properties

    /// The updated Spam Classifier model.
    private static var updatedSpamClassifier: SpamClassifier?

    /// The default Spam Classifier model.
    private static var defaultSpamClassifier: SpamClassifier {
        do {
            return try SpamClassifier(configuration: .init())
        } catch {
            fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
        }
    }

    // The Spam Classifier model currently in use.
    static var liveModel: SpamClassifier {
        updatedSpamClassifier ?? defaultSpamClassifier
    }

    /// The location of the app's Application Support directory for the user.
    private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
                                                                in: .userDomainMask).first!

    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "SpamClassifier", withExtension: "mlmodelc")!
    }

    /// The default Spam Classifier model's file URL.
    private static let defaultModelURL = urlOfModelInThisBundle

    /// The permanent location of the updated Spam Classifier model.
    private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")

    /// The temporary location of the updated Spam Classifier model.
    private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

    // MARK: - Public Type Methods

    static func predictLabelFor(_ value: String) throws -> (prediction: String?, confidence: String) {
        let spam = try NLModel(mlModel: liveModel.model)
        let result = spam.predictedLabel(for: value)
        let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
        return (result, String(format: "%.2f", confidence * 100))
    }

    static func updateModel(newEntryText: String, spam: TextClassifier) throws {
        guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            fatalError("Could not find model in bundle")
        }

        // Create a feature provider for the new text sample.
        let featureProvider = try MLDictionaryFeatureProvider(dictionary: ["label": MLFeatureValue(string: newEntryText), "text": MLFeatureValue(string: spam.rawValue)])
        let batchProvider = MLArrayBatchProvider(array: [featureProvider])

        let updateTask = try MLUpdateTask(forModelAt: modelURL, trainingData: batchProvider, configuration: nil, completionHandler: { context in
            let updatedModel = context.model
            let fileManager = FileManager.default
            do {
                // Create a directory for the updated model.
                try fileManager.createDirectory(at: tempUpdatedModelURL,
                                                withIntermediateDirectories: true,
                                                attributes: nil)

                // Save the updated model to a temporary filename.
                try updatedModel.write(to: tempUpdatedModelURL)

                // Replace any previously updated model with this one.
                _ = try fileManager.replaceItemAt(updatedModelURL,
                                                  withItemAt: tempUpdatedModelURL)

                loadUpdatedModel()
                print("Updated model saved to:\n\t\(updatedModelURL)")
            } catch let error {
                print("Could not save updated model to the file system: \(error)")
                return
            }
        })
        updateTask.resume()
    }

    /// Loads the updated Spam Classifier, if available.
    /// - Tag: LoadUpdatedModel
    private static func loadUpdatedModel() {
        guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
            // The updated model is not present at its designated path.
            return
        }

        // Create an instance of the updated model.
        guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
            return
        }

        // Use this updated model to make predictions in the future.
        updatedSpamClassifier = model
    }
}
Hello everyone,
I am trying to train with Create ML version 6.0 Beta (146.1), using the Image Feature Print v2 feature extractor.
I am using 100K images, about 4 GB in total, on my M3 Max with 48 GB (macOS 15.0 Beta (24A5279h)).
The images seem to be read and visualized correctly in the Data Source section (no images with corrupted data appear to be there).
When I start the training, everything is fine for the first 6k–7k pictures; then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't have much experience with it.
Could you help me understand what the problem could be?
Thanks a lot
I am developing an iPhone application. When I start testing in a simulator or on an actual device, I get the following message depending on the model. I don't see any problem with the actual operation of the app, but I don't know how to resolve this error.
#FactoryInstall Unable to query results, error: 5
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder
I have tried to resolve the problem by following the steps below, but it hasn’t had any effect on the error. Is it OK to leave this error as it is? If you know how to resolve it, please let me know.
Clear the Xcode cache:
Go to the path of the DerivedData folder and delete all of its contents.
Clean Build folder:
Select "Product" -> "Clean Build Folder" from the Xcode menu.
Restart:
Restart Xcode and the simulator.
Software update:
Make sure you are using the latest version of Xcode and macOS.
Reinstallation:
Uninstall Xcode once and reinstall it.
Reset the simulator
I want to implement an AppIntent in my framework, which was created with Objective-C as its language. When I add a new Swift file to the framework, implement the AppIntent there, and then import the framework into my project, the AppIntent doesn't work. However, I can use an AppIntent directly in the project.
I want to implement the AppIntent in my Objective-C framework because we need to provide a fast path to a certain function.
I created a struct conforming to AppIntent and a projectName-Bridging-Header.h in my framework, just like I did in my project. The framework approach doesn't seem to work; I can't find the intent in Shortcuts.
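For what it's worth, here is a minimal sketch of the framework-based setup I am attempting, assuming the intent lives in a Swift file inside the framework and, as I understand it, must be advertised to the app through AppIntentsPackage (all names are placeholders):

// In the framework target (App Intents types must be defined in Swift).
import AppIntents

public struct FastPassIntent: AppIntent {
    public static var title: LocalizedStringResource = "Fast Pass"

    public init() {}

    public func perform() async throws -> some IntentResult {
        // Call into the framework's Objective-C code here.
        return .result()
    }
}

// Also in the framework: a package type that advertises the intents it contains.
public struct MyFrameworkPackage: AppIntentsPackage {}

// In the app target: include the framework's package so its intents are discovered.
struct MyAppPackage: AppIntentsPackage {
    static var includedPackages: [any AppIntentsPackage.Type] {
        [MyFrameworkPackage.self]
    }
}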