I'm trying to use the Spatial model to perform Object Tracking on a .usdz file that I create.
After loading the file, which I can view correctly in the console, I start the training.
Initially, I notice that the disk usage on my machine increases. After several GB, the usage stops, but the training progress remains at 0.00% for hours with the message "About 8hr."
How can I understand what the issue is? Has anyone else experienced the same problem?
Thanks
Diego
Hi,
I'm training a model that should detect a forehand and a backhand stroke.
The data looks like this:
activity,timestamp,Acceleration_X,Acceleration_Y,Acceleration_Z,Rotation_X,Rotation_Y,Rotation_Z
forehand,0.0,0.08,-0.08,0.03,0.18,0.26,0.32
I can load it in Create ML, but it shows the acceleration and rotation x, y, z values as separate Doubles and not as one feature.
What do I have to change to make this work?
Thank you
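For what it's worth: the Create ML app reads each numeric CSV column as its own Double, and the columns only become a single motion feature when they are selected together as features. Below is a rough programmatic sketch with the CreateML framework, assuming an MLActivityClassifier initializer that takes featureColumns; the file paths are placeholders, and the exact parameter list (including any recording/session column the classifier may require) should be checked against the current documentation.

import CreateML
import Foundation

// Hypothetical path; point this at the real CSV.
let csvURL = URL(fileURLWithPath: "/path/to/strokes.csv")
let table = try MLDataTable(contentsOf: csvURL)

// Assumption: this initializer and its parameter names match the current
// CreateML docs. The key idea is that all six sensor columns are passed
// together as featureColumns, which is what groups them into one motion
// feature instead of six independent Doubles.
let classifier = try MLActivityClassifier(
    trainingData: table,
    featureColumns: ["Acceleration_X", "Acceleration_Y", "Acceleration_Z",
                     "Rotation_X", "Rotation_Y", "Rotation_Z"],
    labelColumn: "activity"
)
try classifier.write(to: URL(fileURLWithPath: "/path/to/StrokeClassifier.mlmodel"))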
Hi,
I'm working on training a Create ML object detector model, and I've run into an issue that has me stumped: when I reach somewhere between 100,000 and 150,000 iterations, my model stops training and errors out.
More Details:
Create ML gives me an error prompt that says it is unable to train the model and that I should delete the model source and start from the beginning, or duplicate the model and start from the beginning (slightly paraphrased).
I see the following error in the Create ML console (my user name and UUIDs have been redacted):
Unable to load model from file:///Users/<my user name>/Library/Caches/com.apple.dt.createml/projects/<UUID HERE>/sessions/checkpoint.sessions/<UUID Here>//training-000132500.checkpoint: Cannot open file:///Users/<my user name>/Library/Caches/com.apple.dt.createml/projects/<UUID Here>/sessions/checkpoint.sessions/<uuid here> //training-000132500.checkpoint/dir_archive.ini for read. Cannot open /Users/<my username>/Library/Caches/com.apple.dt.createml/projects/<UUID>/sessions/checkpoint.sessions/<UUID>//training-000132500.checkpoint/dir_archive.ini for reading
I've gone into Caches in my Library directory, and in Finder I can see each piece of the file path UNTIL the //training-00132500 piece, so I can at least confirm that Create ML appears to be unable to create or open the file it needs for this training session.
Technology Used:
Xcode 16
Apple M1 Pro
macOS 14.6.1 (23G93)
I've also verified that Xcode and terminal have full disk permissions in my system preferences - I didn't see an option to add CreateML to this list.
I've also ensured that my createML project and its data sources are not in iCloud and are indeed local on my desktop.
Lastly, I made more space on my machine, so I should have a little over 1 TB of space.
Has anybody experienced this before? Any advice? I am completely blocked, so I hope somebody can help shed some light on this issue!
Thanks!
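In case it helps while you're blocked: if the Create ML app keeps losing its checkpoint under ~/Library/Caches, one workaround sketch is to train through the CreateML framework instead, where MLTrainingSessionParameters lets you place checkpoints in a directory you control and resume a failed session. This assumes MLObjectDetector exposes the same asynchronous train(trainingData:sessionParameters:) entry point that MLActionClassifier does (shown in another post below), and the data source case here is a hypothetical placeholder to adapt.

import CreateML
import Foundation

// Keep checkpoints somewhere visible instead of ~/Library/Caches.
let sessionDirectory = URL(fileURLWithPath: "\(NSHomeDirectory())/DetectorSession")
try FileManager.default.createDirectory(at: sessionDirectory, withIntermediateDirectories: true)

let sessionParameters = MLTrainingSessionParameters(
    sessionDirectory: sessionDirectory,
    reportInterval: 500,
    checkpointInterval: 2_500,
    iterations: 150_000
)

// Hypothetical data source; substitute the MLObjectDetector.DataSource case
// that matches your annotation layout.
let trainingData = MLObjectDetector.DataSource.directoryWithImagesAndJsonAnnotation(
    at: URL(fileURLWithPath: "\(NSHomeDirectory())/Desktop/detector-data")
)

// Assumption: an async train(...) overload like MLActionClassifier's exists here.
let job = try MLObjectDetector.train(trainingData: trainingData,
                                     sessionParameters: sessionParameters)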
I'm trying to generate a JSON file for my training data. I tried writing it manually first, then tried using Roboflow, and I still get the same error:
_annotations.createml.json file contains field "Index 0" that is not of type String.
The JSON format provided by Roboflow was:
[{"image":"menu1_jpg.rf.44dfacc93487d5049ed82952b44c81f7.jpg","annotations":[{"label":"100","coordinates":{"x":497,"y":431.5,"width":32,"height":10}}]}]
Any help would be greatly appreciated.
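Since the error names a field at index 0 with the wrong type, one way to narrow it down is to round-trip the file through Codable types that mirror the Create ML object detection format shown above; this is just a validation sketch, and the struct names plus the file path are my own placeholders.

import Foundation

// Types mirroring the Create ML object detection annotation format.
struct Coordinates: Codable {
    let x: Double
    let y: Double
    let width: Double
    let height: Double
}
struct Annotation: Codable {
    let label: String          // the label must decode as a String
    let coordinates: Coordinates
}
struct AnnotatedImage: Codable {
    let image: String
    let annotations: [Annotation]
}

// Placeholder path; point this at the annotations file Create ML rejects.
let url = URL(fileURLWithPath: "/path/to/_annotations.createml.json")
do {
    let data = try Data(contentsOf: url)
    let images = try JSONDecoder().decode([AnnotatedImage].self, from: data)
    print("Decoded \(images.count) annotated images successfully")
} catch {
    // A DecodingError here names the exact index and key that fail,
    // which is more specific than the Create ML import message.
    print("Validation failed:", error)
}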
Hi folks, I'm trying to import data to train a model and am getting the above error. I'm using the latest Xcode, have double-checked the formatting in the annotations file, and used jpgrepair to remove any corruption from the data files. The next step is to try a different dataset, but is this a known error? (Or am I doing something obviously wrong?)
2019 Intel Mac, Xcode 15.4, macOS Sonoma 14.1.1
Thanks
I am getting this error again and again, even after reinstalling:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 439, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
  Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
I can successfully train an ActionClassifier using CreateML. However, I get crashes when I attempt to do the same asynchronously.
The model parameters and training data sources are the same in both cases:
let modelParameters = MLActionClassifier.ModelParameters(validation: validationDataSet, batchSize: 5, maximumIterations: 10, predictionWindowSize: 120, targetFrameRate: 30)
let trainingDataSource = MLActionClassifier.DataSource.directoryWithVideosAndAnnotation(at: myStudyParticipantURLFinal, annotationFile: documentURLFinal, videoColumn: "file", labelColumn: "category", startTimeColumn: "startTime", endTimeColumn: "endTime")
The only thing I add to attempt asynchronous training is sessionParameters:
let sessionDirectory = URL(fileURLWithPath: "\(NSHomeDirectory())/test")
// Session parameters can be provided to the `train` method.
let sessionParameters = MLTrainingSessionParameters(
    sessionDirectory: sessionDirectory,
    reportInterval: 10,
    checkpointInterval: 100,
    iterations: 10
)
These are passed to the final train method:
let trainJob = try MLActionClassifier.train(trainingData: trainingDataSource, parameters: modelParameters, sessionParameters: sessionParameters)
The job crashes saying it cannot find plist files. I notice that only one plist file is written: meta.plist.
It seems a parameters.plist should also be written, but it is not there.
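For reference, a minimal sketch of how I'd consume the returned job, reusing the modelParameters, trainingDataSource, and sessionParameters from above; the MLJob result is a Combine publisher, and creating the session directory up front is an assumption on my part rather than a documented requirement.

import Combine
import CreateML
import Foundation

var subscriptions = Set<AnyCancellable>()

// Assumption: make sure the session directory exists before training starts.
let sessionDirectory = URL(fileURLWithPath: "\(NSHomeDirectory())/test")
try FileManager.default.createDirectory(at: sessionDirectory, withIntermediateDirectories: true)

let trainJob = try MLActionClassifier.train(
    trainingData: trainingDataSource,
    parameters: modelParameters,
    sessionParameters: sessionParameters
)

// The job reports its final model (or failure) through a Combine publisher.
trainJob.result
    .sink { completion in
        if case .failure(let error) = completion {
            print("Training failed:", error)
        }
    } receiveValue: { classifier in
        print("Training finished:", classifier)
    }
    .store(in: &subscriptions)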
I dragged a folder containing two subfolders directly into Create ML. One subfolder contains images, and the other contains the labeled data; the number of label files matches the number of image files. However, it shows "Missing data for label dianjiaoyise.jsons. Detailed list of labels missing files: ["dianjiaoyise.jsons"]."
I have created and trained a Hand Pose classifier model and am trying to test it. I noticed that in the WWDC21 session "Classify hand poses and actions with Create ML", the preview window shows a prediction result based on the live preview or imported images. Mine does not. When I try to import pictures or run the live test, there is no result; it's just the wireframe view with nothing under it.
How do I fix this, please?
Thanks.
I'm trying to use the Create ML Spatial template, but an unexpected error occurs within 1-3 minutes. I have tried several times with the same result. Is the Spatial template not available on an M1 Mac?
My development environment is:
Apple M1 Pro
macOS: 15.0
Xcode: 16.0 beta
CreateML: 6.0 beta
We can use the Create ML app to build an object tracking model in Xcode 16, but is it possible to use the CreateML framework as well?
No documentation for Create ML object tracking has been published yet; the latest documentation I can find is for Xcode 15.
https://developer.apple.com/documentation/CreateML?changes=latest_minor
Really appreciate the new object tracking feature, thank you Apple team.
How do I use either of these data sources with MLHandActionClassifier on visionOS?
MLHandActionClassifier.DataSource.labeledKeypointsDataFrame
MLHandActionClassifier.DataSource.labeledKeypointsData
visionOS ARKit hand tracking provides 27 joints with 3D coordinates, which differs from the 21 joints with 2D coordinates that these two data sources mention in their documentation.
Hello,
I'm currently working on TinyML, or ML on the edge, using the Google Colab platform. Having exhausted my free compute units, I'm being prompted to pay. I've been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports capable of connections up to 40 Gb/s. Since I'm primarily using a classification model, extensive GPU usage isn't necessary.
I'm looking for assistance or guidance on utilizing the iPad's processor as an eGPU for my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
How do I directly input landmarks to the activity classifier rather than inputting an image/video?
I've created a text classification project and selected the BERT algorithm with 100 iterations for a JSON file. The JSON file is valid, but training always cancels at iteration 37…
Because the tool does not provide any cancellation reason, I have no clue why this happens. Can I check the reason somehow? Does anyone know possible causes or solutions for this?
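One way to get more diagnostics than the app provides is to run the same training through the CreateML framework in a playground, where failures surface as thrown errors. A rough sketch follows; the file path and the "text"/"label" column names are assumptions about your JSON, and the parameters shown use the defaults rather than BERT, since I'm not certain of the exact algorithm spelling in ModelParameters.

import CreateML
import Foundation

// Placeholder path; point this at the same JSON the Create ML app uses.
let dataURL = URL(fileURLWithPath: "/path/to/training.json")
let table = try MLDataTable(contentsOf: dataURL)

do {
    // Assumption: the JSON rows have "text" and "label" fields.
    let classifier = try MLTextClassifier(
        trainingData: table,
        textColumn: "text",
        labelColumn: "label"
    )
    print("Training accuracy:", (1.0 - classifier.trainingMetrics.classificationError) * 100)
} catch {
    // A thrown error here is usually more specific than the app's silent cancellation.
    print("Training failed:", error)
}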
Hi everyone, I attempted to use the MultivariateLinearRegressor from the Create ML Components framework to fit some multi-dimensional data linearly (4 dimensions in my example). I aim to obtain multi-dimensional output points (2 points in my example). However, when I fit the model with my training data and test it, it appears that only the first element of my training data is used for training, regardless of whether I use CreateMLComponents.AnnotatedBatch or [CreateMLComponents.AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] as input.
let sourceMatrix: [[Double]] = [
[0,0.1,0.2,0.3],
[0.5,0.2,0.6,0.2]
]
let referenceMatrix: [[Double]] = [
[0.2,0.7],
[0.9,0.1]
]
Here is some test code to exercise the function (iOS 18.0 beta, Xcode 16.0 beta).
In this example I train the model to learn 2 multidimensional points (4 dimensions each), and here are the results of the predictions:
▿ 2 elements
▿ 0 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.20000000298023224 0.699999988079071
▿ _storage : <StandardStorage<Double>: 0x600002ad8270>
▿ annotation : 0.2 0.7
▿ _storage : <StandardStorage<Double>: 0x600002b30600>
▿ 1 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.23158159852027893 0.9509953260421753
▿ _storage : <StandardStorage<Double>: 0x600002ad8c90>
▿ annotation : 0.9 0.1
▿ _storage : <StandardStorage<Double>: 0x600002b55f20>
0.23158159852027893 0.9509953260421753 looks essentially random and should be far closer to [0.9, 0.1].
Here is the test code (I run it on "My Mac (Designed for iPad)"):
ContentView.swift
import SwiftUI   // needed for View/VStack; missing from the original listing
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import CoreGraphics
import Accelerate
import Foundation
import CoreML
import CreateML
import CreateMLComponents

func createMLShapedArray(from array: [Double], shape: [Int]) -> MLShapedArray<Double> {
    return MLShapedArray<Double>(scalars: array, shape: shape)
}

func calculateTransformationMatrixWithNonlinearity(sourceRGB: [[Double]], referenceRGB: [[Double]], degree: Int = 3) async throws -> MultivariateLinearRegressor<Double>.Model {
    let annotatedFeatures2 = zip(sourceRGB, referenceRGB).map { (featureArray, targetArray) -> AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>> in
        let featureMLShapedArray = createMLShapedArray(from: featureArray, shape: [featureArray.count])
        let targetMLShapedArray = createMLShapedArray(from: targetArray, shape: [targetArray.count])
        return AnnotatedFeature(feature: featureMLShapedArray, annotation: targetMLShapedArray)
    }

    // Flatten sourceRGB into a single-dimensional array.
    var flattenedArray = sourceRGB.flatMap { $0 }
    let featuresMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 4])
    flattenedArray = referenceRGB.flatMap { $0 }
    let targetMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 2])

    // Create AnnotatedFeature instances
    /* let annotatedFeatures2: [AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] = [
        AnnotatedFeature(feature: featuresMLShapedArray, annotation: targetMLShapedArray)
    ] */

    let annotatedBatch = AnnotatedBatch(features: featuresMLShapedArray, annotations: targetMLShapedArray)

    var regressor = MultivariateLinearRegressor<Double>()
    regressor.configuration.learningRate = 0.1
    regressor.configuration.maximumIterationCount = 5000
    regressor.configuration.batchSize = 2

    let model = try await regressor.fitted(to: annotatedBatch, validateOn: nil)
    // let model = try await regressor.fitted(to: annotatedFeatures2)

    // Proceed to prediction once the model is fitted.
    let predictions = try await model.prediction(from: annotatedFeatures2)
    print("Predictions:", predictions)
    return model
}

struct ContentView: View {
    var body: some View {
        VStack {}
            .onAppear {
                Task {
                    do {
                        let sourceMatrix: [[Double]] = [
                            [0, 0.1, 0.2, 0.3],
                            [0.5, 0.2, 0.6, 0.2]
                        ]
                        let referenceMatrix: [[Double]] = [
                            [0.2, 0.7],
                            [0.9, 0.1]
                        ]
                        let model = try await calculateTransformationMatrixWithNonlinearity(sourceRGB: sourceMatrix, referenceRGB: referenceMatrix, degree: 2)
                        print("Model fitted successfully:", model)
                    } catch {
                        print("Error:", error)
                    }
                }
            }
    }
}
TimeseriesClassifier crashes when updating, as follows:
Hello everyone,
I am trying to train using Create ML version 6.0 beta (146.1) with the Image Feature Print v2 feature extractor.
I am using 100K images, about 4 GB in total, on my M3 Max 48GB (macOS 15.0 beta (24A5279h)).
The images seem to be read and visualized correctly in the Data Source section (no images with corrupted data appear there).
When I start the training, all is fine for the first 6k-7k pictures; then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using Create ML, so I don't have much experience with it.
Could you help me understand what the problem could be?
Thanks a lot
I have made a text classifier model, but I want to train it on device too: when text is classified wrongly, the user can update the model on device.
Code:
//
// SpamClassifierHelper.swift
// LearningML
//
// Created by Himan Dhawan on 7/1/24.
//
import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage

enum TextClassifier: String {
    case spam = "spam"
    case notASpam = "ham"
}

class SpamClassifierModel {
    // MARK: - Private Type Properties
    /// The updated Spam Classifier model.
    private static var updatedSpamClassifier: SpamClassifier?

    /// The default Spam Classifier model.
    private static var defaultSpamClassifier: SpamClassifier {
        do {
            return try SpamClassifier(configuration: .init())
        } catch {
            fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
        }
    }

    // The Spam Classifier model currently in use.
    static var liveModel: SpamClassifier {
        updatedSpamClassifier ?? defaultSpamClassifier
    }

    /// The location of the app's Application Support directory for the user.
    private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
                                                               in: .userDomainMask).first!

    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "SpamClassifier", withExtension: "mlmodelc")!
    }

    /// The default Spam Classifier model's file URL.
    private static let defaultModelURL = urlOfModelInThisBundle
    /// The permanent location of the updated Spam Classifier model.
    private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")
    /// The temporary location of the updated Spam Classifier model.
    private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

    // MARK: - Public Type Methods
    static func predictLabelFor(_ value: String) throws -> (prediction: String?, confidence: String) {
        let spam = try NLModel(mlModel: liveModel.model)
        let result = spam.predictedLabel(for: value)
        let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
        return (result, String(format: "%.2f", confidence * 100))
    }

    static func updateModel(newEntryText: String, spam: TextClassifier) throws {
        guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            fatalError("Could not find model in bundle")
        }
        // Create a feature provider for the new text example. Note: the text goes in
        // the "text" feature and the label in the "label" feature (they were swapped
        // in the original listing).
        let featureProvider = try MLDictionaryFeatureProvider(dictionary: [
            "text": MLFeatureValue(string: newEntryText),
            "label": MLFeatureValue(string: spam.rawValue)
        ])
        let batchProvider = MLArrayBatchProvider(array: [featureProvider])
        let updateTask = try MLUpdateTask(forModelAt: modelURL, trainingData: batchProvider, configuration: nil, completionHandler: { context in
            let updatedModel = context.model
            let fileManager = FileManager.default
            do {
                // Create a directory for the updated model.
                try fileManager.createDirectory(at: tempUpdatedModelURL,
                                                withIntermediateDirectories: true,
                                                attributes: nil)
                // Save the updated model to a temporary filename.
                try updatedModel.write(to: tempUpdatedModelURL)
                // Replace any previously updated model with this one.
                _ = try fileManager.replaceItemAt(updatedModelURL,
                                                  withItemAt: tempUpdatedModelURL)
                loadUpdatedModel()
                print("Updated model saved to:\n\t\(updatedModelURL)")
            } catch let error {
                print("Could not save updated model to the file system: \(error)")
                return
            }
        })
        updateTask.resume()
    }

    /// Loads the updated Spam Classifier, if available.
    /// - Tag: LoadUpdatedModel
    private static func loadUpdatedModel() {
        guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
            // The updated model is not present at its designated path.
            return
        }
        // Create an instance of the updated model.
        guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
            return
        }
        // Use this updated model to make predictions in the future.
        updatedSpamClassifier = model
    }
}
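A short usage sketch for the class above, assuming the model's update features really are named "text" and "label" (worth verifying against the model's update parameters in Xcode) and that the .mlmodelc in the bundle is marked updatable:

// Classify a message with whichever model is live (default or personalized).
let result = try SpamClassifierModel.predictLabelFor("Win a free iPhone now!")
print("Predicted:", result.prediction ?? "unknown", "confidence:", result.confidence)

// If the user corrects the prediction, fold the example back into the model.
try SpamClassifierModel.updateModel(newEntryText: "Win a free iPhone now!", spam: .spam)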
I have a DataFrame with one column of features of type MLShapedArray and one column of annotations of type Int.
How can I convert them to the correct input type for the TimeseriesClassifier?
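Here is a sketch of one way this conversion might look, assuming the classifier consumes a sequence of AnnotatedFeature values the way other CreateMLComponents estimators do; the column names, and the AnnotatedFeature pairing itself, are assumptions to verify against the TimeseriesClassifier documentation.

import CoreML
import CreateMLComponents
import TabularData

// Hypothetical column names; match them to your DataFrame.
func makeAnnotatedFeatures(from dataFrame: DataFrame) -> [AnnotatedFeature<MLShapedArray<Double>, Int>] {
    let features = dataFrame["features", MLShapedArray<Double>.self]
    let labels = dataFrame["annotations", Int.self]
    // Pair each feature window with its label, skipping rows with missing values.
    return zip(features, labels).compactMap { feature, label -> AnnotatedFeature<MLShapedArray<Double>, Int>? in
        guard let feature, let label else { return nil }
        return AnnotatedFeature(feature: feature, annotation: label)
    }
}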