How do I directly input landmarks to the activity classifier rather than inputting an image/video?
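One possible route (a sketch based on the pose-based action classifier workflow, not an official answer) is to bypass the video pipeline entirely: run Vision's body-pose request on your own source, then stack the per-frame keypoint arrays into the multi-array window the classifier consumes. The window length and the model's input name are assumptions to verify against your model's metadata in Xcode.
import Vision
import CoreML
// Build one prediction window from a sequence of body-pose observations.
// Each keypointsMultiArray() call yields a per-frame array of (x, y, confidence) per joint.
func poseWindow(from observations: [VNHumanBodyPoseObservation]) throws -> MLMultiArray {
    let frames = try observations.map { try $0.keypointsMultiArray() }
    // Stack the frames along axis 0 to form the window the action classifier expects.
    return MLMultiArray(concatenating: frames, axis: 0, dataType: .float32)
}
The resulting MLMultiArray can then be passed to the model's prediction call in place of the video-based feature extraction.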
Hi everyone, I attempted to use MultivariateLinearRegressor from the Create ML Components framework to fit some multi-dimensional data linearly (4 input dimensions in my example), aiming to obtain multi-dimensional output points (2 dimensions in my example). However, when I fit the model with my training data and test it, it appears that only the first element of my training data is used for training, regardless of whether I pass a CreateMLComponents.AnnotatedBatch or an array of CreateMLComponents.AnnotatedFeature<CoreML.MLShapedArray<Double>, CoreML.MLShapedArray<Double>> values as input.
let sourceMatrix: [[Double]] = [
[0,0.1,0.2,0.3],
[0.5,0.2,0.6,0.2]
]
let referenceMatrix: [[Double]] = [
[0.2,0.7],
[0.9,0.1]
]
Here is test code to exercise the function (iOS 18.0 beta, Xcode 16.0 beta).
In this example I train the model to learn 2 multi-dimensional points (4 dimensions each), and here are the results of the predictions:
▿ 2 elements
▿ 0 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.20000000298023224 0.699999988079071
▿ _storage : <StandardStorage<Double>: 0x600002ad8270>
▿ annotation : 0.2 0.7
▿ _storage : <StandardStorage<Double>: 0x600002b30600>
▿ 1 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
▿ prediction : 0.23158159852027893 0.9509953260421753
▿ _storage : <StandardStorage<Double>: 0x600002ad8c90>
▿ annotation : 0.9 0.1
▿ _storage : <StandardStorage<Double>: 0x600002b55f20>
0.23158159852027893 0.9509953260421753 looks essentially random and should be much closer to [0.9, 0.1].
Here is the test code (I run it on "My Mac (Designed for iPad)"):
ContentView.swift
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import CoreGraphics
import Accelerate
import Foundation
import CoreML
import CreateML
import CreateMLComponents
import SwiftUI // needed for the ContentView below
func createMLShapedArray(from array: [Double], shape: [Int]) -> MLShapedArray<Double> {
return MLShapedArray<Double>(scalars: array, shape: shape)
}
func calculateTransformationMatrixWithNonlinearity(sourceRGB: [[Double]], referenceRGB: [[Double]], degree: Int = 3) async throws -> MultivariateLinearRegressor<Double>.Model {
let annotatedFeatures2 = zip(sourceRGB, referenceRGB).map { (featureArray, targetArray) -> AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>> in
let featureMLShapedArray = createMLShapedArray(from: featureArray, shape: [featureArray.count])
let targetMLShapedArray = createMLShapedArray(from: targetArray, shape: [targetArray.count])
return AnnotatedFeature(feature: featureMLShapedArray, annotation: targetMLShapedArray)
}
// Flatten sourceRGB into a single-dimensional array
var flattenedArray = sourceRGB.flatMap { $0 }
let featuresMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 4])
flattenedArray = referenceRGB.flatMap { $0 }
let targetMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 2])
// Create AnnotatedFeature instances
/* let annotatedFeatures2: [AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] = [
AnnotatedFeature(feature: featuresMLShapedArray, annotation: targetMLShapedArray)
]*/
let annotatedBatch = AnnotatedBatch(features: featuresMLShapedArray, annotations: targetMLShapedArray)
var regressor = MultivariateLinearRegressor<Double>()
regressor.configuration.learningRate = 0.1
regressor.configuration.maximumIterationCount = 5000
regressor.configuration.batchSize = 2
let model = try await regressor.fitted(to: annotatedBatch, validateOn: nil)
//var model = try await regressor.fitted(to: annotatedFeatures2)
// Proceed to prediction once the model is fitted
let predictions = try await model.prediction(from: annotatedFeatures2)
// Process or use the predictions
print(predictions)
print("Predictions:", predictions)
return model
}
struct ContentView: View {
var body: some View {
VStack {}
.onAppear {
Task {
do {
let sourceMatrix: [[Double]] = [
[0,0.1,0.2,0.3],
[0.5,0.2,0.6,0.2]
]
let referenceMatrix: [[Double]] = [
[0.2,0.7],
[0.9,0.1]
]
let model = try await calculateTransformationMatrixWithNonlinearity(sourceRGB: sourceMatrix, referenceRGB: referenceMatrix, degree: 2)
print("Model fitted successfully:", model)
} catch {
print("Error:", error)
}
}
}
}
}
I've created a text classification project and selected the BERT algorithm with 100 iterations for a JSON file. The JSON file is valid, but training always cancels at iteration 37…
Because the tool does not provide any cancellation reason, I have no clue why it happens. Can I check the reason somehow? Or does anyone know possible reasons or solutions for this?
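Since the Create ML app doesn't report a cancellation reason, one workaround (an assumption on my part, not an official diagnostic path) is to train the same JSON programmatically with the CreateML framework in a macOS playground or command-line target, where a failure surfaces as a thrown Swift error you can inspect. A rough sketch with default parameters; the file path and the "text"/"label" column names are placeholders for your own data:
import CreateML
import Foundation
do {
    // Placeholder path and column names — match them to your training JSON.
    let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/training.json"))
    let classifier = try MLTextClassifier(trainingData: data,
                                          textColumn: "text",
                                          labelColumn: "label")
    print("Training accuracy: \((1.0 - classifier.trainingMetrics.classificationError) * 100)%")
} catch {
    // Whatever stops training is reported here as a concrete error rather than a silent cancel.
    print("Training failed: \(error)")
}
This uses the default algorithm; the point is only that a programmatic run gives you an error to read when training stops.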
Hi everyone,
I'm curious about the capabilities of ARKit's object tracking feature. Specifically, I'd like to know:
Is there a size limit for the objects that can be tracked?
Can ARKit differentiate between two objects with the same shape but different models (e.g., different colors)?
Are objects with single colors and generic shapes (like squares or circles) effectively trackable?
Any insights or examples from your experiences would be greatly appreciated!
Thanks in advance.
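For context, a minimal sketch of the iOS-side API for detecting scanned reference objects (assuming that is the variant of object tracking in question); the resource group name "gallery" is a placeholder for your own scanned-object catalog:
import ARKit
// Enable detection of previously scanned reference objects.
let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = ARReferenceObject.referenceObjects(inGroupNamed: "gallery", bundle: nil) ?? []
// When a scanned object is recognized, ARKit delivers an ARObjectAnchor through the
// session delegate (session(_:didAdd:)) that identifies which reference object matched.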
TimeSeriesClassifier crashes when updating as follows:
I have made a text classifier model, but I want to train it on device too.
When text is classified wrongly, the user can update the model on device.
Code:
//
// SpamClassifierHelper.swift
// LearningML
//
// Created by Himan Dhawan on 7/1/24.
//
import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage
enum TextClassifier : String {
case spam = "spam"
case notASpam = "ham"
}
class SpamClassifierModel {
// MARK: - Private Type Properties
/// The updated Spam Classifier model.
private static var updatedSpamClassifier: SpamClassifier?
/// The default Spam Classifier model.
private static var defaultSpamClassifier: SpamClassifier {
do {
return try SpamClassifier(configuration: .init())
} catch {
fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
}
}
// The Spam Classifier model currently in use.
static var liveModel: SpamClassifier {
updatedSpamClassifier ?? defaultSpamClassifier
}
/// The location of the app's Application Support directory for the user.
private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
in: .userDomainMask).first!
class var urlOfModelInThisBundle : URL {
let bundle = Bundle(for: self)
return bundle.url(forResource: "SpamClassifier", withExtension:"mlmodelc")!
}
/// The default Spam Classifier model's file URL.
private static let defaultModelURL = urlOfModelInThisBundle
/// The permanent location of the updated Spam Classifier model.
private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")
/// The temporary location of the updated Spam Classifier model.
private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")
// MARK: - Public Type Methods
static func predictLabelFor(_ value: String) throws -> (prediction: String?, confidence: String) {
let spam = try NLModel(mlModel: liveModel.model)
let result = spam.predictedLabel(for: value)
let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
return (result,String(format: "%.2f", confidence * 100))
}
static func updateModel(newEntryText : String, spam : TextClassifier) throws {
guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
fatalError("Could not find model in bundle")
}
// Create a feature provider for the new text example. The keys must match the
// model's input ("text") and training target ("label") feature names.
let featureProvider = try MLDictionaryFeatureProvider(dictionary: ["text": MLFeatureValue(string: newEntryText), "label": MLFeatureValue(string: spam.rawValue)])
let batchProvider = MLArrayBatchProvider(array: [featureProvider])
let updateTask = try MLUpdateTask(forModelAt: modelURL, trainingData: batchProvider, configuration: nil, completionHandler: { context in
let updatedModel = context.model
let fileManager = FileManager.default
do {
// Create a directory for the updated model.
try fileManager.createDirectory(at: tempUpdatedModelURL,
withIntermediateDirectories: true,
attributes: nil)
// Save the updated model to temporary filename.
try updatedModel.write(to: tempUpdatedModelURL)
// Replace any previously updated model with this one.
_ = try fileManager.replaceItemAt(updatedModelURL,
withItemAt: tempUpdatedModelURL)
loadUpdatedModel()
print("Updated model saved to:\n\t\(updatedModelURL)")
} catch let error {
print("Could not save updated model to the file system: \(error)")
return
}
})
updateTask.resume()
}
/// Loads the updated Spam Classifier, if available.
/// - Tag: LoadUpdatedModel
private static func loadUpdatedModel() {
guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
// The updated model is not present at its designated path.
return
}
// Create an instance of the updated model.
guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
return
}
// Use this updated model to make predictions in the future.
updatedSpamClassifier = model
}
}
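For completeness, a short usage sketch of the helper above; the sample string is a placeholder, and note that MLUpdateTask only succeeds if the bundled SpamClassifier model was exported as an updatable model:
// Placeholder example text.
let message = "Congratulations, you won a free prize!"
let result = try SpamClassifierModel.predictLabelFor(message)
print("Predicted: \(result.prediction ?? "unknown") (\(result.confidence)%)")
// If the user corrects a wrong prediction, feed the example back for an on-device update.
try SpamClassifierModel.updateModel(newEntryText: message, spam: .spam)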
I have a DataFrame with one column of features of type MLShapedArray and one column of annotations of type Int.
How can I convert them to the correct input type for the TimeSeriesClassifier?
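Not a definitive answer, but a minimal sketch of pairing the two columns once they are extracted as plain Swift arrays; whether TimeSeriesClassifier's fitted(to:) accepts this exact element type depends on its generic constraints, so treat the types below as assumptions to check against the documentation:
import CoreML
import CreateMLComponents
// Hypothetical stand-ins for the DataFrame columns.
let features: [MLShapedArray<Float>] = [
    MLShapedArray(scalars: [0.1, 0.2, 0.3, 0.4], shape: [4]),
    MLShapedArray(scalars: [0.5, 0.6, 0.7, 0.8], shape: [4])
]
let labels: [Int] = [0, 1]
// Pair each feature window with its label.
let annotated = zip(features, labels).map { AnnotatedFeature(feature: $0, annotation: $1) }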
Is it possible to change the folder where the .blob files are stored when training a model?
I'm trying to train a model to track an object, and I'm limited to the extraordinary 256 GB of my 2024 Mac! :(
I'm training an activity classifier with Create ML, and when I add samples to the Preview tab, the length it displays does not match the sample's actual length.
I have set the prediction window size to 15 and the sample rate to 10. The activity is roughly 1.5 seconds long (with a window of 15 samples at 10 Hz, one prediction window spans 15 / 10 = 1.5 s).
When I put a 1.49-second sample into the preview, it says it is 00:00.06 seconds, and when I put a 12.91-second sample into the preview, it says it is 00:00.52 seconds. In both cases the displayed length is roughly 1/25 of the recorded length (1.49 / 0.06 ≈ 25 and 12.91 / 0.52 ≈ 25).
Here is the code I am using to print out sensor data in CSV format:
// Note: requires CoreMotion; motionManager is a CMMotionManager property and startTime is set when recording begins.
if motionManager.isDeviceMotionAvailable {
motionManager.deviceMotionUpdateInterval = 0.1 // 10 Hz, matching the sample rate of 10 above
motionManager.startDeviceMotionUpdates(to: .main) { data, error in
guard let data = data, let startTime = self.startTime else { return }
let timestamp = Date().timeIntervalSince(startTime)
let xAcc = data.userAcceleration.x
let yAcc = data.userAcceleration.y
let zAcc = data.userAcceleration.z
let xRotRate = data.rotationRate.x
let yRotRate = data.rotationRate.y
let zRotRate = data.rotationRate.z
let roll = data.attitude.roll
let pitch = data.attitude.pitch
let yaw = data.attitude.yaw
let row = "\(timestamp),\(xAcc),\(yAcc),\(zAcc),\(xRotRate),\(yRotRate),\(zRotRate),\(roll),\(pitch),\(yaw)"
print(row)
}
}
And here is the data for the 1.49 second sample mentioned above:
For example: we use DockKit for birdwatching, so the field distance and direction are unknown.
Distance = ?
Direction = ?
For example, the rock from which the observation is made. The task is to recognize the number of birds caught in the frame, add a detection frame, and collect statistics.
Question:
What is the maximum number of frames that can be processed with custom object recognition?
If that is not enough, can I do the calculations myself and hand the results to DockKit for fast movement?