Hello. I created a Core ML app for iOS with my machine learning classification model. However, I'm trying to make a macOS version of the app, but the analysis relies heavily on UIKit, which is not available on macOS. How should I apply the image analysis to image files on macOS?
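One cross-platform approach (a sketch, not from the original post; `MyClassifier` stands in for your generated model class): Vision and Core ML are both available on macOS, so the UIKit-dependent image handling can be replaced with CGImage-based input, for example loaded through NSImage:

```swift
import Foundation
import CoreML
import Vision
import AppKit  // macOS-only; provides NSImage

// Sketch: classify an image file on macOS without UIKit.
func classify(fileURL: URL) throws {
    guard let nsImage = NSImage(contentsOf: fileURL),
          let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
        return
    }
    // `MyClassifier` is a placeholder for your generated model class.
    let model = try VNCoreMLModel(for: MyClassifier(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print(top.identifier, top.confidence)
        }
    }
    // VNImageRequestHandler accepts CGImage on both iOS and macOS.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```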
Hi all,
I ordered a mac pro (8-core & 580X) to use CreateML.
The start of the training is flawless. As soon as I pause the training and then want to resume it, I get the message
"Unable to resume training" &
"Archive does not contain an DataTable".
The same problem also occurred on a 16-inch MacBook Pro with M1 Max.
I'm frustrated... what am I doing wrong?
Is there a problem with CreateML?
Thanks for the support in advance.
Hi, I have been following the WWDC21 session "dynamic training on iOS" and have been able to get the training working, with the iterations etc. printed to the console as training progresses.
However, I am unable to retrieve the checkpoints or the result/model once training has completed (or while it is in progress); nothing in the callback fires.
If I try to create a model from the sessionDirectory, it returns nil (even though training has clearly completed).
Can someone help or provide pointers on how to access the results/checkpoints so that I can make an MLModel and use it?
import Combine
import CoreML
import CreateML

var subscriptions = [AnyCancellable]()

let job = try! MLStyleTransfer.train(trainingData: datasource,
                                     parameters: trainingParameters,
                                     sessionParameters: sessionParameters)

job.result.sink { result in
    print("result ", result)
} receiveValue: { model in
    try? model.write(to: sessionDirectory)
    let compiledURL = try? MLModel.compileModel(at: sessionDirectory)
    let mlModel = try? MLModel(contentsOf: compiledURL!)
}
.store(in: &subscriptions)
This also does not work:
job.checkpoints.sink { checkpoint in
    // Process checkpoint
    let model = MLStyleTransfer(trainingData: checkpoint)
}
.store(in: &subscriptions)
This is the printout in the console:
Using CPU to create model
+--------------+--------------+--------------+--------------+--------------+
| Iteration | Total Loss | Style Loss | Content Loss | Elapsed Time |
+--------------+--------------+--------------+--------------+--------------+
| 1 | 64.9218 | 54.9499 | 9.97187 | 3.92s |
2022-02-20 15:14:37.056251+0000 DynamicStyle[81737:9175431] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
| 2 | 61.7283 | 24.6832 | 8.30343 | 9.87s |
| 3 | 59.5098 | 27.7834 | 11.7603 | 16.19s |
| 4 | 56.2737 | 16.163 | 10.985 | 22.35s |
| 5 | 53.0747 | 12.2062 | 12.0783 | 28.08s |
+--------------+--------------+--------------+--------------+--------------+
Any help would be appreciated on how to retrieve models.
Thanks
My activity classifier is used in tennis sessions, where there are necessarily multiple people on the court. There is also a decent chance other courts' players will be in the shot, depending on the angle and lens.
For my training data, would it be best to crop out adjacent courts?
For a Create ML activity classifier, I’m classifying “playing” tennis (the points or rallies) and a second class “not playing” to be the negative class. I’m not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since it seems like the average duration for both the “playing” and “not playing” labels.
When choosing this parameter however, I’m wondering if it affects performance, both speed of video processing and accuracy. Would the Vision framework return more results with smaller action durations?
I am using the tabular regression method of Create ML. I see where it prints a couple of metrics, such as root mean squared error on the validation data, but I don't see any way to export the raw fitted numbers (e.g., training predictions), the validation numbers (e.g., validation predictions), or the out-of-sample "testing" numbers (from the testing data set). Is this possible in Create ML directly? The reason this is necessary is that you sometimes want to plot actual versus predicted values and compute other metrics for regressions.
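In case it helps: the concrete CreateML regressors expose a `predictions(from:)` method, so one possible workaround (a sketch; `model`, the file paths, and the column name are placeholders, and the exact column-handling calls may vary by SDK version) is to append the predicted column to a table and write the result out as CSV for plotting elsewhere:

```swift
import Foundation
import CreateML

// Sketch: export actual vs. predicted values from a fitted regressor.
// `model` stands in for an already-trained MLLinearRegressor (or other
// concrete regressor); paths are placeholders.
let testTable = try MLDataTable(contentsOf: URL(fileURLWithPath: "test.csv"))

// predictions(from:) returns a column of predicted values for the table.
let predicted = try model.predictions(from: testTable)

var results = testTable
results.addColumn(predicted, named: "predicted")

// The CSV now contains the actual target column alongside "predicted".
try results.writeCSV(to: URL(fileURLWithPath: "predictions.csv"))
```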
I need to build a model to add to my app and tried following the Apple docs here.
No luck because I get an error that is discussed on this thread on the forum. I'm still not clear on why the error is occurring and can't resolve it.
I wonder whether Create ML inside Playgrounds is still supported at all. I tried using the Create ML app that you can access through the developer tools, but it just crashes my Mac (a 2017 MBP; is it just too much of a brick to use for ML at this point? I should think not, because I've recently built and trained relatively simple models using TensorFlow + Python on this machine, and the classifier I'm trying to make now is really simple and doesn't have a huge dataset).
How should I think about video quality (if it's important) when gathering training videos? Does higher video quality of training data make for better predictions, or should it more closely match the common use case (1080p I suppose, thinking about iPhones broadly)?
I want to detect an image of a dart target (https://commons.wikimedia.org/wiki/File:WA_80_cm_archery_target.svg) in my iOS app.
For that I am creating an object detector with CreateML. I am using the Transfer Learning algorithm and 114 annotated images, the validation data is set to auto.
After 2000 iterations I got the following stats: 96% Training and 0% Validation.
As I understand it, the percentages are the IoU 50% scores (the percentage of predicted bounding boxes whose intersection-over-union with the ground truth is over 50%).
If the validation data is automatically chosen from the set of images, how can its score be 0%?
One part of my app uses CreateML but I still need to test the UI on other screen sizes with the simulator. I tried using this condition but still get the same error.
#if os(iOS)
import CreateML // CreateML is not available when building for iOS Simulator. Consider using `#if os(iOS)` to conditionally import this framework when building for iOS.
#endif
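For reference (not from the original post): `#if os(iOS)` also evaluates to true when building for the iOS Simulator, which is why the conditional above doesn't help. A gate that additionally excludes the simulator environment may work:

```swift
// `#if os(iOS)` is true for simulator builds too; checking the
// target environment excludes them, so CreateML is only imported
// when building for a physical iOS device.
#if os(iOS) && !targetEnvironment(simulator)
import CreateML
#endif
```

Any code that uses CreateML symbols would need the same guard, with a stub or alternate path for simulator builds.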
I just got an app feature working where the user imports a video file, each frame is fed to a custom action classifier, and then only frames with a certain action classified are exported.
However, I'm finding that testing a one hour 4K video at 60 FPS is taking an unreasonably long time - it's been processing for 7 hours now on a MacBook Pro with M1 Max running the Mac Catalyst app. Are there any techniques or general guidance that would help with improving performance? As much as possible I'd like to preserve the input video quality, especially frame rate. One hour length for the video is expected, as it's of a tennis session (could be anywhere from 10 minutes to a couple hours). I made the body pose action classifier with Create ML.
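One idea (a sketch, not from the original post; `classify` is a hypothetical per-frame hook): run the expensive pose/classification work only on every Nth frame and carry the last label forward for the frames in between. This cuts the Vision workload roughly by N while leaving the exported video's resolution and frame rate untouched, since only the labeling is subsampled:

```swift
import AVFoundation

// Sketch: classify only every `stride` frames, reusing the last
// label for skipped frames. At 60 fps with stride 6, classification
// runs ~10 times per second.
final class StridedClassifier {
    private let stride = 6
    private var frameIndex = 0
    private var lastLabel = "not playing"

    func label(for sampleBuffer: CMSampleBuffer,
               classify: (CMSampleBuffer) throws -> String) rethrows -> String {
        defer { frameIndex += 1 }
        if frameIndex % stride == 0 {
            lastLabel = try classify(sampleBuffer)
        }
        return lastLabel
    }
}
```

Whether this is acceptable depends on how quickly the action changes; tennis points last seconds, so a fraction-of-a-second labeling granularity may be fine.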
Hi, I'm new to Create ML. I was trying to create a simple sentiment analysis model. My input file is cleaned JSON data from Apple reviews with two fields, "label" (pos/neg) and "text" (the content with sentiment). Training runs successfully. However, when I try to perform a test with similar data with the same fields in either JSON or CSV format, I continue to get the following error.
Testing Error: Expected directory at URL at "filename.json"
I've tried using different data sets but always receive the same error.
Does anyone have any idea what I'm doing wrong?
After creating a custom action classifier in Create ML, previewing it (see the bottom of the page) with an input video shows the label associated with a segment of the video. What would be a good way to store the duration for a given label, say, each CMTimeRange of a segment of video frames that are classified as containing "Jumping Jacks"?
I previously found that storing time ranges of trajectory results was convenient, since each VNTrajectoryObservation vended by Apple had an associated CMTimeRange.
However, using my custom action classifier instead, each VNObservation result's CMTimeRange has a duration value that's always 0.
func completionHandler(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNHumanBodyPoseObservation] else {
        return
    }
    if let result = results.first {
        storeObservation(result)
    }
    do {
        for result in results where try self.getLastTennisActionType(from: [result]) == .playing {
            var fileRelativeTimeRange = result.timeRange
            fileRelativeTimeRange.start = fileRelativeTimeRange.start - self.assetWriterStartTime
            self.timeRangesOfInterest[Int(fileRelativeTimeRange.start.seconds)] = fileRelativeTimeRange
        }
    } catch {
        print("Unable to perform the request: \(error.localizedDescription).")
    }
}
In this case I'm interested in frames with the label "Playing" and successfully classify them, but I'm not sure where to go from here to track the duration of video segments with consecutive frames that have that label.
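One way to get durations (a sketch, not from the original post): since per-observation durations are 0, build the ranges yourself by accumulating frame timestamps while the label stays "Playing" and closing out a CMTimeRange when the label changes. This assumes you can supply a (label, presentation time, frame duration) triple per frame:

```swift
import CoreMedia

// Sketch: coalesce consecutive same-label frames into CMTimeRanges.
struct SegmentBuilder {
    private(set) var segments: [CMTimeRange] = []
    private var currentStart: CMTime?
    private var currentEnd: CMTime = .zero

    mutating func add(label: String, time: CMTime, duration: CMTime) {
        if label == "Playing" {
            if currentStart == nil { currentStart = time }  // segment opens
            currentEnd = time + duration                    // segment extends
        } else {
            flush()                                         // segment closes
        }
    }

    // Call once more after the last frame to emit a trailing segment.
    mutating func flush() {
        if let start = currentStart {
            segments.append(CMTimeRange(start: start, end: currentEnd))
            currentStart = nil
        }
    }
}
```

Each element of `segments` is then a nonzero-duration range covering one run of consecutive "Playing" frames.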
I followed Apple's guidance in their articles Creating an Action Classifier Model, Gathering Training Videos for an Action Classifier, and Building an Action Classifier Data Source. With this Core ML model file now imported in Xcode, how do I use it to classify video frames?
For each video frame I call
do {
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
    try requestHandler.perform([self.detectHumanBodyPoseRequest])
} catch {
    print("Unable to perform the request: \(error.localizedDescription).")
}
But it's unclear to me how to use the results of the VNDetectHumanBodyPoseRequest, which come back as the type [VNHumanBodyPoseObservation]?. How would I feed the results into my custom classifier, which has an automatically generated model class TennisActionClassifier.swift? The classifier is for making predictions on the frame's body poses, labeling the actions as either playing a rally/point or not playing.
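The general pattern (a sketch under assumptions: the window length must match the action duration and frame rate the model was trained with, and the generated input name, `poses` here as in Apple's action-classifier samples, may differ in your project) is to collect each observation's keypoints into a sliding window, merge the window into one MLMultiArray, and pass that to the generated classifier class:

```swift
import Vision
import CoreML

// Sketch: accumulate per-frame pose keypoints, then predict on a
// full window. 60 frames = 2 s at 30 fps; adjust to your model's
// training parameters.
var poseWindow: [MLMultiArray] = []
let windowLength = 60

func handle(observation: VNHumanBodyPoseObservation) throws {
    // Each observation converts to a keypoints multi-array.
    poseWindow.append(try observation.keypointsMultiArray())
    guard poseWindow.count == windowLength else { return }

    // Merge the window along the time axis.
    let input = MLMultiArray(concatenating: poseWindow,
                             axis: 0,
                             dataType: .float32)
    let classifier = try TennisActionClassifier(configuration: MLModelConfiguration())
    let prediction = try classifier.prediction(poses: input)
    print(prediction.label)
    poseWindow.removeAll()
}
```

In practice you would slide the window (drop the oldest frame) rather than clear it, so predictions overlap.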
My goal is to mark any tennis video's timestamps of both the start of each rally/point and the end of each rally/point. I tried trajectory detection, but the "end time" is when the ball bounces rather than when the rally/point ends. I'm not quite sure what direction to go from here to improve on this. Would action classification of body poses in each frame (two classes, "playing" and "not playing") be the best way to split the video into segments? A different technique?
Hello everybody,
I am trying to run inference on a CoreML Model created by me using CreateML. I am following the sample code provided by Apple on the CoreML documentation page and every time I try to classify an image I get this error: "Could not create Espresso context".
Has this ever happened to anyone? How did you solve it?
Here is my code:
import Foundation
import CoreML
import Vision
import UIKit
import ImageIO

final class ButterflyClassification {
    var classificationResult: Result?

    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassification(for: request, error: error)
            })
        } catch {
            fatalError("Failed to load model.")
        }
    }()

    func processClassification(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to classify image.")
                return
            }
            let classifications = results as! [VNClassificationObservation]
            if classifications.isEmpty {
                print("No classification was provided.")
            } else {
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier,
                                                   confidence: Double(firstClassification.confidence))
            }
        }
    }

    func classifyButterfly(image: UIImage) -> Result? {
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        return classificationResult
    }
}
Thank you for your help!
After collecting data, I wanted to create an ML model. I have done this before, but for some reason I was getting the error:
error: Couldn't lookup symbols:
CreateML.MLDataTable.init(contentsOf: Foundation.URL, options: CreateML.MLDataTable.ParsingOptions) throws -> CreateML.MLDataTable
So I went to test a working example created by Apple, using this link: https://developer.apple.com/documentation/createml/creating_a_model_from_tabular_data
After running this test with no data changed, I still get the same error logged. I don't know if I'm doing something wrong. Any advice would be greatly appreciated.
Beginner at using Create ML, so please forgive me if this question isn't asked correctly. As I understand it, the image classification projects are meant to detect certain objects in an image (giraffe vs. elephant). My question is: is there a way to use image classification to "score" or bin images that share qualities with my training dataset?
As an example, let's say I want to find a perfect square inside another square (like a white border around an image). What are the things that could make a "non-perfect" image? Maybe one of the corners of the square is rounded, maybe a corner is not 90 degrees, maybe the inner square is not perfectly centered within the white frame/border.
Now let's say I want to take a picture of this object and have my app tell me how close this image is to a perfect square inside a square, rating it 1-5.
My thought was to set up my training data with a set of images that show perfect squares in a "rated 5" folder, a set of slightly imperfect squares in a "rated 4" folder, a set of even less perfect squares in a "rated 3" folder, etc.
Long-winded question, I apologize; will the Create ML image classifier be able to look at my image for those qualities that make it a 3, 4, or 5, or will it only look at the content of the square itself and detect giraffe, race car, boat, person? I'm looking, again, for a metric of "perfectness" regardless of what the content is within the inner square. Am I on the right train of thought, or is there a better approach to take?
I am having issues with my ML model outputting the correct prediction. When taking my data and creating an MLMultiArray, is it possible to view the MLMultiArray with my data so that I can debug and inspect it?
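Yes, the values are inspectable: MLMultiArray exposes `shape`, `count`, and a linear NSNumber subscript. A minimal debugging helper (a sketch; `array` stands in for whatever multi-array you feed the model):

```swift
import CoreML

// Sketch: dump an MLMultiArray's shape and flattened values so the
// input to the model can be eyeballed before prediction.
func dump(_ array: MLMultiArray) {
    print("shape:", array.shape, "count:", array.count)
    for i in 0..<array.count {
        // The linear subscript walks the array in row-major order.
        print(i, array[i].doubleValue)
    }
}
```

Comparing this printout against the raw source data is usually enough to catch ordering or scaling mistakes in the conversion step.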
Trying to use the UI version of Create ML in a macOS Playground.
The import always fails. Is it no longer available? If it is available in the development system, why is it such a travail to use?