Create ML

Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML subtopic

Create ML fails to train a text classifier using the BERT transfer learning algorithm
I'm trying to train a text classifier model in Create ML. The Create ML app/framework offers five algorithms. I can successfully train the model with all of the algorithms except the BERT transfer learning option. When I select this algorithm, Create ML simply stops the training process immediately after the initial feature extraction phase (with no reported error).

What I've tried: I tried simplifying the dataset to just a few classes and short examples in case there was a problem with the data. I tried experimenting with the number of iterations and language/script options. I checked Console.app for logged errors and found the following for the Create ML app:

error 10:38:28.385778+0000 Create ML Couldn't read event column - category is invalid. Format string is : <private>
error 10:38:30.902724+0000 Create ML Could not encode the entity <private>. Error: <private>

I'm not sure if these errors are normal or indicative of a problem. I don't know what it means by the "event" column – I don't have an event column in my data and I don't believe there should be one. These errors are not reported when using the other algorithms.

Given that I couldn't get the app to work with BERT, I switched over to the CreateML framework and followed the code samples given in the documentation. (By the way, there's an error in the docs: the line let (trainingData, testingData) = data.stratifiedSplit(on: "text", by: 0.8) should be stratifying on "label", not on "text".) The main chunk of code looks like this:

var parameters = MLTextClassifier.ModelParameters(
    validation: .split(strategy: .automatic),
    algorithm: .transferLearning(.bertEmbedding, revision: 1),
    language: .english
)
parameters.maxIterations = 100
let sentimentClassifier = try MLTextClassifier(
    trainingData: trainingData,
    textColumn: "text",
    labelColumn: "label",
    parameters: parameters
)

Ultimately I want to train a single multilingual model, and I believe that BERT is the best choice for this. The problem is that there doesn't seem to be a way to choose the multilingual Latin script option in the API. In the Create ML app you can theoretically do this by selecting the Latin script with language set to "Automatic", as recommended in this WWDC video (relevant section starts at around 8:02). But, as far as I can tell, ModelParameters only lets you pick a specific language. I presume the framework must provide some way to do this, since the Create ML app uses the framework under the hood, but I can't see a way to do it. Another possibility is that the Create ML app might be misrepresenting the framework – perhaps selecting a specific language in the app doesn't actually make any difference; for example, maybe all Latin languages actually use the same model under the hood and the language selector is just there to guide people to the right choice (but this is just my speculation).

Any help would be much appreciated! If possible, I'd prefer to use the Create ML app if I can get the BERT option to work – is this actually working for anyone? Or, failing that, I want to use the framework to train a multilingual Latin model with BERT, so I'm looking for instructions on how to choose that specific option, or confirmation that I can just choose .english to get the correct Latin multilingual model.

I'm running Xcode 26.2 on Tahoe 21.1 on an M1 Pro MacBook Pro. I have version 6.2 of the Create ML app.
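For reference, here is a minimal end-to-end sketch of the framework route described in the post above. It applies the post's own correction (stratifying on "label" rather than "text"), keeps the stratifiedSplit argument order exactly as quoted from the documentation, and uses placeholder file paths; it is not confirmed to resolve the BERT training stall.

import CreateML
import Foundation

// Hedged sketch based on the code quoted in the post. The JSON path and the
// output URL are placeholders; the split stratifies on "label" as the post
// suggests, with the argument order taken from the documentation it quotes.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "training.json"))
let (trainingData, testingData) = data.stratifiedSplit(on: "label", by: 0.8)

var parameters = MLTextClassifier.ModelParameters(
    validation: .split(strategy: .automatic),
    algorithm: .transferLearning(.bertEmbedding, revision: 1),
    language: .english
)
parameters.maxIterations = 100

let sentimentClassifier = try MLTextClassifier(
    trainingData: trainingData,
    textColumn: "text",
    labelColumn: "label",
    parameters: parameters
)

// Evaluate on the held-out split and write the trained model to disk.
let metrics = sentimentClassifier.evaluation(on: testingData, textColumn: "text", labelColumn: "label")
print(metrics)
try sentimentClassifier.write(to: URL(fileURLWithPath: "SentimentClassifier.mlmodel"))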
Replies: 2 · Boosts: 0 · Views: 171 · Activity: 1d

Huge discrepancy in prediction confidence between PyTorch and Core ML example
I am following this tutorial: https://apple.github.io/coremltools/docs-guides/source/convert-a-torchvision-model-from-pytorch.html I have obtained similar results using the Python code. However, when I view the model in Xcode, the preview's prediction confidence percentage is way off. I suspect it is due to the output of the model, which is already a percentage, and Xcode multiplies it by 100 again, leading to this result. Please give me any feedback on how to fix this, thank you.
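One way to narrow this down is to run a prediction outside the Xcode preview and check whether the converted model already returns confidences in the 0-1 range. A hedged Swift sketch follows; the model and image paths are placeholders, not names from the tutorial.

import CoreML
import Vision

// Hedged sketch: load the converted (compiled) model and print the raw
// classification confidences, which should already be probabilities in 0-1
// before the Xcode preview formats them as percentages.
let modelURL = URL(fileURLWithPath: "MobileNetV2.mlmodelc")   // placeholder
let imageURL = URL(fileURLWithPath: "test.jpg")               // placeholder

let coreMLModel = try MLModel(contentsOf: modelURL)
let visionModel = try VNCoreMLModel(for: coreMLModel)

let request = VNCoreMLRequest(model: visionModel)
let handler = VNImageRequestHandler(url: imageURL)
try handler.perform([request])

if let observations = request.results as? [VNClassificationObservation] {
    for observation in observations.prefix(5) {
        print(observation.identifier, observation.confidence)
    }
}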
Replies: 0 · Boosts: 0 · Views: 173 · Activity: Nov ’25

CreateML Training Object Detection Not using MPS
Hi everyone, I'm currently developing an object detection model that shall identify up to seven classes in an image. While I usually do development with basic Python and the Ultralytics library, I thought I would give Create ML a shot. The experience is actually very nice, except for the fact that the model does not seem to be using the ANE or GPU (MPS) for accelerated training. On https://developer.apple.com/machine-learning/create-ml/ it states: "On-device training: Train models blazingly fast right on your Mac while taking advantage of CPU and GPU." Am I doing something wrong? I'm running the training on an Apple M1 Pro, 16 GB, macOS 26.1 (Tahoe), Xcode 26.1 (Build 17B55). It would be super nice to get some feedback or instructions. Thank you in advance!
Replies: 0 · Boosts: 0 · Views: 232 · Activity: Nov ’25

How to create updatable models using Create ML app
I've built a model using Create ML, but I can't make it, for the love of God, updatable. I can't find any checkbox or anything related. It's an Activity Classifier, if it matters. I want to continue training it on-device using MLUpdateTask, but the model, as exported from Create ML, fails with this error:

Domain=com.apple.CoreML Code=6 "Failed to unarchive update parameters. Model should be re-compiled." UserInfo={NSLocalizedDescription=Failed to unarchive update parameters. Model should be re-compiled.}
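For context, the kind of on-device update the post is attempting typically looks like the sketch below. The model URL and the MLBatchProvider of new training samples are placeholders, and the call is only expected to succeed against a model whose parameters were actually marked updatable before compilation, which appears to be the missing step the post is asking about.

import CoreML

// Hedged sketch of an on-device update as described in the post. `modelURL`
// points at a compiled .mlmodelc and `trainingBatch` is an MLBatchProvider of
// new labeled samples; both are placeholders.
func runUpdate(modelURL: URL, trainingBatch: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingBatch,
        configuration: nil,
        completionHandler: { context in
            // Persist the updated model once the task finishes.
            try? context.model.write(to: modelURL)
        }
    )
    task.resume()
}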
Replies: 0 · Boosts: 0 · Views: 303 · Activity: Nov ’25

“Unleashing the MacBook Air M2: 673 TFLOPS Achieved with Highly Optimized Metal Shading Language”
Using highly optimized Metal Shading Language (MSL) code, I pushed the MacBook Air M2 to its performance limits with the deformable_attention_universal kernel. The results demonstrate both the efficiency of the code and the exceptional power of Apple Silicon.

The total computational workload exceeded 8.455 quadrillion FLOPs, equivalent to processing 8,455 trillion operations. On average, the code sustained a throughput of 85.37 TFLOPS, showcasing the chip's remarkable ability to handle massive workloads. Peak instantaneous performance reached approximately 673.73 TFLOPS, reflecting near-optimal utilization of the GPU cores. Despite this intensity, the cumulative GPU runtime remained under 100 seconds, highlighting the code's efficiency and time optimization. The fastest iteration achieved a record processing time of only 0.051 ms, demonstrating minimal bottlenecks and excellent responsiveness.

Memory management was equally impressive: peak GPU memory usage never exceeded 2 MB, reflecting efficient use of the M2's Unified Memory. This minimizes data transfer overhead and ensures smooth performance across repeated workloads.

Overall, these results confirm that a well-optimized Metal implementation can unlock the full potential of Apple Silicon, delivering exceptional computational density, processing speed, and memory efficiency. The MacBook Air M2, often considered an energy-efficient consumer laptop, is capable of handling highly intensive workloads at performance levels typically expected from much larger GPUs. This test validates both the robustness of the Metal code and the extraordinary capabilities of the M2 chip for high-performance computing tasks.
Replies: 0 · Boosts: 0 · Views: 409 · Activity: Nov ’25

Embedding model missing once transferred to Xcode
I've created a "Transfer Learning BERT Embeddings" model with the default "Latin" language family and "Automatic" Language setting. This model performs exceptionally well against the test data set and functions as expected when I preview it in Create ML. However, when I add it to the Xcode project of the application to which I am deploying it, I am getting runtime errors that suggest it can't find the embedding resources:

Failed to locate assets for 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289' embedding model

Note, I am adding the model to the app project the same way that I added an earlier "Maximum Entropy" model. That model had no runtime issues. So it seems there is an issue getting hold of the embeddings at runtime. For now, "runtime" means in the Simulator. I intend to deploy my application to iOS devices once GM 26 is released (the app also uses AFM). I'm developing on Tahoe 26 beta, running on iOS 26 beta, using Xcode 26 beta.

Is this a known/expected issue? Are the embeddings expected to be a resource in the model? Is there a workaround? I did try opening the model in Xcode and saving it as an mlpackage, then adding that to my app project, but that also didn't resolve the issue.
Replies: 1 · Boosts: 0 · Views: 417 · Activity: Sep ’25

Correct JSON format for CoreMotion data for ActivityClassification purposes
I'm developing an activity classifier that I'd like to feed with CoreMotion data in JSON format. I am getting the error:

Unable to parse /Users/DewG/Downloads/Testing/Step1/Testing.json. It does not appear to be in JSON record format. A SequenceType of dictionaries is expected

I've verified via various JSON validators that the file is valid JSON, so I expect I'm just holding it wrong. Is there an example of a JSON file with CoreMotion data that I can model mine after?
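The parser message suggests it expects a top-level JSON array of per-sample dictionaries (one dictionary per row) rather than a single dictionary keyed by column. As a hedged illustration, not a documented schema (the field names below are made up), a record-style file can be produced like this:

import Foundation

// Hedged sketch: writes motion data as a top-level JSON array of dictionaries,
// one dictionary per sample, which is what the "SequenceType of dictionaries"
// error appears to ask for. The field names are illustrative placeholders.
struct MotionSample: Codable {
    let rotationX: Double
    let rotationY: Double
    let rotationZ: Double
    let accelX: Double
    let accelY: Double
    let accelZ: Double
}

let samples = [
    MotionSample(rotationX: 0.01, rotationY: -0.02, rotationZ: 0.00,
                 accelX: 0.10, accelY: -0.98, accelZ: 0.03),
    MotionSample(rotationX: 0.02, rotationY: -0.01, rotationZ: 0.01,
                 accelX: 0.12, accelY: -0.97, accelZ: 0.02)
]

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
let json = try encoder.encode(samples)   // produces [ { ... }, { ... } ]
try json.write(to: URL(fileURLWithPath: "Testing.json"))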
Replies: 2 · Boosts: 0 · Views: 153 · Activity: Jul ’25

CoreML model loads on macOS 15.3.1 but fails to load on macOS 15.5
I have been working on a small CV program, which uses a fine-tuned U2Netp model converted by coremltools 8.3.0 from PyTorch. It works well on my iPhone (iOS 18.5) and my MacBook (macOS 15.3.1), but it fails to load after I upgraded the MacBook to macOS 15.5. I have attached the console log from loading this model:

Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage @ GetMPSGraphExecutable
E5RT: Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage (13)
Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage @ GetMPSGraphExecutable
E5RT: Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage (13)
Failure translating MIL->EIR network: Espresso exception: "Network translation error": MIL->EIR translation error at /Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil:1557:12: Parameter binding for axes does not exist.
[Espresso::handle_ex_plan] exception=Espresso exception: "Network translation error": MIL->EIR translation error at /Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil:1557:12: Parameter binding for axes does not exist. status=-14
Failed to build the model execution plan using a model architecture file '/Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil' with error code: -14.
Replies: 0 · Boosts: 0 · Views: 177 · Activity: Jul ’25

Full documentation of annotations file for Create ML
The documentation for the Create ML tool ("Building an object detector data source") mentions that there are options for using normalized values instead of pixels and also different anchor point origins ("MLBoundingBoxCoordinatesOrigin") instead of always using "center". However, the JSON format for these does not appear in any examples. Does anyone know the format for these options?
Replies: 0 · Boosts: 1 · Views: 162 · Activity: May ’25

Detection of balls about 6-10 ft away not working
I used Yolo5-11, and while it performs great at detecting balls, let's say 5-10 ft away, at 1920 resolution (and even at 640), it really takes a toll on my app's performance. When I use Create ML, it outputs everything at 415x, which is probably the reason why it does not detect objects from far away. What can I do to conserve some energy? My model was trained with about 1K pictures, 200 each for test and validation, taken both close up and from far away.
Replies: 0 · Boosts: 2 · Views: 155 · Activity: Apr ’25

CoreML model for news scoring
Is it possible to train a model using Create ML to infer a numeric relevance score for a news article based on similar trained data, something like a sentiment score? I created a Text Classifier that assigns a category label, which works perfectly, but I would like a solution that calculates a numeric value, not a label.
Replies: 2 · Boosts: 0 · Views: 135 · Activity: Mar ’25

VNCoreMLTransform - request failed
I keep getting this error:

Prediction Failed: The VNCoreMLTransform request failed

I have tried a Picker for files and the Photo Library; both give the same results. I have been debugging the resize to 360x360 but still face this error. The model I'm trying to implement was created with CreateMLComponents; the process is from the WWDC 2022 banana ripeness example, and I have used an index for each .jpg. Is there some possible way to solve this, or is the error somewhere in the training of the model?
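As a general debugging sketch rather than a confirmed fix, it can help to let Vision do the scaling to the model's expected input size instead of resizing to 360x360 manually, and to print the underlying error from perform, which is usually more specific than "The VNCoreMLTransform request failed". The model and image paths below are placeholders.

import CoreML
import Vision

// Hedged debugging sketch: let Vision handle input scaling and surface the
// underlying error. The compiled model path and image path are placeholders.
let mlModel = try MLModel(contentsOf: URL(fileURLWithPath: "Ripeness.mlmodelc"))
let visionModel = try VNCoreMLModel(for: mlModel)

let request = VNCoreMLRequest(model: visionModel)
request.imageCropAndScaleOption = .scaleFill   // avoids a manual 360x360 resize

let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "banana-3.jpg"))
do {
    try handler.perform([request])
    print(request.results ?? [])
} catch {
    print(error)   // the specific Vision/Core ML failure reason
}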
Replies: 1 · Boosts: 0 · Views: 497 · Activity: Mar ’25

missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the CreateML frameworks fail to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions. I need that framework because I want to use revision 4. Please advise on how I should proceed.
Replies: 4 · Boosts: 0 · Views: 789 · Activity: Mar ’25

Creating .mlmodel with Create ML Components
I have rewatched WWDC22 a few times, but I'm still not fully understanding how to get the .mlmodel file type out of Components. The example with banana ripeness is cool, but what needs to be added to actually get a .mlmodel as output? Is there a full sample somewhere for this type of modular project? The code is from https://developer.apple.com/videos/play/wwdc2022/10019:

import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)

        try estimator.write(transformer, to: parametersURL)
        return transformer
    }
}

I have tried to run it in a macOS command-line app and in SwiftUI, but most of what I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
Replies: 3 · Boosts: 0 · Views: 605 · Activity: Mar ’25

Unexpectedly slow CreateML text classifier training (limited GPU/CPU usage)
While training a text classifier model with a few thousand samples completes in seconds, when using 100,000 or 1 million samples, CreateML's training time increases exponentially (to hours or days). During these hours/days, GPU usage is low and almost every CPU core is idle. When using the Swift APIs for model training, resource utilization does not increase. I'm using Xcode 16.2, macOS 15.2 on either an M2 Ultra 64 GB or an M3 Max 48 GB laptop (both using built-in SSD with ~500 GB free) running no other applications. Is there a setting I've missed to allow training to take over more of my computing resources? Is this expected of CreateML (i.e., when looking to exploit a larger corpus, I should move to other tooling)? I'd love to speed up my iteration cycle time.
Replies: 1 · Boosts: 0 · Views: 661 · Activity: Feb ’25