Create ML


Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML tag

44 Posts
Post not yet marked as solved
1 Reply
540 Views
I'm trying to create an updatable model, but this seems to be possible only by building a neural network model from scratch and then calling the make_updatable method on a NeuralNetworkBuilder, and I've run into a lot of problems along the way. In this example I try to open a converted Core ML model (a neural network) using the NeuralNetworkBuilder:

import coremltools

model = coremltools.models.MLModel("SimpleImageClassifier.mlpackage")
spec = model.get_spec()
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers()

But the line that creates the builder instance raises this error:

AttributeError: 'NoneType' object has no attribute 'layers'

I also tried defining a neural network with the NeuralNetworkBuilder directly, but then what am I supposed to do with that object? I didn't find a way to save it or convert it. The result I want is simple: the ability to train the model further on the user's device to meet their needs. However, the way to obtain an updatable model seems incomprehensible. In my case, the model should be an image classifier. What approach should I follow to achieve this result? Thank you.
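A minimal sketch of the updatable-model flow from the coremltools documentation, under two assumptions: the classifier is converted to the older neuralnetwork format (an .mlmodel rather than an .mlpackage ML Program, whose spec contains no neural-network layers and is the likely source of the 'NoneType' object has no attribute 'layers' error above), and the last fully connected layer is named "dense_1" (a hypothetical name; check builder.inspect_layers() for the real one).

import coremltools as ct
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

# Load a classifier that was converted with convert_to="neuralnetwork".
spec = ct.utils.load_spec("SimpleImageClassifier.mlmodel")
builder = NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=3)  # locate the last inner-product layer and the softmax output

# Hypothetical layer and output names; substitute the ones inspect_layers() reports.
builder.make_updatable(["dense_1"])
builder.set_categorical_cross_entropy_loss(name="lossLayer", input="classLabelProbs")
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
builder.set_epochs(10)

# Re-wrap the edited spec and save it as an updatable .mlmodel.
ct.models.MLModel(builder.spec).save("UpdatableImageClassifier.mlmodel")

On device, the saved model can then be personalized with Core ML's MLUpdateTask.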
Posted. Last updated.
Post not yet marked as solved
1 Reply
498 Views
coremltools: 6.2.0. When I run the Core ML model in Python, the result looks good:

{'var_840': array([[-8.15439941e+02, 2.88793579e+02, -3.83110474e+02, -8.95208740e+02, -3.53131561e+02, -3.65339783e+02, -4.94590851e+02, 6.24686813e+01, -5.92614822e+01, -9.67470627e+01, -4.30247498e+02, -9.27047348e+01, 2.19661942e+01, -2.96691345e+02, -4.26566772e+02........

But when I run it in Xcode, the result looks like:

[-inf,inf,nan,-inf,nan,nan,nan,nan,nan,-inf,-inf,-inf,-inf,-inf,-inf,nan,-inf,-inf,nan,-inf,nan,nan,-inf,nan,-inf,-inf,-inf,nan,nan,nan,nan,nan,nan,nan,nan,nan,nan,-inf,nan,nan,nan,nan,-inf,nan,-inf .......

Step 1: Convert ResNet-50 to Core ML:

import torch
import torchvision

# Load a pre-trained ResNet-50 model.
torch_model = torchvision.models.resnet50(pretrained=True)
# Set the model in evaluation mode.
torch_model.eval()

# Trace the model with random data.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)
out = traced_model(example_input)

# Download class labels in ImageNetLabel.txt.
# Set the image scale and bias for input image preprocessing.
import coremltools as ct
image_input = ct.ImageType(shape=example_input.shape)

# Using image_input in the inputs parameter:
# Convert to Core ML using the Unified Conversion API.
model = ct.convert(
    traced_model,
    inputs=[image_input],
    compute_units=ct.ComputeUnit.CPU_ONLY,
)

# Save the converted model.
model.save("resnettest.mlmodel")
# Print a confirmation message.
print('model converted and saved')

Step 2: Test the Core ML model in Python:

import coremltools as ct
import PIL
import numpy as np

# Load the model
model = ct.models.MLModel('/Users/ngoclh/Downloads/resnettest.mlmodel')
print(model)

img_path = "/Users/ngoclh/gitlocal/DetectCirtochApp/DetectCirtochApp/resources/image.jpg"
img = PIL.Image.open(img_path)
img = img.resize([224, 224], PIL.Image.ANTIALIAS)

coreml_out_dict = model.predict({"input_1": img})
print(coreml_out_dict)

Step 3: Test the Core ML model in Xcode:

func getFeature() {
    do {
        let deepLab = try VGG_emb.init() // mobilenet_emb.init() // cirtorch_emb.init()
        let image = UIImage(named: "image.jpg")
        let pixBuf = image!.pixelBuffer(width: 224, height: 224)!
        guard let output = try? deepLab.prediction(input_1: pixBuf) else {
            return
        }
        let names = output.featureNames
        print("ngoc names: ", names)
        for name in names {
            let feature = output.featureValue(for: name)
            print("ngoc feature: ", feature)
        }
    } catch {
        print(error)
    }
}
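One detail worth checking, as a hedged suggestion rather than a confirmed fix: the conversion above creates its ImageType without the scale and bias that the coremltools image-input examples use for torchvision models, so the ImageNet normalization is never baked into the model. A sketch of that part of Step 1, using the standard constants from the coremltools documentation (the input name "input_1" is taken from the Python test above):

import coremltools as ct

# Fold the torchvision/ImageNet normalization into the Core ML image input.
scale = 1 / (0.226 * 255.0)
bias = [-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225]

image_input = ct.ImageType(
    name="input_1",
    shape=(1, 3, 224, 224),
    scale=scale,
    bias=bias,
)

model = ct.convert(
    traced_model,   # the traced ResNet-50 from Step 1
    inputs=[image_input],
    compute_units=ct.ComputeUnit.ALL,
)

Whether this alone explains the inf/nan values seen in Xcode is uncertain, but it at least makes the Swift and Python paths run the same preprocessing, and comparing a CPU-only configuration on device can help rule out a float16 overflow on the GPU or Neural Engine.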
Posted by HuuNgoc. Last updated.
Post not yet marked as solved
1 Reply
419 Views
Objective: I am in the process of developing an application that uses machine learning (Core ML) to interact with photographs of documents, specifically those containing tables.

Step 1: Capturing the image. The application starts by letting users take photos of documents. The key here is not just any part of the document, but specifically the sections where tables are present.

Step 2: Image analysis through machine learning. Once the image is captured, the next phase involves a machine learning model. Using Apple's Create ML tool with Swift, the application will analyze the image. The model's task is two-fold: identifying the table, distinguishing it from other document information and isolating the table structure within the photograph; and ignoring irrelevant information, disregarding all non-table content so the application's resources stay focused on the table data.

Step 3: Data extraction and training. Once the table is identified, the real work begins. The application is trained to understand and recognize row and column data based on specific datasets. This training enables the application to 'read' the table accurately, much like a human would, by identifying how information is organized into rows and columns.

Step 4: Information storage. After analysis, the application extracts this critical data and stores it in a structured format. Each piece of identifiable information from the rows and columns is systematically organized into a Dictionary or an Object, a structure that works for immediate use and is also efficient for future data operations within the app.

Conclusion: Through these steps, the application goes from merely capturing an image to recognizing, deciphering, and storing table data from a physical document, all by integrating machine learning into the app's functionality, promising significant efficiency and accuracy in data handling. Realistically, I have not found any good examples out there, so I am attempting to create my own ML model (with no experience 😅); any guidance or help would be very much appreciated.
Posted. Last updated.
Post not yet marked as solved
2 Replies
424 Views
I followed the video Composing advanced models with Create ML Components. I created the model with:

let urlParameters = URL(fileURLWithPath: "/path/to/model.pkg")
let (training, validation) = dataFrame.randomSplit(by: 0.8)
let model = try await transformer.fitted(to: DataFrame(training), validateOn: DataFrame(validation)) { event in
    guard let tAccuracy = event.metrics[.trainingAccuracy] as? Double else { return }
    print(tAccuracy)
}
try transformer.write(model, to: urlParameters)
print("done")

The next goal is to read the model back and update it with a new DataFrame:

let urlCSV = URL(fileURLWithPath: "path/to/newData.csv")
var model = try transformer.read(from: urlParameters)           // load the created model
let newDataFrame = try DataFrame(contentsOfCSVFile: urlCSV)      // new DataFrame with features and annotations
try await transformer.update(&model, with: newDataFrame)         // I want to keep the previously learned data and update the model with the new data
try transformer.write(model, to: urlParameters)                  // the model saves, but only the last added DataFrame is kept; the previous one is simply replaced

But it looks like I'm only replacing the old data with the new data.

The question: how can I add new data to the model I created without losing the old data?
Posted by griffenk. Last updated.
Post not yet marked as solved
5 Replies
626 Views
I have code that ran 7x faster on Ventura compared to how it runs now on Sonoma. For the basic model training I used:

let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)

This took around 2 seconds on Ventura and now takes between 10 and 14 seconds on Sonoma. While investigating why, I noticed the following in the training configuration:

useWatchSPIForScribble: NO, allowLowPrecisionAccumulationOnGPU: NO, allowBackgroundGPUComputeSetting: NO, preferredMetalDevice: (null), enableTestVectorMode: NO, parameters: (null), rootModelURL: (null), profilingOptions: 0, usePreloadedKey: NO, trainWithMLCompute: NO, parentModelName: , modelName: Unnamed_Model, experimentalMLE5EngineUsage: Enable, preparesLazily: NO, predictionConcurrencyHint: 0

Why is the preferred Metal device null? If I do

let devices = MTLCopyAllDevices()
for device in devices {
    config.preferredMetalDevice = device
    print(device.name)
}

I can see that the M1 chipset is available but not selected (from reading the literature, the default should be nil?). Is this the reason it is so slow? Is there a way to force a change in the config or elsewhere? Why has the default changed, if it has?
Posted. Last updated.
Post not yet marked as solved
0 Replies
795 Views
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 as on iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).

iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE, resulting in fast prediction, load, and compilation times.

iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.

Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS: Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Other relevant versions: PyTorch 2.0

Additional context: This happens with both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use the ANE on iOS 16. If anyone has had a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
Posted by mrdbourke. Last updated.
Post not yet marked as solved
1 Reply
455 Views
Hi! I'd like to ask why, after training an action classifier model using Create ML, the live preview feature consistently fails to detect my wrist joint while other joints are detected correctly. What could be causing this result? Thank you to anyone who can provide answers.

Training data sources: YouTube videos found with the search keyword "20min workout at home"; all videos show the full body.

Training data quantity:
Model 1: target action: 1 video / irrelevant action: 1 video
Model 2: target action: 6 videos / irrelevant action: 40 videos

Both Model 1 and Model 2 fail to detect the wrist joint during live preview.
Posted by Jay_CJ. Last updated.
Post not yet marked as solved
2 Replies
492 Views
Following this guide: https://developer.apple.com/documentation/CreateML/creating-a-multi-label-image-classifier. Has anyone been able to export a Core ML model, specifically according to the documentation below? Since there aren't any runnable examples, just snippets, perhaps this is a documentation error? If anyone is already familiar with these pipeline generics, is there something that jumps out about the example transformer that fails conformance, or is the snippet just factually incorrect?

Export the model to use with Vision. After you train the model, you can export it as a Core ML model.

// Export to Core ML
let modelURL = URL(filePath: "/path/to/model")
try model.export(to: modelURL)
Posted. Last updated.
Post not yet marked as solved
7 Replies
931 Views
Hello everyone, I am new to using Create ML, but I'm running up against a problem where the error is not descriptive and I cannot figure out what might be causing it. I am fairly sure my data is formatted properly: in the Create ML app, it detects the images and shows me a bar graph of how many images belong to each label. But when it comes to actually training, the moment I press the "Train" button it shows an error with the message "Unexpected Error". I have also attempted to create and train the model programmatically, and that actually works! The framework requires that the JSON be named "annotations.json" instead of "annotation.json" and that the key holding the image name be changed from "image" to "filename", but other than that the data is the same. I tried using the app with the changes I made to the JSON for the framework, but then it won't even parse the data, so I am fairly sure my data is formatted correctly. I would prefer to use the app rather than do everything programmatically, because it presents the data in a much more digestible way. Has anyone else come up against this issue or a similar one? I should note that I am running the latest betas of macOS Sonoma and Xcode.
Posted. Last updated.
Post not yet marked as solved
5 Replies
1.2k Views
Hey, if I wanted to create an app that takes screenshots from an Apple device (and any app within it) to give context to an AI so the AI can respond, and the app then parses the response and executes commands on behalf of the AI/user, how would I do so given the rule that "screenshots/captures are not allowed within other apps"? I want to stay within the bounds of the rules in place. Possibilities: AI assistant, AI pals, passive automation.
Posted. Last updated.
Post not yet marked as solved
3 Replies
619 Views
Hey all, we are currently training a Hand Pose model with the current release of Create ML, and during the feature extraction phase we get the following error:

Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]

We have tried searching for this online and mitigating the issue, but we are getting nowhere. Has anyone else experienced this issue?
Posted by dougkilby. Last updated.
Post not yet marked as solved
1 Reply
676 Views
Hey, are there any limits on the windowDuration property of the AudioFeaturePrint transformer, such as a minimum or maximum value? If we create a model with the Create ML app, upon selecting AudioFeaturePrint as the feature extractor we cannot go below 0.5 seconds for the window duration. Is the limit the same if we create a model programmatically using AudioFeaturePrint?
Posted by mspattan. Last updated.
Post marked as solved
3 Replies
810 Views
Is it possible to create an updatable sound classifier model that uses Apple's built-in MLSoundClassifier (available via Create ML) and can be trained/personalized on device using Core ML? I have looked in quite a few places for a long while. I know that when on-device training was initially announced in 2019, updatable models were restricted to classifiers other than the built-in ones, but any additional information that may have come out since then has been hard to find.
Posted by mspattan. Last updated.
Post not yet marked as solved
1 Reply
866 Views
Hello, I am a student doing research for my thesis on Create ML, shape recognition, and image processing. For this subject I want to find details of the steps Create ML uses for this, such as the pre-processing techniques, the feature extraction methods, the filters applied, etc.
Posted. Last updated.
Post marked as solved
1 Reply
792 Views
Hello, I am reaching out for some assistance regarding integrating a CoreML action classifier into a SwiftUI app. Specifically, I am trying to implement this classifier to work with the live camera of the device. I have been doing some research, but unfortunately, I have not been able to find any relevant information on this topic. I was wondering if you could provide me with any examples, resources, or information that could help me achieve this integration? Any guidance you can offer would be greatly appreciated. Thank you in advance for your help and support.
Posted. Last updated.
Post not yet marked as solved
1 Reply
664 Views
Does anyone know if the Create ML app has a way to build support vector machine models for tabular regression? I see only the attached options. Xcode 14.2.
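For what it's worth, neither the Create ML app nor the CreateML framework appears to offer a support vector machine option for tabular regression. A hedged alternative sketch: train the SVM with scikit-learn and convert it using coremltools' scikit-learn converter (the column names below are made up for illustration).

import numpy as np
import coremltools as ct
from sklearn.svm import SVR

# Placeholder tabular data: 3 feature columns and one regression target.
X = np.random.rand(200, 3)
y = np.random.rand(200)

svr = SVR(kernel="rbf")
svr.fit(X, y)

# Convert the fitted scikit-learn model to Core ML and save it.
mlmodel = ct.converters.sklearn.convert(svr, ["featureA", "featureB", "featureC"], "target")
mlmodel.save("SVRTabularRegressor.mlmodel")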
Posted by diffent. Last updated.
Post not yet marked as solved
1 Reply
986 Views
Hi everyone! I'm trying to train an activity classification model with 3 classes. The problem is that only one class has precision and recall > 0 after training. Even with 2 classes the result is the same. At first I thought there was a problem with my data, but when I swapped the "left" label with "right" and vice versa, the results were the same: only the "left"-labeled data gets non-zero precision and recall.
Posted by corle. Last updated.
Post marked as solved
1 Reply
491 Views
Is it possible to use import CreateML in an iOS project? I'm looking at the code from the "Build dynamic iOS apps with the Create ML framework" video at https://developer.apple.com/videos/play/wwdc2021/10037/, but I'm not sure what kind of project I need to create. If I created an iOS project and tried running the code, what inputs would I need?
Posted by reetinav. Last updated.
Post not yet marked as solved
1 Reply
891 Views
I am working on the neural network classifier described on coremltools.readme.io in the updatable models → neural network section (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset). I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools 6.2. I converted the model to an mlmodel with .convert only, and it converted successfully. But then I hit an error in the make_updatable function saying the loss layer input must be a softmax output. From the coremltools package API reference I found that this is because the layer type is softmaxND when it should be softmax. The problem is that when I convert the model from a Keras Sequential model to a Core ML model, the layer names and types change, and the softmax becomes softmaxND. Has anyone else faced this issue?

If I execute

builder.inspect_layers(last=4)

I get this output:

[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
          Updatable: False
          Input blobs: ['sequential/dense_1/MatMul']
          Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
          Updatable: False
          Input blobs: ['sequential/dense/Relu']
          Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
          Updatable: False
          Input blobs: ['sequential/dense/MatMul']
          Output blobs: ['sequential/dense/Relu']

In the make_updatable function, when I execute

builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')

I get this error:

ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
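A rough, unverified sketch of one possible workaround for the quoted error: delete the converter-generated softmaxND layer and append a plain softmax layer that reads and writes the same blobs, so that the cross-entropy loss sees a true softmax output. The layer and blob names come from the inspect_layers output above. Note that this only addresses the loss-layer error; the preceding batchedMatmul layer may separately need to be an innerProduct layer before make_updatable will accept it.

import coremltools as ct
from coremltools.models.neural_network import NeuralNetworkBuilder

spec = ct.utils.load_spec("model.mlmodel")
builder = NeuralNetworkBuilder(spec=spec)

# Assumes the softmaxND layer is the last layer in the network, as in the
# inspect_layers(last=4) output above.
del builder.nn_spec.layers[-1]

# Add a plain softmax that consumes the same input blob and produces the same
# output blob ('Identity'), keeping the model's declared output intact.
builder.add_softmax(name="softmax",
                    input_name="sequential/dense_1/MatMul",
                    output_name="Identity")

builder.set_categorical_cross_entropy_loss(name="lossLayer", input="Identity")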
Posted. Last updated.
Post not yet marked as solved
4 Replies
2k Views
I'm on the recent version of macOS, and I recently trained a style transfer model using Create ML. I used the preview tab of Create ML to preview my model with a video (as well as an image), but when I press the button to export or share the result from the neural network, nothing is exported. The modal window appears but doesn't save after the progress bar for the conversion shows up. I tried converting the Core ML model file into a Core ML package, but when I tried exporting the preview, it crashed and switched tabs to the package information section. I've been having this issue with all three export buttons in the model preview section of both the Create ML application and Xcode. Is this happening to anyone else? I've also tried using the coremltools package for Python to extract a preview, but documentation for style transfer networks doesn't exist for loading videos with that package. The style transfer network only takes an image input, so it's unclear how a video file can be loaded.
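On the coremltools side, one possible (untested) way around the missing video support is to run the style transfer model frame by frame from Python, since the model only accepts image inputs anyway. The paths and the input/output feature names below are hypothetical; print(model) shows the real ones, and prediction from Python requires macOS.

import glob
import os
import coremltools as ct
import PIL.Image

model = ct.models.MLModel("StyleTransfer.mlmodel")
os.makedirs("stylized", exist_ok=True)

for frame_path in sorted(glob.glob("frames/*.png")):
    # Resize each extracted video frame to the model's expected input size.
    frame = PIL.Image.open(frame_path).resize((512, 512))
    out = model.predict({"image": frame})      # input name is model-specific
    stylized = out["stylizedImage"]            # output name is model-specific
    stylized.save(os.path.join("stylized", os.path.basename(frame_path)))

The stylized frames can then be reassembled into a video with a tool such as ffmpeg or AVFoundation.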
Posted by trzroy. Last updated.