Integrate machine learning models into your app using Core ML.

Posts under Core ML tag

118 Posts

Post | Replies | Boosts | Views | Activity

Vision Pro & Vision SDK
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection, etc. (https://developer.apple.com/videos/play/wwdc2023/111241/). It's clear that I can apply it to self-provided images, but what about the data coming from the visionOS SDKs? All I can find is the mesh data from ARKit (https://developer.apple.com/documentation/arkit/arkit_in_visionos) - am I missing something, or do we not yet have good APIs for this? Appreciate any guidance! Thanks.
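For the self-provided-image path the post mentions, here is a minimal sketch of running Vision's 2D body pose request on a pixel buffer you supply yourself (an assumption: the buffer comes from your own content such as a photo or video file; as far as I know the visionOS passthrough camera feed is not exposed to third-party apps, so this does not cover live passthrough frames).

import Vision
import CoreVideo

// Runs a 2D body pose request on a caller-supplied pixel buffer.
// Assumption: `pixelBuffer` comes from your own source (photo, video file,
// network stream), not from the visionOS passthrough cameras.
func detectBodyPose(in pixelBuffer: CVPixelBuffer) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    return request.results ?? []
}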
2
0
1k
Feb ’24
Who will build the first app that I can use while I sleep (quite literally) (24/7 immersion)?
In theory, sending signals from iPhone apps to and from the brain with non-invasive technology could be achieved through a combination of brain-computer interface (BCI) technologies, machine learning algorithms, and mobile app development.

Brain-Computer Interface (BCI): BCI technology can be used to record brain signals and translate them into commands that can be understood by a computer or a mobile device. Non-invasive BCIs, such as electroencephalography (EEG), can track brain activity using sensors placed on or near the head [6]. For instance, a portable, non-invasive, mind-reading AI developed by UTS uses an AI model called DeWave to translate EEG signals into words and sentences [3].

Machine Learning Algorithms: Machine learning algorithms can be used to analyze and interpret the brain signals recorded by the BCI. These algorithms can learn from large quantities of EEG data to translate brain signals into specific commands [3].

Mobile App Development: A mobile app can be developed to receive these commands and perform specific actions on the iPhone. The app could also potentially send signals back to the brain using technologies like transcranial magnetic stimulation (TMS), which can deliver information to the brain [5].

However, it's important to note that while this technology is theoretically possible, it's still in the early stages of development and faces significant technical and ethical challenges. Current non-invasive BCIs do not have the same level of fidelity as invasive devices, and the practical application of these systems is still limited [1][3]. Furthermore, ethical considerations around privacy, consent, and the potential for misuse of this technology must also be addressed [13].

Sources:
[1] You can now use your iPhone with your brain after a major breakthrough | Semafor https://www.semafor.com/article/11/01/2022/you-can-now-use-your-iphone-with-your-brain
[2] ScienceDirect article https://www.sciencedirect.com/science/article/pii/S1110866515000237
[3] Portable, non-invasive, mind-reading AI turns thoughts into text https://techxplore.com/news/2023-12-portable-non-invasive-mind-reading-ai-thoughts.html
[4] Elon Musk's Neuralink implants brain chip in first human https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/
[5] BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains - Scientific Reports https://www.nature.com/articles/s41598-019-41895-7
[6] Brain-computer interfaces and the future of user engagement https://www.fastcompany.com/90802262/brain-computer-interfaces-and-the-future-of-user-engagement
[7] Mobile App + Wearable For Neurostimulation - Accion Labs https://www.accionlabs.com/mobile-app-wearable-for-neurostimulation
[8] Signal Generation, Acquisition, and Processing in Brain Machine Interfaces: A Unified Review https://www.frontiersin.org/articles/10.3389/fnins.2021.728178/full
[9] Mind-reading technology has arrived https://www.vox.com/future-perfect/2023/5/4/23708162/neurotechnology-mind-reading-brain-neuralink-brain-computer-interface
[10] Synchron Brain Implant - Breakthrough Allows You to Control Your iPhone With Your Mind - Grit Daily News https://gritdaily.com/synchron-brain-implant-controls-tech-with-the-mind/
[11] Mind uploading - Wikipedia https://en.wikipedia.org/wiki/Mind_uploading
[12] BirgerMind - Express your thoughts loudly https://birgermind.com
[13] Elon Musk wants to merge humans with AI. How many brains will be damaged along the way? https://www.vox.com/future-perfect/23899981/elon-musk-ai-neuralink-brain-computer-interface
[14] Models of communication and control for brain networks: distinctions, convergence, and future outlook https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7655113/
[15] Mind Control for the Masses—No Implant Needed https://www.wired.com/story/nextmind-noninvasive-brain-computer-interface/
[16] Elon Musk unveils Neuralink's plans for brain-reading 'threads' and a robot to insert them https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot
[17] Essa and Kotte https://arxiv.org/pdf/2201.04229.pdf
[18] Synchron's Brain Implant Breakthrough Lets Users Control iPhones And iPads With Their Mind https://hothardware.com/news/brain-implant-breakthrough-lets-you-control-ipad-with-your-mind
[19] An Apple Watch for Your Brain https://www.thedeload.com/p/an-apple-watch-for-your-brain
[20] Toward an information theoretical description of communication in brain networks https://direct.mit.edu/netn/article/5/3/646/97541/Toward-an-information-theoretical-description-of
[21] A soft, wearable brain–machine interface https://news.ycombinator.com/item?id=28447778
[22] Portable neurofeedback App https://www.psychosomatik.com/en/portable-neurofeedback-app/
[23] Intro to Brain Computer Interface http://learn.neurotechedu.com/introtobci/
0
1
737
Feb ’24
Is the Apple Neural Scene Analyzer (ANSA) backbone available to devs
Hello, my understanding of the paper below is that iOS ships with a MobileNetV3-based ML model backbone, which then uses different heads for specific tasks in iOS. I understand that this backbone is accessible for various uses through the Vision framework, but I was wondering if it is also accessible for on-device fine-tuning for other purposes. Just as an example, if I want a model that detects some unique object in a photo, can I use the built-in backbone, or do I have to include my own in the app? Thanks very much for any advice, and apologies if I didn't understand something correctly. Source: https://machinelearning.apple.com/research/on-device-scene-analysis
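As far as I know the backbone weights themselves are not exposed for on-device fine-tuning, but Vision does expose the learned representation as image feature prints. A minimal sketch of extracting such an embedding, which could feed a small custom classifier instead of fine-tuning Apple's model, is below (a workaround under that assumption, not the approach the paper describes).

import Vision

// Extracts a feature-print embedding for an image. One possible workaround
// (an assumption, not the ANSA backbone itself): use these embeddings as the
// input to your own lightweight classifier for the "unique object" case.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}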
1
0
652
Feb ’24
Why is there a "neural engine-data copy" in Core ML NPU prediction?
I am currently facing a performance issue while using Core ML on iOS 16+ devices to run a simple grid_sample model. When profiling the model using the Xcode profiler, I noticed that before each NPU computation there is a significant delay caused by the "input copy" and "neural engine-data copy" operations. I have specified that both the input and output of the model are of type float16, so there shouldn't be any data type conversion. I would appreciate any insights or suggestions regarding the reasons behind this delay and possible solutions.

My simple model is:

import os
import numpy as np
import torch
import torch.nn.functional as F
import coremltools

class GridSample(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
        output = F.grid_sample(
            input,
            grid.to(input),
            mode='nearest',
            padding_mode='zeros',
            align_corners=True,
        )
        return output

tr_input = torch.randn((8, 64, 512, 512))
tr_grid = torch.randn((8, 256, 256, 2))
simple_model = GridSample()
simple_model.eval()
traced_model = torch.jit.trace(simple_model, [tr_input, tr_grid])

coreml_input = [
    coremltools.TensorType(name="image_input", shape=tr_input.shape, dtype=np.float16),
    coremltools.TensorType(name="warp_grid", shape=tr_grid.shape, dtype=np.float16),
]
mlmodel = coremltools.converters.convert(
    traced_model,
    inputs=coreml_input,
    convert_to="mlprogram",
    minimum_deployment_target=coremltools.target.iOS16,
    compute_units=coremltools.ComputeUnit.ALL,
    compute_precision=coremltools.precision.FLOAT16,
    outputs=[coremltools.TensorType(name="x0", dtype=np.float16)],
    debug=False,
)
mlmodel.save("./grid_sample.mlpackage")
os.system("xcrun coremlcompiler compile './grid_sample.mlpackage' './'")
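Not a fix, but one way to confirm whether the copies are tied to the Neural Engine is to load the same model with different compute-unit settings and compare latency in the profiler. A minimal sketch is below; the model file name is taken from the snippet above, everything else is an assumption.

import CoreML

// Loads the compiled model twice - once allowing the Neural Engine, once
// restricted to CPU+GPU - so the "neural engine-data copy" overhead can be
// compared. Assumes grid_sample.mlmodelc is bundled with the app.
func loadForComparison() throws -> (ane: MLModel, cpuGPU: MLModel) {
    guard let url = Bundle.main.url(forResource: "grid_sample", withExtension: "mlmodelc") else {
        fatalError("Model not found in bundle")
    }
    let aneConfig = MLModelConfiguration()
    aneConfig.computeUnits = .all           // Neural Engine allowed

    let cpuGPUConfig = MLModelConfiguration()
    cpuGPUConfig.computeUnits = .cpuAndGPU  // Neural Engine excluded

    return (try MLModel(contentsOf: url, configuration: aneConfig),
            try MLModel(contentsOf: url, configuration: cpuGPUConfig))
}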
0
0
595
Feb ’24
CoreML Conversion of TensorFlow Keras NN fails on Iris Data set
On tf version 2.11.0, I have tried to follow a fairly standard NN example in order to convert to a Core ML model. However, I cannot get this to work and I'm not clear where it is going wrong. It would seem to be a fairly standard task - a toy example - and I can't see why the conversion would fail. Any help would be appreciated. I have tried the different approaches listed below, but it seems the conversion should just work. I have also tried running the same code pinned to:

tensorflow==2.6.2
scikit-learn==0.19.2
pandas==1.1.1

and get a different sequence of errors. The Python code I used mostly comes from this example: https://lnwatson.co.uk/posts/intro_to_nn/

import pandas as pd
import numpy as np
import tensorflow as tf
import torch
from sklearn.model_selection import train_test_split
from tensorflow import keras
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
np.bool = np.bool_
np.int = np.int_

print("tf version", tf.__version__)

csv_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Class']
df = pd.read_csv(csv_url, names=col_names)

labels = df.pop('Class')
labels = pd.get_dummies(labels)
X = df.values
y = labels.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2)

model = keras.Sequential()
model.add(keras.layers.Dense(16, activation='relu', input_shape=(4,)))
model.add(keras.layers.Dense(3, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=12, epochs=200, validation_data=(X_val, y_val))

import coremltools as ct
# Pass in `tf.keras.Model` to the Unified Conversion API
mlmodel = ct.convert(model, convert_to="mlprogram")
# mlmodel = ct.convert(model, source="tensorflow")
# mlmodel = ct.convert(model, convert_to="neuralnetwork")
# mlmodel = ct.convert(
#     model,
#     source="tensorflow",
#     inputs=[ct.TensorType(name="input")],
#     outputs=[ct.TensorType(name="output")],
#     minimum_deployment_target=ct.target.iOS14,
# )

When using any of these three:

mlmodel = ct.convert(model, convert_to="mlprogram")
mlmodel = ct.convert(model, source="tensorflow")
mlmodel = ct.convert(model, convert_to="neuralnetwork")

I get:

mlmodel2 = ct.convert(model, source="tensorflow")
ValueError: Const node 'sequential_5/dense_10/MatMul/ReadVariableOp' cannot have no value
ERROR:root:sequential_5/dense_11/BiasAdd/ReadVariableOp:0
ERROR:root:[ 0.34652767 0.16202268 -0.3554725 ]
Running TensorFlow Graph Passes: 100% 5/5 [00:00<00:00, 28.76 passes/s]
Converting Frontend ==> MIL Ops: 8% 1/12 [00:00<00:00, 16710.37 ops/s]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/Documents/CoreML Basic Models/NN_Keras_Iris.py:142
    130 import coremltools as ct
    131 # Pass in `tf.keras.Model` to the Unified Conversion API
    132 # mlmodel = ct.convert(model, convert_to="mlprogram")
    133 (...)
    140
    141 # ct.convert(mymodel(), source="tensorflow")
--> 142 mlmodel2 = ct.convert(model, source="tensorflow")
    144 mlmodel = ct.convert(
    145     model,
    146     source="tensorflow",
    (...)
    153     minimum_deployment_target=ct.target.iOS14,
    154 )
....
File ~/opt/anaconda3/envs/coreml_env/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/ops.py:430, in Const(context, node)
    427 @register_tf_op
    428 def Const(context, node):
    429     if node.value is None:
--> 430         raise ValueError("Const node '{}' cannot have no value".format(node.name))
    431     mode = get_const_mode(node.value.val)
    432     x = mb.const(val=node.value.val, mode=mode, name=node.name)
ValueError: Const node 'sequential_5/dense_10/MatMul/ReadVariableOp' cannot have no value

Second approach: A different approach I tried was specifying the input type TensorType. However, when specifying the inputs and outputs I get a different error. I have tried variations on this initialiser, but all produce the same error. The variations revolve around adding input_shape and dtype=np.float32.

mlmodel = ct.convert(
    model,
    source="tensorflow",
    inputs=[ct.TensorType(name="input")],
    outputs=[ct.TensorType(name="output")],
    minimum_deployment_target=ct.target.iOS14,
)

File ~/opt/anaconda3/envs/coreml_env/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py:106, in <listcomp>(.0)
    104 logging.debug(msg.format(outputs))
    105 outputs = outputs if isinstance(outputs, list) else [outputs]
--> 106 outputs = [i.split(":")[0] for i in outputs]
    107 if _get_version(tf.__version__) < _StrictVersion("1.13.1"):
    108     return tf.graph_util.extract_sub_graph(graph_def, outputs)
AttributeError: 'TensorType' object has no attribute 'split'
0
0
608
Jan ’24
Core ML Hand Pose classification: effects don't appear on the camera
I created a Hand Pose model using Create ML and integrated it into my SwiftUI project app. While coding, I referred to the Apple Developer documentation app for the necessary code. However, when I ran the app on an iPhone 14, the camera didn't display any effects or finger numbers as expected. Note: I've already tested the ML model separately, and it works fine.

The code:

import CoreML
import SceneKit
import SwiftUI
import Vision
import ARKit

struct ARViewContainer: UIViewControllerRepresentable {
    let arViewController: ARViewController
    let model: modelHand

    func makeUIViewController(context: UIViewControllerRepresentableContext<ARViewContainer>) -> ARViewController {
        arViewController.model = model
        return arViewController
    }

    func updateUIViewController(_ uiViewController: ARViewController, context: UIViewControllerRepresentableContext<ARViewContainer>) {
        // Update the view controller if needed
    }
}

class ARViewController: UIViewController, ARSessionDelegate {
    var frameCounter = 0
    let handPosePredictionInterval = 10
    var model: modelHand!
    var effectNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        let arView = ARSCNView(frame: view.bounds)
        view.addSubview(arView)
        let session = ARSession()
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .personSegmentationWithDepth
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage
        let handPoseRequest = VNDetectHumanHandPoseRequest()
        handPoseRequest.maximumHandCount = 1
        handPoseRequest.revision = VNDetectHumanHandPoseRequestRevision1
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        do {
            try handler.perform([handPoseRequest])
        } catch {
            assertionFailure("Hand Pose Request failed: \(error)")
        }
        guard let handPoses = handPoseRequest.results, !handPoses.isEmpty else {
            return
        }
        if frameCounter % handPosePredictionInterval == 0 {
            if let handObservation = handPoses.first as? VNHumanHandPoseObservation {
                do {
                    let keypointsMultiArray = try handObservation.keypointsMultiArray()
                    let handPosePrediction = try model.prediction(poses: keypointsMultiArray)
                    let confidence = handPosePrediction.labelProbabilities[handPosePrediction.label]!
                    print("Confidence: \(confidence)")
                    if confidence > 0.9 {
                        print("Rendering hand pose effect: \(handPosePrediction.label)")
                        renderHandPoseEffect(name: handPosePrediction.label)
                    }
                } catch {
                    fatalError("Failed to perform hand pose prediction: \(error)")
                }
            }
        }
    }

    func renderHandPoseEffect(name: String) {
        switch name {
        case "One":
            print("Rendering effect for One")
            if effectNode == nil {
                effectNode = addParticleNode(for: "One")
            }
        default:
            print("Removing all particle nodes")
            removeAllParticleNode()
        }
    }

    func removeAllParticleNode() {
        effectNode?.removeFromParentNode()
        effectNode = nil
    }

    func addParticleNode(for poseName: String) -> SCNNode {
        print("Adding particle node for pose: \(poseName)")
        let particleNode = SCNNode()
        return particleNode
    }
}

struct ContentView: View {
    let model = modelHand()
    var body: some View {
        ARViewContainer(arViewController: ARViewController(), model: model)
    }
}

#Preview {
    ContentView()
}
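One thing that stands out in the snippet (an observation, not a confirmed fix): the delegate is assigned to a standalone ARSession that is never run, while the ARSCNView runs its own session, so session(_:didUpdate:) may never be called; frameCounter is also never incremented. A minimal sketch of wiring the delegate to the view's own session is below; it is a fragment meant to replace the viewDidLoad and the start of the delegate method shown above.

// Hypothetical adjustment: use the ARSCNView's own session instead of a
// separate, never-run ARSession, and advance the frame counter.
override func viewDidLoad() {
    super.viewDidLoad()
    let arView = ARSCNView(frame: view.bounds)
    view.addSubview(arView)

    arView.session.delegate = self          // delegate on the session that actually runs
    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics = .personSegmentationWithDepth
    arView.session.run(configuration)
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    frameCounter += 1                       // otherwise the prediction branch runs every frame
    // ... existing hand pose code from the post ...
}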
0
0
595
Jan ’24
Need Help with Create ML in Xcode - Unexpected App Closure
Hello Apple Developer community, I hope this message finds you well. I am currently facing an issue with Create ML in Xcode, and I am seeking assistance from the knowledgeable members of this forum. Any help or guidance would be greatly appreciated. Problem Description: I am encountering an unexpected issue when attempting to create a classification model for images using Create ML in Xcode. Upon opening Create ML, the application closes unexpectedly when I choose to create a new image classification model. Steps I Have Taken: I have already tried the following steps to troubleshoot the issue: Updated Xcode and macOS to the latest versions. Restarted Xcode and my computer. Created a new sample project to isolate the issue. Despite these efforts, the problem persists. System Information: Xcode Version: 15.2 macOS Version: Sonoma 14.0 I am on a tight deadline for a project, and resolving this issue quickly is crucial. Your help is invaluable, and I thank you in advance for any support you can provide. Best regards.
1
0
706
Jan ’24
Core ML MLOneHotEncoder Error Post-Update: "unknown category String"
Apple Developer community, I recently updated Xcode and Core ML from version 13.0.1 to 14.1.2 and am facing an issue with the MLOneHotEncoder in my Core ML classifier. The same code and data that worked fine in the previous version now throw an error during predictions. The error message is:

MLOneHotEncoder: unknown category String [TERM] expected one of

This seems to suggest that the MLOneHotEncoder is not handling unknown strings, as it did in the previous version. Here's a brief overview of my situation:

Core ML Model: The model is a classifier that uses MLOneHotEncoder for processing categorical data.
Data: The same dataset is used for training and predictions, which worked fine before the update.
Error Context: The error occurs at the prediction stage, not during training. I have checked for data consistency and confirmed that the dataset is the same as used with the previous version.

Here are my questions:

Has there been a change in how MLOneHotEncoder handles unknown categories in Core ML version 14.1.2?
Are there any recommended practices for handling unknown string categories with MLOneHotEncoder in the updated Core ML version?
Is there a need to modify the model training code or data preprocessing steps to accommodate changes in the new Core ML version?

I would appreciate any insights or suggestions on how to resolve this issue. If additional information is needed, I am happy to provide it. Thank you for your assistance!
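I can't say whether the default changed between versions, but the Core ML model specification's one-hot encoder does carry a handleUnknown setting. A hedged sketch of flipping it to "ignore" by editing the saved model's spec with coremltools is below; the file name is hypothetical and it assumes the classifier is a pipeline containing a oneHotEncoder stage.

import coremltools as ct

# Assumption: "MyClassifier.mlmodel" is a pipeline classifier with a
# oneHotEncoder stage; the file name is hypothetical.
spec = ct.utils.load_spec("MyClassifier.mlmodel")

for sub in spec.pipelineClassifier.pipeline.models:
    if sub.WhichOneof("Type") == "oneHotEncoder":
        # 1 == IgnoreUnknown in the OneHotEncoder spec (0 == ErrorOnUnknown).
        sub.oneHotEncoder.handleUnknown = 1

ct.models.MLModel(spec).save("MyClassifier_ignore_unknown.mlmodel")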
1
0
476
Jan ’24
Color Format Requirements for Input in Apple's MLModel of DeepLabV3
I am sending CVPixelBuffers to the input of the DeepLabV3 MLModel. I am under the impression that it requires pixel color format 32ARGB or 32RGBA. Correct? Can 32BGRA be input? CVPixelBuffers support 32BGRA, and so does OpenCV. Please note, I want to use the MLModel as trained. Neither 32RGBA nor 32ARGB is supported for type CVPixelBuffer.

32ARGB: An unsupported runtime error occurs with the configuration as follows:

func configureOutput() {
    videoOutput.setSampleBufferDelegate(self, queue: bufferQueue)
    videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey): kCMPixelFormat_32ARGB]
}

32RGBA: "Cannot find 'kCMPixelFormat_32rgba' in scope."

The app process: video-captured pixel buffers are sent to C++ code where OpenCV operations are done, creating up to three smaller Mats, which are then converted back into pixel buffers in the Objective-C. These converted pixel buffers are used in three ways: all are sent to the MLModel for image segmentation to identify people; the files may be sent to the photo library; or they may simply be viewed on the screen. I need a color format that can support all these downstream operations/pipelines.
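One possible route (a sketch, not a confirmed answer): keep the capture output in kCVPixelFormatType_32BGRA, which AVFoundation supports natively, and run the model through Vision's VNCoreMLRequest, which converts the buffer to the model's expected image format internally. The "DeepLabV3" class name below is the one Xcode would generate from the downloaded model file, an assumption on my part.

import Vision
import CoreML

// Sketch: run DeepLabV3 on a 32BGRA pixel buffer via Vision, letting Vision
// handle the pixel-format conversion to whatever the model expects.
func segmentPeople(in pixelBuffer: CVPixelBuffer) throws -> VNCoreMLFeatureValueObservation? {
    let coreMLModel = try DeepLabV3(configuration: MLModelConfiguration()).model
    let request = VNCoreMLRequest(model: try VNCoreMLModel(for: coreMLModel))
    request.imageCropAndScaleOption = .scaleFill
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNCoreMLFeatureValueObservation
}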
1
0
461
Jan ’24
DataFrame's Column doesn't support array of dictionary
I'm following the Apple WWDC video (https://developer.apple.com/videos/play/wwdc2021/10037/) about how to create a recommendation model. But I'm getting this error when I run the project on that line of code from their tutorial: "Column keywords has element of unsupported type Dictionary<String, Double>."

Here is the block of code, taken from the transcript of the WWDC video, that causes the issue:

func featuresFromMealAndKeywords(meal: String, keywords: [String]) -> [String: Double] {
    // Capture interactions between content (the dish keywords) and context (meal) by
    // adding a copy of each keyword modified to include the meal.
    let featureNames = keywords + keywords.map { meal + ":" + $0 }
    // For each keyword, create an entry in a dictionary of features with a value of 1.0.
    return featureNames.reduce(into: [:]) { features, name in
        features[name] = 1.0
    }
}

var trainingKeywords: [[String: Double]] = []
var trainingTargets: [Double] = []

for item in userPurchasedItems {
    // Add in the positive example.
    trainingKeywords.append(
        featuresFromMealAndKeywords(meal: item.meal, keywords: item.keywords))
    trainingTargets.append(1.0)

    // Add in the negative example.
    let negativeKeywords = allKeywords.subtracting(item.keywords)
    trainingKeywords.append(
        featuresFromMealAndKeywords(meal: item.meal, keywords: Array(negativeKeywords)))
    trainingTargets.append(-1.0)
}

// Create the training data.
var trainingData = DataFrame()
trainingData.append(column: Column(name: "keywords", contents: trainingKeywords))
trainingData.append(column: Column(name: "target", contents: trainingTargets))

// Create the model.
let model = try MLLinearRegressor(trainingData: trainingData, targetColumn: "target")

Did the DataFrame implementation change since then so that it doesn't support Dictionary anymore? I'm at a loss right now on how to reproduce their example.
4
6
713
Jan ’24
Word Tagging Model- How to change tagging unit
I created a word tagging model in Create ML and am trying to make predictions with it using the following code:

let text = "$30.00 7/1/2023"
let model = TaggingModel()
let input = TaggingModelInput(text: text)
guard let output = try? model.prediction(input: input) else {
    fatalError("Unexpected runtime error.")
}

However, the output separates "$" and "30.00" as separate tokens, as well as "7", "/", "1", "/", etc. Is there any way to make sure prices and dates get grouped together and to simply separate tokens based on whitespace? Any help is appreciated!
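I don't know of a switch on the generated model class for this, but one possible workaround (a sketch under the assumption that the tagger was trained on whitespace-delimited tokens): split the text yourself and feed the pre-tokenized input to NLModel. The TaggingModel name is taken from the post.

import NaturalLanguage
import CoreML

// Sketch: tag caller-supplied tokens so "$30.00" and "7/1/2023" stay whole.
func tagWhitespaceTokens(in text: String) throws -> [(token: String, label: String?)] {
    let nlModel = try NLModel(mlModel: try TaggingModel(configuration: MLModelConfiguration()).model)
    let tokens = text.split(separator: " ").map(String.init)   // whitespace tokenization
    let labels = nlModel.predictedLabels(forTokens: tokens)
    return zip(tokens, labels).map { (token: $0.0, label: $0.1) }
}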
1
0
589
Jan ’24
CreateML API train soft lock at 90%
Hello, I'm trying to train an MLImageClassifier dataset in Swift using the function MLImageClassifier.train. The behavior doesn't depend on the dataset size (I have the same problem with a smaller one): when the training reaches 9 of the 10 completedUnitCount, even though CPU usage is still high, a soft lock seems to occur that never brings the model to completion (or to an error). The dataset is made of JPG images, and no problem appears during training when using the Create ML app. Is there any known issue with the Create ML training APIs around part 9 of the process? Is there any information about this part of the training job? Thank you
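For comparison, here is a minimal sketch of the synchronous Create ML path on the same data (assumptions: a directory-per-label folder layout, and both paths below are hypothetical). If this completes, the hang is more likely specific to the asynchronous train/MLJob route.

import CreateML
import Foundation

// Sketch: train the same dataset through the synchronous initializer as a
// cross-check against MLImageClassifier.train. The paths are hypothetical;
// the training folder should contain one subfolder of JPGs per label.
func trainSynchronously() throws {
    let dataURL = URL(fileURLWithPath: "/path/to/TrainingImages")
    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: dataURL))
    let accuracy = 100 * (1 - classifier.trainingMetrics.classificationError)
    print("Training accuracy: \(accuracy)%")
    try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))
}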
1
0
712
Nov ’23
Updating CoreML model on device gives negative mean squared error loss
I converted a toy PyTorch regression model to a Core ML mlmodel using coremltools and set it to be updatable with mean_squared_error_loss. But when testing the training, context.metrics[.lossValue] can give a negative value, which is impossible. Furthermore, the context.metrics[.lossValue] result is very different from my own computed training loss, as shown in the screenshot attached. I was wondering if I used a wrong way to extract the training loss from the context? Does context.metrics[.lossValue] really give MSE if I used the coremltools function set_mean_squared_error_loss to set the loss? Any suggestion is appreciated. Since the validation loss decreases as epochs go, the model should indeed be updated correctly. I am using coremltools==7.0, xcode==15.0.1.

Here is my code to convert the PyTorch model to an updatable Core ML model:

import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams, AdamParams
from coremltools.models import datatypes

# Load the model specification
spec = coremltools.utils.load_spec('regression.mlmodel')
builder = NeuralNetworkBuilder(spec=spec)
builder.inspect_output_features()  # Name: linear_1

# Make layers updatable
builder.make_updatable(['linear_0', 'linear_1'])

# Manually add a mean squared error loss layer
feature = ('linear_1', datatypes.Array(1))
builder.set_mean_squared_error_loss(name='lossLayer', input_feature=feature)

# define the optimizer (Adam in this example)
adam_params = AdamParams(lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, batch=16)
builder.set_adam_optimizer(adam_params)

# Set the number of epochs
builder.set_epochs(100)

# Save the updated model
updated_model = coremltools.models.MLModel(spec)
updated_model.save('updatable_regression30.mlmodel')

Here is the code I use to try to update the saved updatable_regression30.mlmodel:

import CoreML
import GameKit

func generateSampleData(numSamples: Int, seed: UInt64) -> ([MLMultiArray], [MLMultiArray]) {
    // simple regression: y = 10 * sum(x) + 1
    var inputArray = [MLMultiArray]()
    var outputArray = [MLMultiArray]()

    // Create a random number generator with a fixed seed
    let randomSource = GKLinearCongruentialRandomSource(seed: seed)
    let randomDistribution = GKRandomDistribution(randomSource: randomSource, lowestValue: 0, highestValue: 1000)

    for _ in 0..<numSamples {
        do {
            let input = try MLMultiArray(shape: [1, 2], dataType: .float32)
            let output = try MLMultiArray(shape: [1], dataType: .float32)
            var sumInput: Float = 0
            for i in 0..<input.shape[1].intValue {
                // Generate random value using the fixed seed generator
                let inputValue = Float(randomDistribution.nextInt()) / 1000.0
                input[[0, i] as [NSNumber]] = NSNumber(value: inputValue)
                sumInput += inputValue
            }
            output[0] = NSNumber(value: 10.0 * sumInput + 1.0)
            inputArray.append(input)
            outputArray.append(output)
        } catch {
            print("Error occurred while creating MLMultiArrays: \(error)")
        }
    }
    return (inputArray, outputArray)
}

func computeLoss(model: MLModel, data: ([MLMultiArray], [MLMultiArray])) -> Double {
    let (inputData, outputData) = data
    var totalLoss: Double = 0
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        if let prediction = try? model.prediction(from: MLDictionaryFeatureProvider(dictionary: ["x": MLFeatureValue(multiArray: input)])),
           let predictedOutput = prediction.featureValue(for: "linear_1")?.multiArrayValue {
            let loss = (output[0].doubleValue - predictedOutput[0].doubleValue)
            totalLoss += loss * loss // squared error
        }
    }
    return totalLoss / Double(inputData.count) // mean of squared errors
}

func trainModel() {
    // Load the updatable model
    guard let updatableModelURL = Bundle.main.url(forResource: "updatable_regression30", withExtension: "mlmodelc") else {
        print("Failed to load the updatable model")
        return
    }

    // Generate sample data
    let (inputData, outputData) = generateSampleData(numSamples: 200, seed: 8)
    let validationData = generateSampleData(numSamples: 100, seed: 18)

    // Create an MLArrayBatchProvider from the sample data
    var featureProviders = [MLFeatureProvider]()
    for (index, input) in inputData.enumerated() {
        let output = outputData[index]
        let dataPointFeatures: [String: MLFeatureValue] = [
            "x": MLFeatureValue(multiArray: input),
            "linear_1_true": MLFeatureValue(multiArray: output)
        ]
        if let provider = try? MLDictionaryFeatureProvider(dictionary: dataPointFeatures) {
            featureProviders.append(provider)
        }
    }
    let batchProvider = MLArrayBatchProvider(array: featureProviders)

    // Define progress handlers
    let progressHandlers = MLUpdateProgressHandlers(forEvents: [.trainingBegin, .epochEnd],
        progressHandler: { context in
            switch context.event {
            case .trainingBegin:
                print("Training began.")
            case .epochEnd:
                let loss = context.metrics[.lossValue] as! Double
                let validationLoss = computeLoss(model: context.model, data: validationData)
                let computedTrainLoss = computeLoss(model: context.model, data: (inputData, outputData))
                print("Epoch \(context.metrics[.epochIndex]!) ended. Training Loss: \(loss), Computed Training Loss: \(computedTrainLoss), Validation Loss: \(validationLoss)")
            default:
                break
            }
        }
    )

    // Create an update task with progress handlers
    let updateTask = try! MLUpdateTask(forModelAt: updatableModelURL,
                                       trainingData: batchProvider,
                                       configuration: nil,
                                       progressHandlers: progressHandlers)

    // Start the update task
    updateTask.resume()
}

// call trainModel() to start training
1
0
639
Nov ’23
CoreML: how to use a NeuralNetworkBuilder to make a model updatable
I'm trying to create an updatable model, but this seems possible only by creating a neural network model from scratch and then, using the NeuralNetworkBuilder, calling the make_updatable method. But I have met a lot of problems along the way. In this example I try to open a converted ML model (neural network) using the NeuralNetworkBuilder:

import coremltools

model = coremltools.models.MLModel("SimpleImageClassifier.mlpackage")
spec = model.get_spec()
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers()

But I get this error on the line that creates the builder instance:

AttributeError: 'NoneType' object has no attribute 'layers'

I also tried to define a neural network using the NeuralNetworkBuilder, but then what do I have to do with this object? I didn't find a way to save it or convert it. The result I want is simple: the possibility to train the model further on the user's device to meet their needs. However, the way to obtain an updatable model seems incomprehensible. In my case, the model should be an image classifier. What approach should I follow to achieve this result? Thank you
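A likely explanation (an assumption from the .mlpackage extension): the model is an ML Program, which has no neuralNetwork layers for NeuralNetworkBuilder to inspect, hence the NoneType error. A minimal sketch of the neural-network route, converting with convert_to="neuralnetwork" and then marking layers updatable, is below; the traced model, layer names, and file names are hypothetical.

import coremltools as ct
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

# Sketch under assumptions: `traced_model` is a torch.jit.trace'd image classifier;
# the layer/feature names below are hypothetical - inspect_layers() shows the real ones.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="image", shape=(1, 3, 224, 224))],
    convert_to="neuralnetwork",   # updatable models need the neuralnetwork backend
)

spec = mlmodel.get_spec()
builder = NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=3)                          # find the trainable layer names
builder.make_updatable(["dense_1"])                     # hypothetical layer name
builder.set_categorical_cross_entropy_loss(name="loss", input="classLabelProbs")
builder.set_sgd_optimizer(SgdParams(lr=0.001, batch=8))
builder.set_epochs(10)

ct.models.MLModel(spec).save("UpdatableImageClassifier.mlmodel")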
1
0
857
Nov ’23
Two questions regarding converting a decoder into Core ML
I converted a decoder model into Core ML in the following way:

input_1 = ct.TensorType(name="input_1", shape=ct.Shape((1, ct.RangeDim(lower_bound=1, upper_bound=50), 512)), dtype=np.float32)
input_2 = ct.TensorType(name="input_2", shape=ct.Shape((1, ct.RangeDim(lower_bound=1, upper_bound=50), 512)), dtype=np.float32)
decoder_iOS2 = ct.convert(decoder_layer, inputs=[input_1, input_2])

But if I load the model in Xcode it gives me two errors.

Error 1: MLE5Engine is not currently supported for models with range shape inputs that try to utilize the Neural Engine.

Q1: As having a flexible input shape is in the nature of the decoder, I can ignore this error message, right? This is something that can't be fixed?

Error 2: doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/CB2207C5-B549-4868-AEB5-FFA7A3E24397/Photo2ASCII.app/Deocder_iOS_test2.mlmodelc/model.mil : sourceURL= (null) : key={"isegment":0,"inputs":{"input_1":{"shape":[512,1,1,1,1]},"input_2":{"shape":[512,1,1,1,1]}},"outputs":{"Identity":{"shape":[512,1,1,1,1]}}} : identifierSource=0 : cacheURLIdentifier=A93CE297F87F752D426002C8D1CE79094E614BEA1C0E96113228C8D3F06831FA_F055BF0F9A381C4C6DC99CE8FCF5C98E7E8B83EA5BF7CFD0EDC15EF776B29413 : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6885927629810 : intermediateBufferHandle=6885928772758 : queueDepth=127 } : state=3 : programHandle=6885927629810 : intermediateBufferHandle=6885928772758 : queueDepth=127 : attr={ ANEFModelDescription = { ANEFModelInput16KAlignmentArray = ( ); ANEFModelOutput16KAlignmentArray = ( ); ANEFModelProcedures = ( { ANEFModelInputSymbolIndexArray = ( 0, 1 ); ANEFModelOutputSymbolIndexArray = ( 0 ); ANEFModelProcedureID = 0; } ); kANEFModelInputSymbolsArrayKey = ( "input_1", "input_2" ); kANEFModelOutputSymbolsArrayKey = ( "Identity@output" ); kANEFModelProcedureNameToIDMapKey = { net = 0; }; }; NetworkStatusList = ( { LiveInputList = ( { BatchStride = 1024; Batches = 1; Channels = 1; Depth = 1; DepthStride = 1024; Height = 1; Interleave = 1; Name = "input_1"; PlaneCount = 1; PlaneStride = 1024; RowStride = 1024; Symbol = "input_1"; Type = Float16; Width = 512; }, { BatchStride = 1024; Batches = 1; Channels = 1; Depth = 1; DepthStride = 1024; Height = 1; Interleave = 1; Name = "input_2"; PlaneCount = 1; PlaneStride = 1024; RowStride = 1024; Symbol = "input_2"; Type = Float16; Width = 512; } ); LiveOutputList = ( { BatchStride = 1024; Batches = 1; Channels = 1; Depth = 1; DepthStride = 1024; Height = 1; Interleave = 1; Name = "Identity@output"; PlaneCount = 1; PlaneStride = 1024; RowStride = 1024; Symbol = "Identity@output"; Type = Float16; Width = 512; } ); Name = net; } ); } : perfStatsMask=0} was not loaded by the client.

Q2: Can I ignore this error message if I'm going to use the CPU/GPU when running the model?
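Regarding Q2, I can't confirm the warning is harmless, but if the intent is to keep the decoder off the Neural Engine entirely, the compute units can be restricted at load time. A minimal sketch is below; the generated class name is hypothetical.

import CoreML

// Sketch: load the converted decoder restricted to CPU and GPU so the ANE is
// never involved; "DecoderModel" stands in for the Xcode-generated model class.
func loadDecoderWithoutANE() throws -> DecoderModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // skip the Neural Engine
    return try DecoderModel(configuration: config)
}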
0
0
531
Nov ’23