Integrate machine learning models into your app using Core ML.

Posts under Core ML tag

104 Posts
Sort by: Post | Replies | Boosts | Views | Activity

This neural network model does not have a parameter for requested key 'precisionRecallCurves'
Hello, I am making a rock-paper-scissors game that uses object detection with a model I trained in Create ML on a dataset I found online. The trained model works, and I tried to integrate it into my Xcode project, but when I run the app I get this error: "This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler." I am still new to Create ML and cannot find anything about making a model updatable in Create ML.
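For context on the question: the precision/recall curves are evaluation metrics that Create ML reports at training time, and the error suggests something in the app is asking the model for them as a parameter at run time, which only an updatable model inside an MLUpdateTask handler can answer. A minimal sketch of plain inference through Vision that never touches parameter keys might look like the following; RPSDetector is a placeholder for whatever class Xcode generates from the .mlmodel, not the poster's actual code.

import Vision
import CoreML
import CoreGraphics

// "RPSDetector" is a hypothetical generated model class name; substitute your own.
func detectGesture(in image: CGImage) throws {
    let coreMLModel = try RPSDetector(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // A Create ML object-detection model returns VNRecognizedObjectObservation values.
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            let best = observation.labels.first
            print(best?.identifier ?? "unknown", best?.confidence ?? 0, observation.boundingBox)
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}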
0
2
559
Mar ’24
Better Results with Separate Sound Classifier Models?
I'm working with MLSoundClassifier to listen for two different sounds in a live audio stream. The team has been debating whether it is better to train two separate models, one per sound, or a single model trained on both sounds. Has anyone had experience with this? Some of us believe we have gotten better results with separate models, and some with a single model trained on both sounds. Thank you!
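Whichever way the training goes, the run-time plumbing is the same. A minimal sketch of streaming classification with SoundAnalysis, assuming a Create ML-trained classifier class named MySoundClassifier (a placeholder) and an AVAudioEngine tap you already have:

import SoundAnalysis
import AVFoundation

final class SoundResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("Analysis failed: \(error)")
    }
}

// `format` is the format of the audio you feed in, e.g. engine.inputNode.outputFormat(forBus: 0).
// The observer must be kept alive for as long as the analyzer runs, hence it is returned here.
func makeAnalyzer(format: AVAudioFormat) throws -> (SNAudioStreamAnalyzer, SoundResultsObserver) {
    let analyzer = SNAudioStreamAnalyzer(format: format)
    let observer = SoundResultsObserver()
    let model = try MySoundClassifier(configuration: .init()).model   // placeholder class name
    let request = try SNClassifySoundRequest(mlModel: model)
    try analyzer.add(request, withObserver: observer)
    return (analyzer, observer)
}

// In the audio tap: analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)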
0
0
387
Feb ’24
CoreML in playgrounds
How do I add an already-made Core ML model to my playground? I tried what people recommended online: build a test project, grab the .mlmodelc file, and put it in the playground along with the autogenerated class for the model. However, I keep getting errors:

Unexpected duplicate tasks
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb

Unexpected duplicate tasks
Showing Recent Issues
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel

ZooClassifier.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.
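The last error seems to suggest the uncompiled ZooClassifier.mlmodel is sitting in the playground's Resources folder, so the build tries to compile it and generate a class for it again. A sketch of the approach that avoids codegen entirely: ship only the compiled .mlmodelc folder in Resources and load it by URL (the resource name below is the poster's model; adjust as needed).

import CoreML

// Load a compiled model (a .mlmodelc folder) placed in the playground's Resources.
func loadCompiledModel() throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "ZooClassifier", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url)
}

// Alternatively, compile on-device from a raw .mlmodel shipped as a plain resource:
// let compiledURL = try MLModel.compileModel(at: rawModelURL)
// let model = try MLModel(contentsOf: compiledURL)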
1
0
532
Feb ’24
PyTorch convert function for op 'intimplicit' not implemented
I am trying to convert a traced PyTorch model with coremltools.converters.convert and I get an error: PyTorch convert function for op 'intimplicit' not implemented. I am trying to convert an RVC model from GitHub. I traced the model with torch.jit.trace and the conversion fails, so I narrowed the problematic part down to the ** layer: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/modules.py#L188

import torch
import coremltools as ct
from infer.lib.infer_pack.modules import **

model = **(192, 5, dilation_rate=1, n_layers=16, ***_channels=256, p_dropout=0)
model.remove_weight_norm()
model.eval()

test_x = torch.rand(1, 192, 200)
test_x_mask = torch.rand(1, 1, 200)
test_g = torch.rand(1, 256, 1)

traced_model = torch.jit.trace(model, (test_x, test_x_mask, test_g), check_trace=True)

x = ct.TensorType(name='x', shape=test_x.shape)
x_mask = ct.TensorType(name='x_mask', shape=test_x_mask.shape)
g = ct.TensorType(name='g', shape=test_g.shape)

mlmodel = ct.converters.convert(traced_model, inputs=[x, x_mask, g])

I get RuntimeError: PyTorch convert function for op 'intimplicit' not implemented. How could I modify **::forward so it does not generate an intimplicit operator? Thanks, David
1
0
518
Feb ’24
Core ML model crashes on a real Vision Pro device during App Store Connect review
I run a MiDaS Core ML model on device. It runs fine on the Vision Pro simulator and on a real iOS device, but crashes on a real Vision Pro. Crash message:

/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:550: failed assertion `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DA14.

Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-01-07.txt
Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-00-39.txt
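The assertion comes from the Metal Performance Shaders path, so a hedged workaround worth trying (not a confirmed fix) is to keep the model off the GPU on visionOS via MLModelConfiguration and see whether the CPU/Neural Engine path avoids the missing MPS kernel:

import CoreML

func loadDepthModel(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    #if os(visionOS)
    // Avoid the GPU/MPS path that triggers the ndArrayConvolution2DA14 assertion.
    config.computeUnits = .cpuAndNeuralEngine   // or .cpuOnly as a last resort
    #else
    config.computeUnits = .all
    #endif
    return try MLModel(contentsOf: url, configuration: config)
}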
2
0
569
Feb ’24
CoreML models persist in memory when changed
Hi, I'm using the apple/ml-stable-diffusion package with Core ML models running in GPU mode in my SwiftUI app. The problem I have, and that every other implementation I have tested shares, is that every time the model is changed, the old model still persists in memory until the app is restarted. This is the same for every app I have tested that uses this package. So my question is: can I kill the subprocesses or flush the memory? The package has memory-freeing functions, but they don't affect the loaded model. Any clues as to where I might start looking?
0
0
371
Feb ’24
Vision Pro CoreML inference 10x slower than M1 Mac/seems to run on CPU
I have a Core ML model that I run in my app Spatial Media Toolkit, which lets you convert 2D photos to Spatial. Running the model on my 13" M1 Mac gives 70 ms inference; running the exact same code on my Vision Pro takes 700 ms. I'm working on adding video support, but Vision Pro inference feels impossible at 700 ms per frame (about 20x slower than realtime for 30 fps: 1 second of video takes roughly 20 seconds!). There's an MLModelConfiguration you can provide, and when I force CPU I get the exact same performance. Either it's only running on the CPU, the Neural Engine is throttled, or maybe the GPU isn't allowed to help out. Disappointing, but it also feels like a software issue. I'd be curious whether anyone else has hit this or has any workarounds.
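A small sketch of the kind of comparison that can help tell those cases apart: load the same model once per compute-unit setting and time predictions. The model URL and input provider are placeholders for whatever the app already has.

import CoreML
import Foundation
import QuartzCore

func timePredictions(modelURL: URL, input: MLFeatureProvider, iterations: Int = 20) throws {
    let settings: [(String, MLComputeUnits)] = [
        ("all", .all),
        ("cpuAndNeuralEngine", .cpuAndNeuralEngine),
        ("cpuAndGPU", .cpuAndGPU),
        ("cpuOnly", .cpuOnly),
    ]
    for (name, units) in settings {
        let config = MLModelConfiguration()
        config.computeUnits = units
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        _ = try model.prediction(from: input)                    // warm-up / load cost
        let start = CACurrentMediaTime()
        for _ in 0..<iterations { _ = try model.prediction(from: input) }
        let avgMs = (CACurrentMediaTime() - start) / Double(iterations) * 1000
        print("\(name): \(String(format: "%.1f", avgMs)) ms/prediction")
    }
}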
0
0
433
Feb ’24
Source code for this video
I'm trying to learn vision apps, and I was wondering whether the actual .xcodeproj file is available anywhere. I understand there are snippets of code below the video, but it's difficult to learn how to build an app from those files since they focus only on the ML aspect. https://developer.apple.com/videos/play/wwdc2021/10039/ I'm looking for the code for this video specifically. I'm aware of the drawing code, but that is a relatively simple example to understand, and the Create ML material isn't prevalent in it.
2
0
443
Feb ’24
Swift Playground Bundle can't find Compiled CoreML Model (.mlmodelc)
I have been attempting to debug this for over 10 hours... I am working on implementing Apple's MobileNetV2 Core ML model into a Swift Playground. I performed the following steps:

Compiled the Core ML model in a regular Xcode project
Moved the compiled model (MobileNetV2.mlmodelc) to the Resources folder of the Swift Playground
Copy-pasted the model class (MobileNetV2.swift) into the Sources folder of the Swift Playground
Used UIImage extensions to resize and convert the UIImage into a CVPixelBuffer
Implemented basic code to run the model

However, every time I run this, it keeps giving me this error:

MobileNetV2.swift:100: Fatal error: Unexpectedly found nil while unwrapping an Optional value

from the automatically generated model class function:

/// URL of model assuming it was installed in the same bundle as this class
class var urlOfModelInThisBundle : URL {
    let bundle = Bundle(for: self)
    return bundle.url(forResource: "MobileNetV2", withExtension: "mlmodelc")!
}

The model builds perfectly. This is my ContentView code:

import SwiftUI

struct ContentView: View {
    func test() -> String {
        // 1. Load the image from the 'Resources' folder.
        let newImage = UIImage(named: "img")

        // 2. Resize the image to the required input dimension of the Core ML model
        //    Method from UIImage+Extension.swift
        let newSize = CGSize(width: 224, height: 224)
        guard let resizedImage = newImage?.resizeImageTo(size: newSize) else {
            fatalError("⚠️ The image could not be found or resized.")
        }

        // 3. Convert the resized image to CVPixelBuffer as it is the required input
        //    type of the Core ML model. Method from UIImage+Extension.swift
        guard let convertedImage = resizedImage.convertToBuffer() else {
            fatalError("⚠️ The image could not be converted to CVPixelBuffer")
        }

        // 1. Create the ML model instance from the model class in the 'Sources' folder
        let mlModel = MobileNetV2()

        // 2. Get the prediction output
        guard let prediction = try? mlModel.prediction(image: convertedImage) else {
            fatalError("⚠️ The model could not return a prediction")
        }

        // 3. Checking the results of the prediction
        let mostLikelyImageCategory = prediction.classLabel
        let probabilityOfEachCategory = prediction.classLabelProbs
        var highestProbability: Double {
            let probability = probabilityOfEachCategory[mostLikelyImageCategory] ?? 0.0
            let roundedProbability = (probability * 100).rounded(.toNearestOrEven)
            return roundedProbability
        }
        return "\(mostLikelyImageCategory): \(highestProbability)%"
    }

    var body: some View {
        VStack {
            let _ = print(test())
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text("Hello, world!")
            Image(uiImage: UIImage(named: "img")!)
        }
    }
}

Upon printing my bundle contents, I get:

["_CodeSignature", "metadata.json", "__PlaceholderAppIcon76x76@2x~ipad.png", "Info.plist", "__PlaceholderAppIcon60x60@2x.png", "coremldata.bin", "{App Name}", "PkgInfo", "Assets.car", "embedded.mobileprovision"]

Anything would help 🙏 For additional reference, here are my UIImage extensions in ExtImage.swift:

// Huge thanks to @mprecke on github for these UIImage extension functions.
import Foundation
import UIKit

extension UIImage {
    func resizeImageTo(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func convertToBuffer() -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(self.size.width),
            Int(self.size.height),
            kCVPixelFormatType_32ARGB,
            attributes,
            &pixelBuffer)
        guard status == kCVReturnSuccess else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(
            data: pixelData,
            width: Int(self.size.width),
            height: Int(self.size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
            space: rgbColorSpace,
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()

        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pixelBuffer
    }
}
5
1
1.3k
Feb ’24
Vision Pro & Vision SDK
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection. https://developer.apple.com/videos/play/wwdc2023/111241/ It's clear that I can apply it to images I provide myself, but what about the data coming from the visionOS SDKs? All I can find is the mesh data from ARKit (https://developer.apple.com/documentation/arkit/arkit_in_visionos). Am I missing something, or do we not yet have good APIs for this? I appreciate any guidance! Thanks.
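For the "self-provided images" half of the question, a minimal sketch of running body pose detection with Vision on a frame you already have (for example, a CVPixelBuffer from your own pipeline or a decoded photo) looks roughly like this; whether visionOS will hand you a raw camera feed to push through it is a separate question:

import Vision
import CoreVideo

func detectBodyPose(in pixelBuffer: CVPixelBuffer) throws {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
    try handler.perform([request])

    for observation in request.results ?? [] {
        // Joints come back as normalized image points with confidences.
        let joints = try observation.recognizedPoints(.all)
        if let rightWrist = joints[.rightWrist], rightWrist.confidence > 0.3 {
            print("right wrist at \(rightWrist.location)")
        }
    }
}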
2
0
851
Feb ’24
Who will build first app that I can use while I sleep (quite literally)(24/7 immersion)?
In theory, sending signals from iPhone apps to and from the brain with non-invasive technology could be achieved through a combination of brain-computer interface (BCI) technologies, machine learning algorithms, and mobile app development.

Brain-Computer Interface (BCI): BCI technology can be used to record brain signals and translate them into commands that can be understood by a computer or a mobile device. Non-invasive BCIs, such as electroencephalography (EEG), can track brain activity using sensors placed on or near the head[6]. For instance, a portable, non-invasive, mind-reading AI developed by UTS uses an AI model called DeWave to translate EEG signals into words and sentences[3].

Machine Learning Algorithms: Machine learning algorithms can be used to analyze and interpret the brain signals recorded by the BCI. These algorithms can learn from large quantities of EEG data to translate brain signals into specific commands[3].

Mobile App Development: A mobile app can be developed to receive these commands and perform specific actions on the iPhone. The app could also potentially send signals back to the brain using technologies like transcranial magnetic stimulation (TMS), which can deliver information to the brain[5].

However, it's important to note that while this technology is theoretically possible, it's still in the early stages of development and faces significant technical and ethical challenges. Current non-invasive BCIs do not have the same level of fidelity as invasive devices, and the practical application of these systems is still limited[1][3]. Furthermore, ethical considerations around privacy, consent, and the potential for misuse of this technology must also be addressed[13].

Sources
[1] You can now use your iPhone with your brain after a major breakthrough | Semafor https://www.semafor.com/article/11/01/2022/you-can-now-use-your-iphone-with-your-brain
[2] ! Are You A Robot? https://www.sciencedirect.com/science/article/pii/S1110866515000237
[3] Portable, non-invasive, mind-reading AI turns thoughts into text https://techxplore.com/news/2023-12-portable-non-invasive-mind-reading-ai-thoughts.html
[4] Elon Musk's Neuralink implants brain chip in first human https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/
[5] BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains - Scientific Reports https://www.nature.com/articles/s41598-019-41895-7
[6] Brain-computer interfaces and the future of user engagement https://www.fastcompany.com/90802262/brain-computer-interfaces-and-the-future-of-user-engagement
[7] Mobile App + Wearable For Neurostimulation - Accion Labs https://www.accionlabs.com/mobile-app-wearable-for-neurostimulation
[8] Signal Generation, Acquisition, and Processing in Brain Machine Interfaces: A Unified Review https://www.frontiersin.org/articles/10.3389/fnins.2021.728178/full
[9] Mind-reading technology has arrived https://www.vox.com/future-perfect/2023/5/4/23708162/neurotechnology-mind-reading-brain-neuralink-brain-computer-interface
[10] Synchron Brain Implant - Breakthrough Allows You to Control Your iPhone With Your Mind - Grit Daily News https://gritdaily.com/synchron-brain-implant-controls-tech-with-the-mind/
[11] Mind uploading - Wikipedia https://en.wikipedia.org/wiki/Mind_uploading
[12] BirgerMind - Express your thoughts loudly https://birgermind.com
[13] Elon Musk wants to merge humans with AI. How many brains will be damaged along the way? https://www.vox.com/future-perfect/23899981/elon-musk-ai-neuralink-brain-computer-interface
[14] Models of communication and control for brain networks: distinctions, convergence, and future outlook https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7655113/
[15] Mind Control for the Masses—No Implant Needed https://www.wired.com/story/nextmind-noninvasive-brain-computer-interface/
[16] Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot
[17] Essa and Kotte https://arxiv.org/pdf/2201.04229.pdf
[18] Synchron's Brain Implant Breakthrough Lets Users Control iPhones And iPads With Their Mind https://hothardware.com/news/brain-implant-breakthrough-lets-you-control-ipad-with-your-mind
[19] An Apple Watch for Your Brain https://www.thedeload.com/p/an-apple-watch-for-your-brain
[20] Toward an information theoretical description of communication in brain networks https://direct.mit.edu/netn/article/5/3/646/97541/Toward-an-information-theoretical-description-of
[21] A soft, wearable brain–machine interface https://news.ycombinator.com/item?id=28447778
[22] Portable neurofeedback App https://www.psychosomatik.com/en/portable-neurofeedback-app/
[23] Intro to Brain Computer Interface http://learn.neurotechedu.com/introtobci/
0
1
602
Feb ’24
Is the Apple Neural Scene Analyzer (ANSA) backbone available to devs
Hello, my understanding of the paper below is that iOS ships with a MobileNetV3-based ML model backbone, which then uses different heads for specific tasks in iOS. I understand that this backbone is accessible for various uses through the Vision framework, but I was wondering whether it is also accessible for on-device fine-tuning for other purposes. Just as an example, if I want a model to detect some unique object in a photo, can I use the built-in backbone, or do I have to include my own in the app? Thanks very much for any advice, and apologies if I didn't understand something correctly. Source: https://machinelearning.apple.com/research/on-device-scene-analysis
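As context for the question: the closest public API I know of for tapping a system-provided vision backbone from app code is Vision's image feature print, which exposes an embedding you could feed into a small custom classifier (for example a nearest-neighbor lookup or a Create ML tabular model). Whether this is the same backbone described in the paper is not something I can confirm; the sketch below only shows the feature print API itself.

import Vision
import CoreGraphics

// Compute an embedding ("feature print") for an image using the system's built-in vision model.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results?.first
}

// Example: compare two images by embedding distance (smaller means more similar).
func distance(_ a: VNFeaturePrintObservation, _ b: VNFeaturePrintObservation) throws -> Float {
    var d: Float = 0
    try a.computeDistance(&d, to: b)
    return d
}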
1
0
502
Feb ’24
Why is there a neural engine data copy in Core ML NPU prediction?
I am currently facing a performance issue while using Core ML on iOS 16+ devices to run a simple grid_sample model. When profiling the model with the Xcode profiler, I noticed that before each NPU computation there is a significant delay caused by the "input copy" and "neural engine-data copy" operations. I have specified that both the input and output of the model are float16, so there shouldn't be any data type conversion. I would appreciate any insights or suggestions regarding the reasons behind this delay and possible solutions.

My simple model is:

import os
import numpy as np
import torch
import torch.nn.functional as F
import coremltools
import coremltools as ct

class GridSample(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
        output = F.grid_sample(
            input,
            grid.to(input),
            mode='nearest',
            padding_mode='zeros',
            align_corners=True,
        )
        return output

tr_input = torch.randn((8, 64, 512, 512))
tr_grid = torch.randn((8, 256, 256, 2))

simple_model = GridSample()
simple_model.eval()
traced_model = torch.jit.trace(simple_model, [tr_input, tr_grid])

coreml_input = [
    coremltools.TensorType(name="image_input", shape=tr_input.shape, dtype=np.float16),
    coremltools.TensorType(name="warp_grid", shape=tr_grid.shape, dtype=np.float16),
]

mlmodel = coremltools.converters.convert(
    traced_model,
    inputs=coreml_input,
    convert_to="mlprogram",
    minimum_deployment_target=coremltools.target.iOS16,
    compute_units=coremltools.ComputeUnit.ALL,
    compute_precision=coremltools.precision.FLOAT16,
    outputs=[ct.TensorType(name="x0", dtype=np.float16)],
    debug=False,
)

mlmodel.save("./grid_sample.mlpackage")
os.system("xcrun coremlcompiler compile './grid_sample.mlpackage' './'")
0
0
493
Feb ’24
CoreML Conversion of TensorFlow Keras NN fails on Iris Data set
On tf version 2.11.0, I have tried to follow a fairly standard NN example in order to convert to a Core ML model. However, I cannot get this to work, and I'm not clear where it is going wrong. It would seem to be a fairly standard task, a toy example, and I can't see why the conversion would fail. Any help would be appreciated. I have tried the different approaches listed below, but it seems the conversion should just work. I have also tried running the same code pinned to:

tensorflow==2.6.2
scikit-learn==0.19.2
pandas==1.1.1

and get a different sequence of errors. The Python code I used mostly comes from this example: https://lnwatson.co.uk/posts/intro_to_nn/

import pandas as pd
import numpy as np
import tensorflow as tf
import torch
from sklearn.model_selection import train_test_split
from tensorflow import keras
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
np.bool = np.bool_
np.int = np.int_

print("tf version", tf.__version__)

csv_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Class']
df = pd.read_csv(csv_url, names=col_names)

labels = df.pop('Class')
labels = pd.get_dummies(labels)

X = df.values
y = labels.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2)

model = keras.Sequential()
model.add(keras.layers.Dense(16, activation='relu', input_shape=(4,)))
model.add(keras.layers.Dense(3, activation='softmax'))
model.summary()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=12, epochs=200, validation_data=(X_val, y_val))

import coremltools as ct

# Pass in `tf.keras.Model` to the Unified Conversion API
mlmodel = ct.convert(model, convert_to="mlprogram")
# mlmodel = ct.convert(model, source="tensorflow")
# mlmodel = ct.convert(model, convert_to="neuralnetwork")
# mlmodel = ct.convert(
#     model,
#     source="tensorflow",
#     inputs=[ct.TensorType(name="input")],
#     outputs=[ct.TensorType(name="output")],
#     minimum_deployment_target=ct.target.iOS14,
# )

When using any of these three:

mlmodel = ct.convert(model, convert_to="mlprogram")
mlmodel = ct.convert(model, source="tensorflow")
mlmodel = ct.convert(model, convert_to="neuralnetwork")

I get:

mlmodel2 = ct.convert(model, source="tensorflow")
ValueError: Const node 'sequential_5/dense_10/MatMul/ReadVariableOp' cannot have no value
ERROR:root:sequential_5/dense_11/BiasAdd/ReadVariableOp:0
ERROR:root:[ 0.34652767  0.16202268 -0.3554725 ]
Running TensorFlow Graph Passes: 100%|██████████| 5/5 [00:00<00:00, 28.76 passes/s]
Converting Frontend ==> MIL Ops:   8%|█         | 1/12 [00:00<00:00, 16710.37 ops/s]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/Documents/CoreML Basic Models/NN_Keras_Iris.py:142
    130 import coremltools as ct
    131 # Pass in `tf.keras.Model` to the Unified Conversion API
    132 # mlmodel = ct.convert(model, convert_to="mlprogram")
    133 (...)
    140
    141 # ct.convert(mymodel(), source="tensorflow")
--> 142 mlmodel2 = ct.convert(model, source="tensorflow")
    144 mlmodel = ct.convert(
    145     model,
    146     source="tensorflow",
    (...)
    153     minimum_deployment_target=ct.target.iOS14,
    154 )
....
File ~/opt/anaconda3/envs/coreml_env/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/ops.py:430, in Const(context, node)
    427 @register_tf_op
    428 def Const(context, node):
    429     if node.value is None:
--> 430         raise ValueError("Const node '{}' cannot have no value".format(node.name))
    431     mode = get_const_mode(node.value.val)
    432     x = mb.const(val=node.value.val, mode=mode, name=node.name)

ValueError: Const node 'sequential_5/dense_10/MatMul/ReadVariableOp' cannot have no value

Second approach: I also tried specifying the input type with TensorType. However, when specifying the inputs and outputs I get a different error. I have tried variations on this initialiser, but all produce the same error; the variations revolve around adding input_shape and dtype=np.float32.

mlmodel = ct.convert(
    model,
    source="tensorflow",
    inputs=[ct.TensorType(name="input")],
    outputs=[ct.TensorType(name="output")],
    minimum_deployment_target=ct.target.iOS14,
)

File ~/opt/anaconda3/envs/coreml_env/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py:106, in <listcomp>(.0)
    104 logging.debug(msg.format(outputs))
    105 outputs = outputs if isinstance(outputs, list) else [outputs]
--> 106 outputs = [i.split(":")[0] for i in outputs]
    107 if _get_version(tf.__version__) < _StrictVersion("1.13.1"):
    108     return tf.graph_util.extract_sub_graph(graph_def, outputs)

AttributeError: 'TensorType' object has no attribute 'split'
0
0
485
Jan ’24
Core ML Hand Pose classification: effects don't appear on the camera
I created a Hand Pose model using Create ML and integrated it into my SwiftUI project app. While coding, I referred to the Apple Developer documentation app for the necessary code. However, when I ran the app on an iPhone 14, the camera didn't display any effects or finger numbers as expected. Note: I've already tested the ML model separately, and it works fine.

The code:

import CoreML
import SceneKit
import SwiftUI
import Vision
import ARKit

struct ARViewContainer: UIViewControllerRepresentable {
    let arViewController: ARViewController
    let model: modelHand

    func makeUIViewController(context: UIViewControllerRepresentableContext<ARViewContainer>) -> ARViewController {
        arViewController.model = model
        return arViewController
    }

    func updateUIViewController(_ uiViewController: ARViewController, context: UIViewControllerRepresentableContext<ARViewContainer>) {
        // Update the view controller if needed
    }
}

class ARViewController: UIViewController, ARSessionDelegate {
    var frameCounter = 0
    let handPosePredictionInterval = 10
    var model: modelHand!
    var effectNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        let arView = ARSCNView(frame: view.bounds)
        view.addSubview(arView)
        let session = ARSession()
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .personSegmentationWithDepth
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage
        let handPoseRequest = VNDetectHumanHandPoseRequest()
        handPoseRequest.maximumHandCount = 1
        handPoseRequest.revision = VNDetectHumanHandPoseRequestRevision1

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        do {
            try handler.perform([handPoseRequest])
        } catch {
            assertionFailure("Hand Pose Request failed: \(error)")
        }

        guard let handPoses = handPoseRequest.results, !handPoses.isEmpty else { return }

        if frameCounter % handPosePredictionInterval == 0 {
            if let handObservation = handPoses.first as? VNHumanHandPoseObservation {
                do {
                    let keypointsMultiArray = try handObservation.keypointsMultiArray()
                    let handPosePrediction = try model.prediction(poses: keypointsMultiArray)
                    let confidence = handPosePrediction.labelProbabilities[handPosePrediction.label]!
                    print("Confidence: \(confidence)")
                    if confidence > 0.9 {
                        print("Rendering hand pose effect: \(handPosePrediction.label)")
                        renderHandPoseEffect(name: handPosePrediction.label)
                    }
                } catch {
                    fatalError("Failed to perform hand pose prediction: \(error)")
                }
            }
        }
    }

    func renderHandPoseEffect(name: String) {
        switch name {
        case "One":
            print("Rendering effect for One")
            if effectNode == nil {
                effectNode = addParticleNode(for: "One")
            }
        default:
            print("Removing all particle nodes")
            removeAllParticleNode()
        }
    }

    func removeAllParticleNode() {
        effectNode?.removeFromParentNode()
        effectNode = nil
    }

    func addParticleNode(for poseName: String) -> SCNNode {
        print("Adding particle node for pose: \(poseName)")
        let particleNode = SCNNode()
        return particleNode
    }
}

struct ContentView: View {
    let model = modelHand()

    var body: some View {
        ARViewContainer(arViewController: ARViewController(), model: model)
    }
}

#Preview {
    ContentView()
}
0
0
509
Jan ’24
Need Help with Create ML in Xcode - Unexpected App Closure
Hello Apple Developer community, I hope this message finds you well. I am currently facing an issue with Create ML in Xcode, and I am seeking assistance from the knowledgeable members of this forum. Any help or guidance would be greatly appreciated. Problem Description: I am encountering an unexpected issue when attempting to create a classification model for images using Create ML in Xcode. Upon opening Create ML, the application closes unexpectedly when I choose to create a new image classification model. Steps I Have Taken: I have already tried the following steps to troubleshoot the issue: Updated Xcode and macOS to the latest versions. Restarted Xcode and my computer. Created a new sample project to isolate the issue. Despite these efforts, the problem persists. System Information: Xcode Version: 15.2 macOS Version: Sonoma 14.0 I am on a tight deadline for a project, and resolving this issue quickly is crucial. Your help is invaluable, and I thank you in advance for any support you can provide. Best regards.
1
0
572
Jan ’24
Core ML MLOneHotEncoder Error Post-Update: "unknown category String"
Apple Developer community, I recently updated Xcode and Core ML from version 13.0.1 to 14.1.2 and am facing an issue with the MLOneHotEncoder in my Core ML classifier. The same code and data that worked fine in the previous version now throw an error during predictions. The error message is: MLOneHotEncoder: unknown category String [TERM] expected one of This seems to suggest that the MLOneHotEncoder is not handling unknown strings, as it did in the previous version. Here's a brief overview of my situation: Core ML Model: The model is a classifier that uses MLOneHotEncoder for processing categorical data. Data: The same dataset is used for training and predictions, which worked fine before the update. Error Context: The error occurs at the prediction stage, not during training. I have checked for data consistency and confirmed that the dataset is the same as used with the previous version. Here are my questions: Has there been a change in how MLOneHotEncoder handles unknown categories in Core ML version 14.1.2? Are there any recommended practices for handling unknown string categories with MLOneHotEncoder in the updated Core ML version? Is there a need to modify the model training code or data preprocessing steps to accommodate changes in the new Core ML version? I would appreciate any insights or suggestions on how to resolve this issue. If additional information is needed, I am happy to provide it. Thank you for your assistance!
1
0
436
Jan ’24