Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

80 Posts
Post not yet marked as solved
0 Replies
31 Views
Hello, I have created a neural network → k-Nearest Neighbors classifier with Python (a feature extractor followed by k-Nearest Neighbors for classification):

import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy

# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]

preprocessing = base_spec.neuralNetworkClassifier.preprocessing

# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)

# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)

# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")

# Remove the old classifier outputs.
del base_spec.description.output[:]

# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"

# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")

knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True

pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])

pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"

pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/ It works and I was able to include it in my project. I want to train the model via an MLUpdateTask:

var batchInputs: [MLFeatureProvider] = []
let imageconstraint = (model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint)
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue]
var featureProviders = [MLFeatureProvider]()

// URLs where images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL,
                                              constraint: imageconstraint!,
                                              options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
// Batch provider handed to the update task.
let trainingBatch = MLArrayBatchProvider(array: batchInputs)

When I call the MLUpdateTask as follows, the context.model from the completionHandler is null. Unfortunately there is no other information available from the compiler.

do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()

I get the following error when I try to access context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0). Can someone more experienced tell me how to fix this? It seems like I am missing some parameters. I am currently not splitting the data into training and test sets; the only preprocessing I am doing is scaling the images down to 227x227 pixels. Thanks!
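For comparison, here is a minimal sketch of the completion-handler style of MLUpdateTask, assuming the compiled pipeline ships in the bundle as FaceDetection.mlmodelc and reusing ModelManager.targetURL from the snippet above; the surrounding function and names are illustrative, not the poster's actual code.

import CoreML

func runUpdate(batch: MLBatchProvider) throws {
    // Compiled, updatable pipeline model assumed to be bundled with the app.
    guard let modelURL = Bundle.main.url(forResource: "FaceDetection", withExtension: "mlmodelc") else { return }

    // The updated model in `context.model` is only meaningful inside this closure,
    // and only if the task finished without an error.
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: batch,
                                configuration: nil,
                                completionHandler: { context in
        if let error = context.task.error {
            debugPrint("Update failed: \(error)")
            return
        }
        do {
            try context.model.write(to: ModelManager.targetURL)
        } catch {
            debugPrint("Error saving the model \(error)")
        }
    })
    task.resume()
}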
Posted by
Post not yet marked as solved
0 Replies
83 Views
Hi, just got an Apple M3 Pro to try it out on some Jax operations. I see the development is actively ongoing so maybe this error can help. This is the environment: Metal device set to: Apple M3 Pro systemMemory: 18.00 GB maxCacheSize: 6.00 GB jax: 0.4.26 jaxlib: 0.4.23 numpy: 1.26.4 python: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:49:36) [Clang 16.0.6 ] jax.devices (1 total, 1 local): [METAL(id=0)] process_count: 1 platform: uname_result(system='Darwin', node='MKFL96VR9YT', release='23.4.0', version='Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030', machine='arm64') This is a minimal example which produces an error, I think due to the fft part: from jax import numpy as np array = np.ones((16, 16)) np.fft.fft2(array) This is the full traceback: Traceback (most recent call last): File "/Users/user/Downloads/wow.py", line 5, in <module> np.fft.fft2(array) File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/numpy/fft.py", line 216, in fft2 return _fft_core_2d('fft2', xla_client.FftType.FFT, a, s=s, axes=axes, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/numpy/fft.py", line 210, in _fft_core_2d return _fft_core(func_name, fft_type, a, s, axes, norm) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/numpy/fft.py", line 102, in _fft_core transformed = lax.fft(arr, fft_type, tuple(s)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/traceback_util.py", line 179, in reraise_with_filtered_traceback return fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 298, in cache_miss outs, out_flat, out_tree, args_flat, jaxpr, attrs_tracked = _python_pjit_helper( ^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 176, in _python_pjit_helper out_flat = pjit_p.bind(*args_flat, **params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/core.py", line 2788, in bind return self.bind_with_trace(top_trace, args, params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/core.py", line 425, in bind_with_trace out = trace.process_primitive(self, map(trace.full_raise, args), params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/core.py", line 913, in process_primitive return primitive.impl(*tracers, **params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 1494, in _pjit_call_impl return xc._xla.pjit(name, f, call_impl_cache_miss, [], [], donated_argnums, # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 1471, in call_impl_cache_miss out_flat, compiled = _pjit_call_impl_python( ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 1406, in _pjit_call_impl_python lowering_parameters=mlir.LoweringParameters()).compile() ^^^^^^^^^ File 
"/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/interpreters/pxla.py", line 2369, in compile executable = UnloadedMeshExecutable.from_hlo( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/interpreters/pxla.py", line 2908, in from_hlo xla_executable, compile_options = _cached_compilation( ^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/interpreters/pxla.py", line 2718, in _cached_compilation xla_executable = compiler.compile_or_get_cached( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/compiler.py", line 266, in compile_or_get_cached return backend_compile(backend, computation, compile_options, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/profiler.py", line 335, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/compiler.py", line 238, in backend_compile return backend.compile(built_c, compile_options=options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ jaxlib.xla_extension.XlaRuntimeError: UNKNOWN: <unknown>:0: error: 'func.func' op One or more function input/output data types are not supported. <unknown>:0: note: see current operation: "func.func"() <{arg_attrs = [{mhlo.layout_mode = "default", mhlo.sharding = "{replicated}"}], function_type = (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>>, res_attrs = [{jax.result_info = "", mhlo.layout_mode = "default"}], sym_name = "main", sym_visibility = "public"}> ({ ^bb0(%arg0: tensor<16x16xf32>): %0 = "mhlo.convert"(%arg0) : (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>> %1 = "mhlo.fft"(%0) {fft_length = dense<16> : tensor<2xi64>, fft_type = #mhlo<fft_type FFT>} : (tensor<16x16xcomplex<f32>>) -> tensor<16x16xcomplex<f32>> "func.return"(%1) : (tensor<16x16xcomplex<f32>>) -> () }) : () -> () <unknown>:0: error: failed to legalize operation 'func.func' <unknown>:0: note: see current operation: "func.func"() <{arg_attrs = [{mhlo.layout_mode = "default", mhlo.sharding = "{replicated}"}], function_type = (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>>, res_attrs = [{jax.result_info = "", mhlo.layout_mode = "default"}], sym_name = "main", sym_visibility = "public"}> ({ ^bb0(%arg0: tensor<16x16xf32>): %0 = "mhlo.convert"(%arg0) : (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>> %1 = "mhlo.fft"(%0) {fft_length = dense<16> : tensor<2xi64>, fft_type = #mhlo<fft_type FFT>} : (tensor<16x16xcomplex<f32>>) -> tensor<16x16xcomplex<f32>> "func.return"(%1) : (tensor<16x16xcomplex<f32>>) -> () }) : () -> () I'd be happy running more tests should you need them, I'm new to this, so not sure which just yet. Many thanks!!
Posted by
Post not yet marked as solved
0 Replies
179 Views
Hey, I just created and trained an MLImageClassifier via the MLImageClassifier.train() method (https://developer.apple.com/documentation/createml/mlimageclassifier/train(trainingdata:parameters:sessionparameters:)). For my training data (MLImageClassifier.DataSource) I am using my directory structure: I have an images folder with subfolders person1, person2, person3 etc., which contain images of the labeled persons (https://developer.apple.com/documentation/createml/mlimageclassifier/datasource/labeleddirectories(at:)). I am saving the checkpoints and sessions in my app directory, so I can create an MLImageClassifier from an existing MLSession and/or MLCheckpoint. My question is: is there any way to add new labels, ideally from my directory structure, to an MLImageClassifier which I create from an existing MLCheckpoint/MLSession? For example, adding a person4 and training my pretrained classifier with only that person4. Or is it simply not possible, and do I have to train from the beginning every time I want to add a new label? Unfortunately I cannot find anything in the API. Thanks!
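As far as I can tell the checkpoint is tied to the label set it was created with, so the fallback would be rebuilding the data source with the extra directory and retraining; a minimal sketch with hypothetical paths (the directory layout is the one described above):

import CreateML
import Foundation

// Hypothetical path: "images" contains person1, person2, person3 and the new person4 subfolders.
let imagesURL = URL(fileURLWithPath: "/path/to/images")
let trainingData = MLImageClassifier.DataSource.labeledDirectories(at: imagesURL)

// Retrains from scratch with the enlarged label set.
let classifier = try MLImageClassifier(trainingData: trainingData)
try classifier.write(to: URL(fileURLWithPath: "/path/to/People.mlmodel"))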
Posted by
Post not yet marked as solved
0 Replies
130 Views
Hey, I'm training an MLImageClassifier via the train() method:

guard let job = try? MLImageClassifier.train(trainingData: trainingData,
                                             parameters: modelParameter,
                                             sessionParameters: sessionParameters) else {
    debugPrint("Training failed")
    return
}

Unfortunately, the metrics of my MLProgress, which is created from the returned MLJob while training, are empty. Code for listening to progress:

job.progress.publisher(for: \.fractionCompleted)
    .sink { [weak job] fractionCompleted in
        guard let job = job else {
            debugPrint("failure in creating job")
            return
        }
        guard let progress = MLProgress(progress: job.progress) else {
            debugPrint("failure in creating progress")
            return
        }
        print("ProgressPROGRESS: \(progress)")
        print("Progress: \(fractionCompleted)")
    }
    .store(in: &subscriptions)

Printing the progress ends in:

MLProgress(elapsedTime: 2.2328420877456665, phase: CreateML.MLPhase.extractingFeatures, itemCount: 32, totalItemCount: Optional(39), metrics: [:])

I got the same result when listening to MLCheckpoints; the metrics are empty as well:

MLCheckpoint(url: URLPATH.checkpoint, phase: CreateML.MLPhase.extractingFeatures, iteration: 32, date: 2024-04-18 11:21:18 +0000, metrics: [:])

Can someone tell me how I can access the metrics while training? Thanks!
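One thing worth checking: both snapshots above were taken during CreateML.MLPhase.extractingFeatures, so it may simply be that the metrics dictionary only fills in once the phase switches to training (an assumption on my part, not confirmed by the docs). A small sketch that filters for that, reusing the job and subscriptions from the code above:

job.progress.publisher(for: \.fractionCompleted)
    .sink { [weak job] _ in
        guard let job = job,
              let progress = MLProgress(progress: job.progress) else { return }
        // Only inspect metrics once feature extraction has finished.
        if progress.phase == .training {
            print("Training metrics so far: \(progress.metrics)")
        }
    }
    .store(in: &subscriptions)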
Posted by
Post not yet marked as solved
0 Replies
246 Views
Hello Developers, we are trying to convert PyTorch models to Core ML using coremltools. At first we used jit.trace to create a trace of the model, but we got a warning that tracing is not advisable for models with control flow and conditions, and that we should convert to TorchScript with jit.script instead. The conversion to TorchScript succeeded, but in the next step, converting from TorchScript to Core ML with the coremltools Python package, we get the error below. The root error is so abstract that we are not able to trace back where it is occurring.

AssertionError: Item selection is supported only on python list/tuple objects

We pasted the error into ChatGPT and got the response below, but unfortunately it did not help: "The error indicates that the Core ML converter encountered a TorchScript operation involving item selection (indexing or slicing) on an object that it doesn't recognize as a Python list or tuple. The converter supports item selection only on these Python container types. This could happen if your model uses indexing on tensors or other types not recognized as list or tuple by the Core ML tools. You may need to revise the TorchScript code to ensure it only performs item selection on supported types or adjust the way tensors are indexed."
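For reference, a minimal sketch of the script-then-convert path being described, with a toy module and illustrative names/shapes (not the actual model). A stripped-down round trip like this can serve as a baseline for bisecting which submodule of the real network triggers the item-selection assertion:

import torch
import coremltools as ct

class Toy(torch.nn.Module):
    def forward(self, x):
        # Simple data-dependent control flow, which is why jit.script is used
        # instead of jit.trace.
        if torch.mean(x) > 0:
            return x * 2
        return x - 1

scripted = torch.jit.script(Toy().eval())

# Scripted models need explicit input shapes for conversion.
mlmodel = ct.convert(
    scripted,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 8, 8))],
    convert_to="mlprogram",
)
mlmodel.save("toy.mlpackage")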
Posted by
Post not yet marked as solved
2 Replies
252 Views
I have a trained model to identify squats (good and bad repetitions). It seems to work perfectly in Create ML when I preview it with some test data, but once I add it to my app the model seems to be inaccurate and mixes up the actions the majority of the time. Does anyone know if the issue is code related, or is it something to do with the model itself and how it analyses live data? Below I have added one of my functions for "Good Squats", which most of the time doesn't even get called (even with lower confidence). The majority of the time the model classifies everything as a bad squat even though it is clearly not. Could the problem be that my dataset doesn't have enough videos?

    print("GoodForm")
    squatDetected = true
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.5) {
        self.squatDetected = false
    }
    DispatchQueue.main.async {
        self.showGoodFormAlert(with: confidence)
        AudioServicesPlayAlertSound(SystemSoundID(1322))
    }
}

Any help would be appreciated.
Posted by
Post not yet marked as solved
0 Replies
331 Views
Hello, I have been following the excellent and informative "Metal for Machine Learning" session from WWDC19 to learn how to do on-device training (I have a specific use case for this), and it is all working really well using MPSNNGraph. However, I would like to call my own Metal compute/render function/pipeline to transform the inference result before calculating the loss. Does anyone know if this is possible and what it would look like in code? Please see my current code below; at the comment I need to call an intermediate compute/render function to transform the inference result image before passing it to the MPSNNForwardLossNode.

let rgbImageNode = MPSNNImageNode(handle: nil)
let inferGraph = makeInferenceGraph()
let reshape = MPSNNReshapeNode(source: inferGraph.resultImage,
                               resultWidth: 64,
                               resultHeight: 64,
                               resultFeatureChannels: 4)

// Need to call a render or compute pipeline here to post-process the inference result image.
let rgbLoss = MPSNNForwardLossNode(source: reshape.resultImage,
                                   labels: rgbImageNode,
                                   lossDescriptor: lossDescriptor)
let initGrad = MPSNNInitialGradientNode(source: rgbLoss.resultImage)
let gradNodes = initGrad.trainingGraph(withSourceGradient: nil, nodeHandler: nil)

guard let trainGraph = MPSNNGraph(device: device,
                                  resultImage: gradNodes![0].resultImage,
                                  resultImageIsNeeded: true) else {
    fatalError("Unable to get training graph.")
}

Thanks
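One possible structure (an assumption, not something the WWDC sample shows) is to split the work into two graph encodes and run a custom compute pass on the intermediate MPSImage in between. Note that for training you would also need to supply a matching gradient for that custom transform, which this sketch does not cover; transformPipeline and rgbDescriptor are hypothetical.

// 1. Encode the inference portion on its own graph and grab its result image.
let inferenceOnly = MPSNNGraph(device: device,
                               resultImage: inferGraph.resultImage,
                               resultImageIsNeeded: true)!
let inferResult = inferenceOnly.encode(to: commandBuffer, sourceImages: [inputImage])!

// 2. Custom compute pass: read the inference result texture, write a transformed image.
let transformed = MPSImage(device: device, imageDescriptor: rgbDescriptor)
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(transformPipeline)
encoder.setTexture(inferResult.texture, index: 0)
encoder.setTexture(transformed.texture, index: 1)
encoder.dispatchThreads(MTLSize(width: 64, height: 64, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
encoder.endEncoding()

// 3. Encode a second graph that begins at the loss/backward nodes and takes
//    `transformed` (plus the label image) as its source images.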
Posted by
Post not yet marked as solved
1 Replies
426 Views
How do I add an already-made Core ML model to my playground? I tried what people recommend online: building a test project, getting the .mlmodelc file, and putting that in the playground along with the autogenerated class for the model. However, I keep getting many errors. The errors:

Unexpected duplicate tasks
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb

Unexpected duplicate tasks
Showing Recent Issues
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel

ZooClassifier.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.
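One workaround that is often suggested for app playgrounds (a sketch only; whether it sidesteps the duplicate-task issue here is an assumption, and the file name is taken from the errors above) is to stop letting Xcode compile the .mlmodel as a project resource and instead ship it uncompiled and compile it at runtime:

import CoreML

// ZooClassifier.mlmodel is bundled as a plain resource (not compiled at build time).
guard let rawURL = Bundle.main.url(forResource: "ZooClassifier", withExtension: "mlmodel") else {
    fatalError("Model resource missing")
}

// Compile on first launch, then load the compiled model.
let compiledURL = try MLModel.compileModel(at: rawURL)
let model = try MLModel(contentsOf: compiledURL)
print(model.modelDescription)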
Posted by
Post not yet marked as solved
2 Replies
422 Views
Hi, I have an issue with jax.numpy.linalg.inv(a).

import jax.numpy as jnp
import jax.numpy.linalg as jnpl

B = jnp.identity(2)
jnpl.inv(B)

This throws the following error:

XlaRuntimeError: UNKNOWN: /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: error: failed to legalize operation 'mhlo.triangular_solve'
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: called from
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: see current operation: %120 = \"mhlo.triangular_solve\"(%42#4, %119) {left_side = true, lower = true, transpose_a = #mhlo<transpose NO_TRANSPOSE>, unit_diagonal = true} : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>

Any ideas what could be the issue or how to solve it?
Posted by
Post not yet marked as solved
1 Replies
362 Views
Trying to learn vision apps and I was wondering if the actual .xcodeproj file was available anywhere. I understand there are snippets of code below the video but it's difficult to learn how to build an app with those files since it just focuses on the ML aspect. https://developer.apple.com/videos/play/wwdc2021/10039/ I'm also looking for the code for this video specifically. I'm aware of the drawing code but that is a relatively simple example to understand and the CreateML stuff isn't prevalent in that.
Posted by
Post not yet marked as solved
0 Replies
488 Views
Hello, I am a new user with an Apple MacBook Pro. I'm having difficulty running my code on the GPU. What do I need to install on my computer to be able to use machine learning and computer vision libraries such as PyTorch and TensorFlow? I have already watched a lot of tutorials on this subject, but it still looks very complicated and I need mentoring for this task. I would greatly appreciate a response and some guidance on this matter.
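For reference, a minimal sketch of a typical Apple-silicon setup and a quick check that the GPU is visible to both frameworks. Package names are the current standard ones (older TensorFlow setups used the tensorflow-macos package instead of tensorflow), and no versions are pinned here:

# pip install torch torchvision            # PyTorch wheels include Metal (MPS) support
# pip install tensorflow tensorflow-metal  # TensorFlow plus Apple's Metal plugin

import torch
import tensorflow as tf

# PyTorch: the Metal Performance Shaders backend shows up as "mps".
print("PyTorch MPS available:", torch.backends.mps.is_available())

# TensorFlow: tensorflow-metal registers the GPU as a pluggable device.
print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))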
Posted by
Post marked as solved
2 Replies
663 Views
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection. https://developer.apple.com/videos/play/wwdc2023/111241/ It's clear that I can apply it to self-provided images, but what about the data coming from the visionOS SDKs? All I can find is the mesh data from ARKit (https://developer.apple.com/documentation/arkit/arkit_in_visionos) - am I missing something, or do we not yet have good APIs for this? Appreciate any guidance! Thanks.
Posted by
Post not yet marked as solved
0 Replies
367 Views
After training my dataset, the training, validation, and testing sets all show 0% detection accuracy, and all my test photos come back as false negatives. The dataset has 1032 photos and 2 classes, and I used Roboflow for the image annotation. For the network, I chose the full network. Is there any way to fix this?
Posted by
Post not yet marked as solved
0 Replies
322 Views
WWDC22 video "Explore the machine learning development experience" provides Python code for an interesting application (real-time ML image colorization), but doesn't provide the complete Xcode project, and assumes viewer knows how to do Python in Xcode (haven't heard of such in 10 years of iOS development!). Any pointers to either the video's example Xcode project, or how to create a suitable Xcode project capable of running Python code?
Posted by
Post not yet marked as solved
7 Replies
796 Views
Hi Developers, I want to create a Vision app in Swift Playgrounds on iPad. However, Vision does not function properly in Swift Playgrounds on iPad or in Xcode Playgrounds; the Vision code only works in a normal Xcode project. So, can I submit my Swift Student Challenge 2024 application as a normal Xcode project rather than an Xcode Playground or Swift Playgrounds file? Thanks :)
Posted by
Post not yet marked as solved
1 Replies
645 Views
Hi, in Xcode 14 I was able to train linear regression models with Create ML using large CSV files (I tested with about 30000 items and 5 features). However, in Xcode 15 (I tested 15.0.1 and 15.1), the training stays in the "Processing" state indefinitely. When using a dataset with 900 items, everything works fine. I filed a feedback for this issue: FB13516799. Does anybody else have this issue / can reproduce it?
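As a possible workaround while the Create ML app is stuck, here is a sketch of training the same kind of model directly with the CreateML framework from Swift; the paths and the target column name are placeholders:

import CreateML
import Foundation

// Load the large CSV into an MLDataTable.
let csvURL = URL(fileURLWithPath: "/path/to/training.csv")
let data = try MLDataTable(contentsOf: csvURL)

// Train a linear regressor on the placeholder target column.
let regressor = try MLLinearRegressor(trainingData: data, targetColumn: "target")

// Persist the trained model.
try regressor.write(to: URL(fileURLWithPath: "/path/to/Regressor.mlmodel"))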
Posted by
Post not yet marked as solved
2 Replies
814 Views
Running the sample Python keras-ocr example on M3 Max returns incorrect results if tensorflow-metal is installed. Code Example: https://keras-ocr.readthedocs.io/en/latest/examples/using_pretrained_models.html Note: https://upload.wikimedia.org/wikipedia/commons/e/e8/FseeG2QeLXo.jpg not found. Line commented out. Without tensorflow-metal (Correct results): ['toodstande', 's', 'somme', 'srny', 'squadron', 'ds', 'quentn', 'snhnen', 'bnpnone', 'sasne', 'taing', 'yeoms', 'sry', 'the', 'royal', 'wessex', 'yeomanry', 'regiment', 'yeomanry', 'wests', 'south', 'the', 'now', 'recruiting', 'arm', 'blon', 'wxybsqipsacomodn', 'email', '438300', '01722'] ['banana', 'union', 'no', 'no', 'software', 'patents'] With tensorflow-metal (Incorrect results): ['sddoooo', '', 'eamnooss', 'xynrr', 'daanues', 'idd', 'innee', 'iiiinus', 'tnounppanab', 'inla', 'ppnt', 'mmnooexyy', 'yyr', 'ehhtt', 'laayvyoorr', 'xeseww', 'rinamoevy', 'tnemiger', 'yrnamoey', 'sstseww', 'htuwlos', 'fefeahit', 'wwoniia', 'turceedrr', 'ymmrira', 'atate', 'prasbyxwr', 'liamme', '00338803144', '22277100'] ['annnaab', 'noolinnu', 'oon', 'oon', 'wttffoos', 'sttneettaap'] Logs: With tensorflow-metal (Incorrect results) (.venv) <REDACTED> % pip3 install -U tensorflow-metal Collecting tensorflow-metal Using cached tensorflow_metal-1.1.0-cp311-cp311-macosx_12_0_arm64.whl.metadata (1.2 kB) Requirement already satisfied: wheel~=0.35 in ./.venv/lib/python3.11/site-packages (from tensorflow-metal) (0.42.0) Requirement already satisfied: six>=1.15.0 in ./.venv/lib/python3.11/site-packages (from tensorflow-metal) (1.16.0) Using cached tensorflow_metal-1.1.0-cp311-cp311-macosx_12_0_arm64.whl (1.4 MB) Installing collected packages: tensorflow-metal Successfully installed tensorflow-metal-1.1.0 (.venv) <REDACTED> % python3 keras-ocr-bug.py Looking for <REDACTED>/.keras-ocr/craft_mlt_25k.h5 2023-12-16 22:05:05.452493: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M3 Max 2023-12-16 22:05:05.452532: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 64.00 GB 2023-12-16 22:05:05.452545: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 24.00 GB 2023-12-16 22:05:05.452591: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2023-12-16 22:05:05.452609: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>) WARNING:tensorflow:From <REDACTED>/.venv/lib/python3.11/site-packages/tensorflow/python/util/dispatch.py:1260: resize_bilinear (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.image.resize(...method=ResizeMethod.BILINEAR...)` instead. Looking for <REDACTED>/.keras-ocr/crnn_kurapan.h5 2023-12-16 22:05:07.526354: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:117] Plugin optimizer for device_type GPU is enabled. 
1/1 [==============================] - 1s 855ms/step 2/2 [==============================] - 1s 140ms/step ['sddoooo', '', 'eamnooss', 'xynrr', 'daanues', 'idd', 'innee', 'iiiinus', 'tnounppanab', 'inla', 'ppnt', 'mmnooexyy', 'yyr', 'ehhtt', 'laayvyoorr', 'xeseww', 'rinamoevy', 'tnemiger', 'yrnamoey', 'sstseww', 'htuwlos', 'fefeahit', 'wwoniia', 'turceedrr', 'ymmrira', 'atate', 'prasbyxwr', 'liamme', '00338803144', '22277100'] ['annnaab', 'noolinnu', 'oon', 'oon', 'wttffoos', 'sttneettaap'] Logs: Valid results, without tensorflow-metal (.venv) <REDACTED> % pip3 uninstall tensorflow-metal Found existing installation: tensorflow-metal 1.1.0 Uninstalling tensorflow-metal-1.1.0: Would remove: <REDACTED>/.venv/lib/python3.11/site-packages/tensorflow-plugins/* <REDACTED>/.venv/lib/python3.11/site-packages/tensorflow_metal-1.1.0.dist-info/* Proceed (Y/n)? Y Successfully uninstalled tensorflow-metal-1.1.0 (.venv) <REDACTED> % python3 keras-ocr-bug.py Looking for <REDACTED>/.keras-ocr/craft_mlt_25k.h5 WARNING:tensorflow:From <REDACTED>/.venv/lib/python3.11/site-packages/tensorflow/python/util/dispatch.py:1260: resize_bilinear (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.image.resize(...method=ResizeMethod.BILINEAR...)` instead. Looking for <REDACTED>/.keras-ocr/crnn_kurapan.h5 1/1 [==============================] - 7s 7s/step 2/2 [==============================] - 1s 71ms/step ['toodstande', 's', 'somme', 'srny', 'squadron', 'ds', 'quentn', 'snhnen', 'bnpnone', 'sasne', 'taing', 'yeoms', 'sry', 'the', 'royal', 'wessex', 'yeomanry', 'regiment', 'yeomanry', 'wests', 'south', 'the', 'now', 'recruiting', 'arm', 'blon', 'wxybsqipsacomodn', 'email', '438300', '01722'] ['banana', 'union', 'no', 'no', 'software', 'patents']
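Until this is fixed, one way to keep the environment intact but take the Metal GPU out of the equation for a quick A/B check is to hide the GPU from TensorFlow at the top of the script (a sketch; the device configuration has to run before any op touches the GPU):

import tensorflow as tf

# Hide the pluggable Metal GPU so everything falls back to the CPU path,
# which produced the correct recognitions in the logs above.
tf.config.set_visible_devices([], "GPU")

import keras_ocr  # imported after the device configuration

pipeline = keras_ocr.pipeline.Pipeline()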
Posted by
Post not yet marked as solved
1 Replies
508 Views
I converted a toy Pytorch regression model to CoreML mlmodel using coremltools and set it to be updatable with mean_squared_error_loss. But when testing the training, the context.metrics[.lossValue] can give negative value which is impossible. Further more, context.metrics[.lossValue] result is very different from my own computed training loss as shown in the screenshot attached. I was wondering if I used a wrong way to extract the training loss from context? Does context.metrics[.lossValue] really give MSE if I used coremltools function set_mean_squared_error_loss to set the loss? Any suggestion is appreciated. Since the validation loss decreases as epoch goes, the model should be indeed updated correctly. I am using coremltools==7.0, xcode==15.0.1 Here is my code to convert Pytorch model to updatable CoreML model: import coremltools from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams, AdamParams from coremltools.models import datatypes # Load the model specification spec = coremltools.utils.load_spec('regression.mlmodel') builder = NeuralNetworkBuilder(spec=spec) builder.inspect_output_features() # Name: linear_1 # Make layers updatable builder.make_updatable(['linear_0', 'linear_1']) # Manually add a mean squared error loss layer feature = ('linear_1', datatypes.Array(1)) builder.set_mean_squared_error_loss(name='lossLayer', input_feature=feature) # define the optimizer (Adam in this example) adam_params = AdamParams(lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, batch=16) builder.set_adam_optimizer(adam_params) # Set the number of epochs builder.set_epochs(100) # Save the updated model updated_model = coremltools.models.MLModel(spec) updated_model.save('updatable_regression30.mlmodel') Here is the code I use to try to update the saved updatable_regression30.mlmodel: import CoreML import GameKit func generateSampleData(numSamples: Int, seed: UInt64) -> ([MLMultiArray], [MLMultiArray]) { // simple regression: y = 10 * sum(x) + 1 var inputArray = [MLMultiArray]() var outputArray = [MLMultiArray]() // Create a random number generator with a fixed seed let randomSource = GKLinearCongruentialRandomSource(seed: seed) let randomDistribution = GKRandomDistribution(randomSource: randomSource, lowestValue: 0, highestValue: 1000) for _ in 0..<numSamples { do { let input = try MLMultiArray(shape: [1, 2], dataType: .float32) let output = try MLMultiArray(shape: [1], dataType: .float32) var sumInput: Float = 0 for i in 0..<input.shape[1].intValue { // Generate random value using the fixed seed generator let inputValue = Float(randomDistribution.nextInt()) / 1000.0 input[[0, i] as [NSNumber]] = NSNumber(value: inputValue) sumInput += inputValue } output[0] = NSNumber(value: 10.0 * sumInput + 1.0) inputArray.append(input) outputArray.append(output) } catch { print("Error occurred while creating MLMultiArrays: \(error)") } } return (inputArray, outputArray) } func computeLoss(model: MLModel, data: ([MLMultiArray], [MLMultiArray])) -> Double { let (inputData, outputData) = data var totalLoss: Double = 0 for (index, input) in inputData.enumerated() { let output = outputData[index] if let prediction = try? 
model.prediction(from: MLDictionaryFeatureProvider(dictionary: ["x": MLFeatureValue(multiArray: input)])), let predictedOutput = prediction.featureValue(for: "linear_1")?.multiArrayValue { let loss = (output[0].doubleValue - predictedOutput[0].doubleValue) totalLoss += loss * loss // squared error } } return totalLoss / Double(inputData.count) // mean of squared errors } func trainModel() { // Load the updatable model guard let updatableModelURL = Bundle.main.url(forResource: "updatable_regression30", withExtension: "mlmodelc") else { print("Failed to load the updatable model") return } // Generate sample data let (inputData, outputData) = generateSampleData(numSamples: 200, seed: 8) let validationData = generateSampleData(numSamples: 100, seed:18) // Create an MLArrayBatchProvider from the sample data var featureProviders = [MLFeatureProvider]() for (index, input) in inputData.enumerated() { let output = outputData[index] let dataPointFeatures: [String: MLFeatureValue] = [ "x": MLFeatureValue(multiArray: input), "linear_1_true": MLFeatureValue(multiArray: output) ] if let provider = try? MLDictionaryFeatureProvider(dictionary: dataPointFeatures) { featureProviders.append(provider) } } let batchProvider = MLArrayBatchProvider(array: featureProviders) // Define progress handlers let progressHandlers = MLUpdateProgressHandlers(forEvents: [.trainingBegin, .epochEnd], progressHandler: { context in switch context.event { case .trainingBegin: print("Training began.") case .epochEnd: let loss = context.metrics[.lossValue] as! Double let validationLoss = computeLoss(model: context.model, data: validationData) let computedTrainLoss = computeLoss(model: context.model, data: (inputData, outputData)) print("Epoch \(context.metrics[.epochIndex]!) ended. Training Loss: \(loss), Computed Training Loss: \(computedTrainLoss), Validation Loss: \(validationLoss)") default: break } } ) // Create an update task with progress handlers let updateTask = try! MLUpdateTask(forModelAt: updatableModelURL, trainingData: batchProvider, configuration: nil, progressHandlers: progressHandlers) // Start the update task updateTask.resume() } // call trainModel() to start training
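One extra data point that may help narrow down whether .lossValue is a per-mini-batch number rather than a full-epoch MSE (an assumption, not something the docs confirm): also subscribe to the mini-batch events, log each batch loss, and compare the last batch's value with what the epochEnd handler reports. A sketch of the handler setup, which would replace the progressHandlers above:

let handlers = MLUpdateProgressHandlers(
    forEvents: [.trainingBegin, .miniBatchEnd, .epochEnd],
    progressHandler: { context in
        switch context.event {
        case .miniBatchEnd:
            // Loss reported for the batch that just finished.
            let batchIndex = context.metrics[.miniBatchIndex] as? Int ?? -1
            let batchLoss = context.metrics[.lossValue] as? Double ?? .nan
            print("Batch \(batchIndex) loss: \(batchLoss)")
        case .epochEnd:
            let epochLoss = context.metrics[.lossValue] as? Double ?? .nan
            print("Epoch loss reported by Core ML: \(epochLoss)")
        default:
            break
        }
    },
    completionHandler: { context in
        let finalLoss = context.metrics[.lossValue] as? Double ?? .nan
        print("Update finished, final loss: \(finalLoss)")
    }
)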
Posted by