Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

81 Posts
Post marked as solved
1 Reply
1.4k Views
When I run the performance test on a Core ML model, it shows predictions are 834% faster on the Neural Engine than on the GPU. It also shows that 100% of the model can run on the Neural Engine. But when I set the compute units to all:

let config = MLModelConfiguration()
config.computeUnits = .all

and profile, it shows that the Neural Engine isn't used at all. Well, other than loading the model, which takes 25 seconds when the Neural Engine is allowed versus less than a second when it is not. The difference in speed is the difference between the app being too slow to even release and quite reasonable performance. I have a lot of work invested in this, so I am really hoping that I can get it to run on the Neural Engine. Why isn't it actually running on the Neural Engine when the report shows that it is supported and I have the compute units set to allow it?
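A minimal loading sketch (editor's addition, not from the post; MyModel is a placeholder for the generated model class) for comparing compute-unit preferences directly. Loading the same model with .cpuAndNeuralEngine rules the GPU out entirely, which can help confirm whether Core ML's partitioning, rather than the performance report, is keeping work off the Neural Engine:

import CoreML

// Load the model with an explicit compute-unit preference so profiling runs
// can be compared side by side. `MyModel` is a placeholder class name.
func makeModel(computeUnits: MLComputeUnits) throws -> MyModel {
    let config = MLModelConfiguration()
    // .all, .cpuAndGPU, .cpuOnly, or .cpuAndNeuralEngine (iOS 16 / macOS 13 and later)
    config.computeUnits = computeUnits
    return try MyModel(configuration: config)
}

// Usage: profile both and compare where the work lands in Instruments.
let aneModel = try makeModel(computeUnits: .cpuAndNeuralEngine)
let anyModel = try makeModel(computeUnits: .all)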
Post not yet marked as solved
1 Reply
552 Views
We have Core ML models in our app, each encrypted with a separate key generated in Xcode. After an app update we are receiving the following error:

[coreml] Could not create persistent key blob for EFD428E8-CDE7-4E0A-B379-FC169E50DE4D : error=Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed., NSUnderlyingError=0x281d80ab0 {Error Domain=CKErrorDomain Code=6 "CKInternalErrorDomain: 2022" UserInfo={NSDebugDescription=CKInternalErrorDomain: 2022, RequestUUID=D5CF13CF-6A10-436B-AB93-4C5C04859FFE, NSLocalizedDescription=Request failed with http status code 503, CKErrorDescription=Request failed with http status code 503, CKRetryAfter=35, NSUnderlyingError=0x281d80000 {Error Domain=CKInternalErrorDomain Code=2022 "Request failed with http status code 503" UserInfo={CKRetryAfter=35, CKHTTPStatus=503, CKErrorDescription=Request failed with http status code 503, RequestUUID=D5CF13CF-6A10-436B-AB93-4C5C04859FFE, NSLocalizedDescription=Request failed with http status code 503}}, CKHTTPStatus=503}}}

We tried deleting the app and restarting the device, but nothing works. This was released on the App Store earlier and was working fine; it stopped working after the update. Any help is appreciated.
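A hedged mitigation sketch (editor's addition, not from the post): the decryption key for an encrypted model is fetched over the network, so loading through the asynchronous API and retrying after a delay keeps a transient 503 from permanently failing the feature. It does not fix the server-side error itself; the names and retry policy below are illustrative only.

import CoreML
import Foundation

// Illustrative retry wrapper: reload an encrypted model when the decryption-key
// fetch fails, waiting roughly as long as the CKRetryAfter hint in the error.
// `compiledModelURL` is a placeholder for the app's .mlmodelc location.
func loadEncryptedModel(at compiledModelURL: URL,
                        retriesLeft: Int = 3,
                        completion: @escaping (Result<MLModel, Error>) -> Void) {
    MLModel.load(contentsOf: compiledModelURL, configuration: MLModelConfiguration()) { result in
        switch result {
        case .success(let model):
            completion(.success(model))
        case .failure where retriesLeft > 0:
            // The key fetch is a network call; back off and try again.
            DispatchQueue.global().asyncAfter(deadline: .now() + 35) {
                loadEncryptedModel(at: compiledModelURL,
                                   retriesLeft: retriesLeft - 1,
                                   completion: completion)
            }
        case .failure(let error):
            completion(.failure(error))
        }
    }
}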
Post not yet marked as solved
1 Reply
461 Views
I want to know how the preview function is implemented. I have an mlmodel for object detection. When I open the model in Xcode, Xcode provides a preview function: I put a photo into it and get the predicted bounding boxes drawn on the image. I would like to know how this visualization is implemented. At present, I can only get the three data items Label, Confidence, and BoundingBox in a playground, and drawing the prediction box still requires me to write the processing code myself.

import UIKit
import Vision

func performObjectDetection() {
    do {
        let model = try VNCoreMLModel(for: court().model)
        let request = VNCoreMLRequest(model: model) { (request, error) in
            if let error = error {
                print("Failed to perform request: \(error)")
                return
            }
            guard let results = request.results as? [VNRecognizedObjectObservation] else {
                print("No results found")
                return
            }
            for result in results {
                print("Label: \(result.labels.first?.identifier ?? "No label")")
                print("Confidence: \(result.labels.first?.confidence ?? 0.0)")
                print("BoundingBox: \(result.boundingBox)")
            }
        }

        guard let image = UIImage(named: "nbaPics.jpeg"),
              let ciImage = CIImage(image: image) else {
            print("Failed to load image")
            return
        }

        let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up, options: [:])
        try handler.perform([request])
    } catch {
        print("Failed to load model: \(error)")
    }
}

performObjectDetection()

This is my code and the results I get.
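The preview itself has no public API, but its drawing step can be reproduced in a few lines. The sketch below is an editor's illustration, not Apple's preview code: Vision returns boundingBox in a normalized, lower-left-origin coordinate space, so each rect is scaled with VNImageRectForNormalizedRect, flipped for UIKit's top-left origin, and stroked over the image. The function names are placeholders.

import UIKit
import Vision

// Convert a normalized Vision bounding box into UIKit coordinates.
func uiRect(for observation: VNRecognizedObjectObservation, in imageSize: CGSize) -> CGRect {
    // Scale the normalized rect to the image's pixel size.
    let rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                            Int(imageSize.width),
                                            Int(imageSize.height))
    // Flip the y-axis: Vision's origin is bottom-left, UIKit's is top-left.
    return CGRect(x: rect.origin.x,
                  y: imageSize.height - rect.origin.y - rect.height,
                  width: rect.width,
                  height: rect.height)
}

// Draw the detection boxes on top of the source image.
func drawBoxes(_ observations: [VNRecognizedObjectObservation], on image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        context.cgContext.setStrokeColor(UIColor.red.cgColor)
        context.cgContext.setLineWidth(3)
        for observation in observations {
            context.cgContext.stroke(uiRect(for: observation, in: image.size))
        }
    }
}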
Post marked as solved
3 Replies
864 Views
Is it possible to create an updatable sound classifier model that uses Apple's built-in MLSoundClassifier, available via Create ML, and that can be trained/personalized on device using Core ML? I have looked in quite a few places for a long while. I know that when on-device training was initially announced in 2019, updatable models were restricted to classifiers that were not built in, but any additional information that may have come out after 2019 in this regard has been hard to find.
Post not yet marked as solved
0 Replies
416 Views
I'm referring to this talk: https://developer.apple.com/videos/play/wwdc2021/10152 I was wondering if the code for the "Image composition" project he demonstrates at the end of the talk (around 24:00) is available somewhere? Would much appreciate any help.
Post not yet marked as solved
4 Replies
1.6k Views
I initially raised this issue in the TensorFlow forum, and they directed me back here since this is a tf-macos specific problem [see https://github.com/tensorflow/tensorflow/issues/60673].

When calling Model.compile() with the AdamW optimizer, a warning is thrown saying that v2.11+ optimizers have a known slowdown on M1/M2 devices, and so the backend attempts to fall back to a legacy version. However, no legacy version of the AdamW optimizer exists. In a previous tf-macos version, 2.12, this led to an error during Model.compile() [see issue https://github.com//issues/60652 and https://developer.apple.com/forums/thread/729732]. In the current nightly, this error is not thrown; however, after calling model.compile(), the attribute model.optimizer is set to the string 'adamw' instead of an optimizer object. Later, when we call model.fit(), this leads to an AttributeError, because model.optimizer.minimize() does not exist when model.optimizer is a string.

Expected behaviour: compile the model with either a v2.11+ optimizer without the slowdown, or a legacy-compatible implementation of the AdamW optimizer, so that the model trains correctly with a valid AdamW optimizer when calling model.fit(). Note: the warning message suggests using the optimizer located at tf.keras.optimizers.legacy.AdamW, but this does not exist.

It would be nice to be able to either use modern optimizers or have a legacy-compatible version of AdamW, since weight decay is an important tool in modern ML research and currently cannot be used on Mac.

Standalone code to reproduce the issue:

##===========##
##  Imports  ##
##===========##

import sys
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.optimizers import AdamW

##===================##
##  Report versions  ##
##===================##
#
# Expected outputs:
# Python version is: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 19:01:19) [Clang 14.0.6 ]
# TF version is: 2.14.0-dev20230523
# Numpy version is: 1.23.2
#
print(f"Python version is: {sys.version}")
print(f"TF version is: {tf.__version__}")
print(f"Numpy version is: {np.__version__}")

##==============================##
##  Create a very simple model  ##
##==============================##
#
# Expected outputs:
# Model: "model_1"
# _________________________________________________________________
#  Layer (type)                Output Shape              Param #
# =================================================================
#  Layer_in (InputLayer)       [(None, 2)]               0
#  Layer_hidden (Dense)        (None, 10)                30
#  Layer_out (Dense)           (None, 2)                 22
# =================================================================
# Total params: 52 (208.00 Byte)
# Trainable params: 52 (208.00 Byte)
# Non-trainable params: 0 (0.00 Byte)
# _________________________________________________________________
#
x_in  = Input(2 , dtype=tf.float32, name="Layer_in")
x     = x_in
x     = Dense(10, dtype=tf.float32, name="Layer_hidden", activation="relu")(x)
x     = Dense(2 , dtype=tf.float32, name="Layer_out", activation="linear")(x)
model = Model(x_in, x)
model.summary()

##===================================================##
##  Compile model with MSE loss and AdamW optimizer  ##
##===================================================##
#
# Expected outputs:
# WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.AdamW` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.AdamW`.
# WARNING:absl:There is a known slowdown when using v2.11+ Keras optimizers on M1/M2 Macs. Falling back to the legacy Keras optimizer, i.e., `tf.keras.optimizers.legacy.AdamW`.
#
model.compile(
    loss      = "mse",
    optimizer = AdamW(learning_rate=1e-3, weight_decay=1e-2)
)

##===========================##
##  Generate some fake data  ##
##===========================##
#
# Expected outputs:
# X shape is (100, 2), Y shape is (100, 2)
#
dataset_size = 100
X = np.random.normal(size=(dataset_size, 2))
X = tf.constant(X, dtype=tf.float32)
Y = np.random.normal(size=(dataset_size, 2))
Y = tf.constant(Y, dtype=tf.float32)
print(f"X shape is {X.shape}, Y shape is {Y.shape}")

##===================================##
##  Fit model to data for one epoch  ##
##===================================##
#
# Expected outputs:
# ---------------------------------------------------------------------------
# AttributeError                            Traceback (most recent call last)
# Cell In[9], line 51
#       1 ##===================================##
#       2 ##  Fit model to data for one epoch  ##
#       3 ##===================================##
#    (...)
#      48 #   • mask=None
#      49 #
# ---> 51 model.fit(X, Y, epochs=1)
#
# File ~/miniforge3/envs/tf_macos_nightly_230523/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
#      67 filtered_tb = _process_traceback_frames(e.__traceback__)
#      68 # To get the full stack trace, call:
#      69 # `tf.debugging.disable_traceback_filtering()`
# ---> 70 raise e.with_traceback(filtered_tb) from None
#      71 finally:
#      72   del filtered_tb
#
# File /var/folders/6_/gprzxt797d5098h8dtk22nch0000gn/T/__autograph_generated_filezzqv9k36.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
#      13 try:
#      14     do_return = True
# ---> 15     retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
#      16 except:
#      17     do_return = False
#
# AttributeError: in user code:
#
#     File "/Users/Ste/miniforge3/envs/tf_macos_nightly_230523/lib/python3.10/site-packages/keras/src/engine/training.py", line 1338, in train_function  *
#         return step_function(self, iterator)
#     File "/Users/Ste/miniforge3/envs/tf_macos_nightly_230523/lib/python3.10/site-packages/keras/src/engine/training.py", line 1322, in step_function  **
#         outputs = model.distribute_strategy.run(run_step, args=(data,))
#     File "/Users/Ste/miniforge3/envs/tf_macos_nightly_230523/lib/python3.10/site-packages/keras/src/engine/training.py", line 1303, in run_step  **
#         outputs = model.train_step(data)
#     File "/Users/Ste/miniforge3/envs/tf_macos_nightly_230523/lib/python3.10/site-packages/keras/src/engine/training.py", line 1084, in train_step
#         self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
#
#     AttributeError: 'str' object has no attribute 'minimize'
#
model.fit(X, Y, epochs=1)
Post not yet marked as solved
1 Reply
715 Views
Hey, are there any limits on the windowDuration property of the AudioFeaturePrint transformer, such as a minimum or maximum value? If we create a model with the Create ML app, upon selecting AudioFeaturePrint as the feature extractor we cannot go below 0.5 seconds for the window duration. Is the limit the same if we create a model programmatically using AudioFeaturePrint?
Post not yet marked as solved
1 Reply
877 Views
In the video Explore Natural Language multilingual models (https://developer.apple.com/videos/play/wwdc2023/10042/), it is said at 6:24 that there are three models. I wonder if it is possible to measure semantic similarity across models. For example, English and Japanese belong to different models (Latin and CJK); can we compare the vectors produced by the different models to find out whether two sentences have similar meanings?
Post not yet marked as solved
6 Replies
2.9k Views
Built and installed JAX and jax-metal on an M2 Pro Mac mini following the instructions here: https://developer.apple.com/metal/jax/. However, the following check seems to suggest that XLA is using the CPU and not the GPU:

>>> from jax.lib import xla_bridge
>>> print(xla_bridge.get_backend().platform)
cpu

Has anyone got it working so that it reports the GPU? Thanks in advance!
Post marked as solved
5 Replies
1.2k Views
Hello, I'm interested in trying the new JAX Metal plug-in and followed the steps in https://developer.apple.com/metal/jax/. Upon installation, I don't see any difference between the backend device detected by JAX and a pure CPU setup:

>>> import jax
>>> jax.devices()
[CpuDevice(id=0)]
>>> jax.devices()[0].platform
'cpu'
>>> jax.devices()[0].device_kind
'cpu'
>>> jax.devices()[0].client.platform
'cpu'
>>> jax.devices()[0].client.runtime_type
'tfrt'

Is this really using a Metal backend? How can I determine for sure? Thank you!
Post not yet marked as solved
0 Replies
524 Views
I am seeing an issue in jax.numpy.dot and jax.numpy.matmul, as illustrated by this example using jax.numpy.dot:

import jax.numpy as jnp
import numpy as np

x = np.array(np.random.rand(3, 3))
y = np.array(np.random.rand(3))
z = np.array(np.random.rand(3))

print("X: ", x)
print("Y: ", y)
print("Z: ", z)

print("Numpy 1D*1D: ", np.dot(y, z))
print("Jax Numpy 1D*1D: ", jnp.dot(y, z))

print("Numpy 2D*1D: ", np.dot(x, y))
print("Jax Numpy 2D*1D: ", jnp.dot(x, y))

loc("-":4:5): error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<3xf32>') in function @main
/AppleInternal/Library/BuildRoots/1a7a4148-f669-11ed-9d56-f6357a1003e8/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1950: failed assertion `Error: MLIR pass manager failed'
zsh: abort      python test.py

As can be seen, the dot product between two 1D arrays works for both standard NumPy and jax.numpy. However, 2D*1D only works for standard NumPy, while jax.numpy throws the error above. I am using Jax 0.4.11, jax-metal 0.0.2 and jaxlib 0.4.10. Has anyone else seen this issue?
Post not yet marked as solved
0 Replies
579 Views
Hello everyone, I encountered some compiler errors while following the WWDC video on converting a colorization PyTorch model to Core ML. I have followed all the steps, but I'm facing issues with the following lines of code from the video.

In the colorize() method, there is the line:

let modelInput = try ColorizerInput(inputWith: lightness.cgImage!)

This line expects a cgImage as input, but the auto-generated model class only accepts an MLMultiArray or MLShapedArray, not an image. The conversion step in the video did not cover setting the input or output as ImageType.

In the extractColorChannels() method, there are a couple of lines:

let outA: [Float] = output.output_aShapedArray.scalars
let outB: [Float] = output.output_bShapedArray.scalars

However, I only have output.var183_aShapedArray available; in other words, there is no var183_bShapedArray. I would appreciate any thoughts or suggestions you may have regarding these issues. Thank you.

Link to the WWDC22 session 10017: https://developer.apple.com/videos/play/wwdc2022/10017/
Post not yet marked as solved
2 Replies
774 Views
Hi all, I am new to Metal in PyTorch. I am trying to implement the demo code for customized ops in PyTorch (the CustomizingAPyTorchOperation sample). However, I think the torch namespace doesn't have "mps" now? "torch::mps" cannot be found when I try to compile the .mm file into a PyTorch C++ extension. After some digging, I think everybody is using the ATen namespace with "at::"? How can I use the functions in mps and make this demo code work? Thanks in advance.

Error message:

In file included from /Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:10:
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.h:11:30: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
static char *CUSTOM_KERNEL = R"MPS_SOFTSHRINK(
                             ^
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:43:53: error: no member named 'mps' in namespace 'torch'
    id<MTLCommandBuffer> commandBuffer = torch::mps::get_command_buffer();
                                         ~~~~~~~^
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:47:47: error: no member named 'mps' in namespace 'torch'
        dispatch_queue_t serialQueue = torch::mps::get_dispatch_queue();
                                       ~~~~~~~^
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:76:20: error: no member named 'mps' in namespace 'torch'
        torch::mps::commit();
        ~~~~~~~^
1 warning and 3 errors generated.
ninja: build stopped: subcommand failed.

CustomSoftshrink.mm code:

/*
See the LICENSE.txt file for this sample's licensing information.

Abstract:
The code that registers a PyTorch custom operation.
*/

#include <torch/extension.h>
#include "CustomSoftshrink.h"

#import <Foundation/Foundation.h>
#import <Metal/Metal.h>

// Helper function to retrieve the `MTLBuffer` from a `torch::Tensor`.
static inline id<MTLBuffer> getMTLBufferStorage(const torch::Tensor& tensor) {
  return __builtin_bit_cast(id<MTLBuffer>, tensor.storage().data());
}

torch::Tensor& dispatchSoftShrinkKernel(const torch::Tensor& input, torch::Tensor& output, float lambda) {
  @autoreleasepool {
    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    NSError *error = nil;

    // Set the number of threads equal to the number of elements within the input tensor.
    int numThreads = input.numel();

    // Load the custom soft shrink shader.
    id<MTLLibrary> customKernelLibrary = [device newLibraryWithSource:[NSString stringWithUTF8String:CUSTOM_KERNEL]
                                                              options:nil
                                                                error:&error];
    TORCH_CHECK(customKernelLibrary, "Failed to to create custom kernel library, error: ", error.localizedDescription.UTF8String);

    std::string kernel_name = std::string("softshrink_kernel_") + (input.scalar_type() == torch::kFloat ? "float" : "half");
    id<MTLFunction> customSoftShrinkFunction = [customKernelLibrary newFunctionWithName:[NSString stringWithUTF8String:kernel_name.c_str()]];
    TORCH_CHECK(customSoftShrinkFunction, "Failed to create function state object for ", kernel_name.c_str());

    // Create a compute pipeline state object for the soft shrink kernel.
    id<MTLComputePipelineState> softShrinkPSO = [device newComputePipelineStateWithFunction:customSoftShrinkFunction error:&error];
    TORCH_CHECK(softShrinkPSO, error.localizedDescription.UTF8String);

    // Get a reference to the command buffer for the MPS stream.
    id<MTLCommandBuffer> commandBuffer = torch::mps::get_command_buffer();
    TORCH_CHECK(commandBuffer, "Failed to retrieve command buffer reference");

    // Get a reference to the dispatch queue for the MPS stream, which encodes the synchronization with the CPU.
    dispatch_queue_t serialQueue = torch::mps::get_dispatch_queue();

    dispatch_sync(serialQueue, ^(){
      // Start a compute pass.
      id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoder];
      TORCH_CHECK(computeEncoder, "Failed to create compute command encoder");

      // Encode the pipeline state object and its parameters.
      [computeEncoder setComputePipelineState:softShrinkPSO];
      [computeEncoder setBuffer:getMTLBufferStorage(input) offset:input.storage_offset() * input.element_size() atIndex:0];
      [computeEncoder setBuffer:getMTLBufferStorage(output) offset:output.storage_offset() * output.element_size() atIndex:1];
      [computeEncoder setBytes:&lambda length:sizeof(float) atIndex:2];

      MTLSize gridSize = MTLSizeMake(numThreads, 1, 1);

      // Calculate a thread group size.
      NSUInteger threadGroupSize = softShrinkPSO.maxTotalThreadsPerThreadgroup;
      if (threadGroupSize > numThreads) {
        threadGroupSize = numThreads;
      }
      MTLSize threadgroupSize = MTLSizeMake(threadGroupSize, 1, 1);

      // Encode the compute command.
      [computeEncoder dispatchThreads:gridSize threadsPerThreadgroup:threadgroupSize];
      [computeEncoder endEncoding];

      // Commit the work.
      torch::mps::commit();
    });
  }

  return output;
}

// C++ op dispatching the Metal soft shrink shader.
torch::Tensor mps_softshrink(const torch::Tensor &input, float lambda = 0.5) {
  // Check whether the input tensor resides on the MPS device and whether it's contiguous.
  TORCH_CHECK(input.device().is_mps(), "input must be a MPS tensor");
  TORCH_CHECK(input.is_contiguous(), "input must be contiguous");

  // Check the supported data types for soft shrink.
  TORCH_CHECK(input.scalar_type() == torch::kFloat || input.scalar_type() == torch::kHalf, "Unsupported data type: ", input.scalar_type());

  // Allocate the output, same shape as the input.
  torch::Tensor output = torch::empty_like(input);

  return dispatchSoftShrinkKernel(input, output, lambda);
}

// Create Python bindings for the Objective-C++ code.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("mps_softshrink", &mps_softshrink);
}
Post marked as solved
1 Reply
763 Views
In the video here, the speaker refers to MPSGraphTool, which is supposed to convert from CoreML and other formats to the new MPSGraphPackage format. Searching for MPSGraphTool on Google returns only that video, and there is no mention of it on the forums here or elsewhere. When can we expect the tool to be released? How can we find out more information about it? My use case is that the ANECompilerService that runs on the Mac / iOS devices to compile CoreML Models / Programs is extremely slow and unreliable for large models. It often crashes entirely, sitting at 100% CPU usage forever and never completing the task at hand, meaning the user is stuck in a loading state. This also applies in Xcode when running a performance test. I would really like to compile the graph once and just run it on device directly.
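One thing that can be done today with public API, sketched below (editor's illustration; paths and names are placeholders), is to compile the source .mlmodel once and cache the resulting .mlmodelc so that step is never repeated. It does not address the Neural Engine specialization that ANECompilerService performs at load time, which as far as I know has no public pre-compile hook, but it at least removes the source-compilation cost from subsequent launches.

import CoreML
import Foundation

// Editor's illustration (placeholder names): compile the source .mlmodel once
// and cache the resulting .mlmodelc so subsequent launches skip that step.
func loadCachedModel(from sourceModelURL: URL) throws -> MLModel {
    let cachesDir = try FileManager.default.url(for: .cachesDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let cachedURL = cachesDir.appendingPathComponent("MyModel.mlmodelc")

    if !FileManager.default.fileExists(atPath: cachedURL.path) {
        // MLModel.compileModel(at:) writes the compiled model to a temporary
        // location; move it somewhere permanent before using it.
        let compiledURL = try MLModel.compileModel(at: sourceModelURL)
        try FileManager.default.moveItem(at: compiledURL, to: cachedURL)
    }
    return try MLModel(contentsOf: cachedURL)
}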
Post not yet marked as solved
0 Replies
434 Views
I am trying to convert a model I found on TensorFlow Hub to Core ML so I can use it in an iOS app I'm developing. Converting the model has been quite simple so far, except that I get a NotImplementedError when specifying ImageType as the output. This is the code I used:

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(256, 256, 3)),
    tf_hub.KerasLayer("https://tfhub.dev/rishit-dagli/mirnet-tfjs/1")
])
model.build([1, 256, 256, 3])  # Batch input shape.

mlmodel = ct.convert(model,
                     convert_to="mlprogram",
                     inputs=[ct.ImageType()],
                     outputs=[ct.ImageType()])

If only the inputs are specified as ImageType, then no error occurs, but when I include a specification for the outputs as ImageType, I get this error:

NotImplementedError: Image output 'Identity' has symbolic dimensions in its shape

FYI: I'm using TensorFlow 2.12 and coremltools 6.3. Is there any way around this? Or am I doing this wrong? I'm quite new to machine learning and Core ML, so any helpful input is much appreciated. Thanks in advance!
Post not yet marked as solved
0 Replies
477 Views
Hi all, I just tried to integrate my ML model (TF to CoreML) into my Xcode project, but couldn't create a performance report. As far as I'm aware, you only need to drag your .mlmodel file into the Navigator. I took this model from TF Hub and converted it to CoreML, and it has images as inputs and MultiArray as outputs (don't know if that has any significance). Other than that, I haven't made any changes to the model itself. If anyone could point me in the right direction that would be very much appreciated! I've included a screenshot of the error here:
Post not yet marked as solved
0 Replies
469 Views
First of all, this Vision API is amazing; the OCR is very accurate. I've been looking to multiprocess using the Vision API. I have about 2 million PDFs I want to OCR, and I want to run multiple threads / parallel processes to OCR each one. I tried PyObjC, but it does not work so well. Any suggestions on tackling this problem?
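A minimal sketch (editor's addition, not from the post) of doing the same work natively in Swift, where Vision parallelizes cleanly: each iteration gets its own request and handler, so nothing is shared across threads. pageURLs is a hypothetical array of pre-rendered page images (e.g. one image per PDF page, produced with PDFKit or Core Graphics beforehand).

import Foundation
import Vision

// OCR a batch of page images concurrently; returns recognized lines keyed by page index.
func recognizeText(in pageURLs: [URL]) -> [Int: [String]] {
    var results = [Int: [String]]()
    let lock = NSLock()

    DispatchQueue.concurrentPerform(iterations: pageURLs.count) { index in
        // Each iteration builds its own request and handler.
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(url: pageURLs[index], options: [:])
        guard (try? handler.perform([request])) != nil,
              let observations = request.results else { return }

        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        lock.lock()
        results[index] = lines
        lock.unlock()
    }
    return results
}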
Post not yet marked as solved
0 Replies
692 Views
It appears that some of the JAX core functions (in pjit, mlir) are not supported. Is this something to be supported in the future? For example, when I tested a diffrax example:

from diffrax import diffeqsolve, ODETerm, Dopri5
import jax.numpy as jnp

def f(t, y, args):
    return -y

term = ODETerm(f)
solver = Dopri5()
y0 = jnp.array([2., 3.])
solution = diffeqsolve(term, solver, t0=0, t1=1, dt0=0.1, y0=y0)

it generates an error saying EmitPythonCallback is not supported on Metal:

File ~/anaconda3/envs/jax-metal-0410/lib/python3.10/site-packages/jax/_src/interpreters/mlir.py:1787 in emit_python_callback
    raise ValueError(
ValueError: `EmitPythonCallback` not supported on METAL backend.

I understand that, currently, no M1 or M2 chips have multiple devices or can be arranged like that. Therefore, it may not be necessary to fully implement the p*** functions (pmap, pjit, etc.). But some powerful libraries use them, so it would be great if at least some workaround for the core functions were implemented. Or is there an easy fix for this?