Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.


Comparing Performance: CoreML Model Inference with CoreMLTools in Python vs. CoreML with Swift in a macOS Application
Hello fellow developers, I am currently developing an application involving machine learning models, specifically Core ML models, and I have encountered an intriguing issue that I hope to get some insights on. I'm planning to create a simple application with minimal UI, possibly using PyQt or similar tools, so I'm seeking a way to use the Neural Engine and GPU for Core ML model inference from Python. I discovered the predict API in coremltools, which allows for model inference, but I'm unsure whether its performance is on par with that of a properly built macOS application using Swift and the Neural Engine. Can anyone provide insight into whether there's a considerable difference in inference performance between these two methods? Is the performance of the coremltools predict API comparable to that of a full-fledged Swift macOS application leveraging the Neural Engine? Any clarification or guidance on this matter would be greatly appreciated. Thanks!
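For reference, a minimal sketch of driving Core ML inference from Python with coremltools; the model path, input name, and shape below are placeholders, not taken from the post. On macOS, coremltools' predict call goes through the same Core ML framework a Swift app uses, and the compute_units argument requests GPU/Neural Engine execution, so any remaining gap would most likely come from Python-side data conversion rather than model execution:

import numpy as np
import coremltools as ct

# Load the model and allow all compute units (CPU, GPU, Neural Engine).
model = ct.models.MLModel("YourModel.mlmodel", compute_units=ct.ComputeUnit.ALL)

# Placeholder input name and shape; check model.get_spec().description for the real ones.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = model.predict({"input": x})
print(out.keys())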
0 replies · 0 boosts · 726 views · Jul ’23
How to get recommendations for new user in MLRecommender model
I have a dataset with 3 columns: "item_id", "user_id", "rating". I created a Core ML MLRecommender model from this dataset. I want to use this model to get the top 10 predictions for a new user (not in the original dataset) who has rated a subset of the items in the dataset. I don't see any API in the Apple docs to do this: both recommendations APIs only seem to accept an existing user_id and get recommendations for that user. The WWDC tutorial talks about a prediction API to achieve this, but I don't see it in the Apple API documentation, and the code below from the WWDC tutorial cannot be used as-is, since it does not give details on how to create the HikingRouteRecommenderInput class it passes into the prediction API.

let hikes : [String : Double] = ["Granite Peak" : 5, "Wildflower Meadows" : 4]
let input = HikingRouteRecommenderInput(items: hikes, k: 5)
// Get results as sequence of recommended items
let results = try model.prediction(input: input)

Any pointers on how to get predictions for a new user would be greatly appreciated.
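For experimentation, a rough sketch of querying such a model from Python with coremltools; the input feature names "items" and "k" are assumptions for illustration (mirroring the WWDC snippet), so inspect the model's actual input descriptions first:

import coremltools as ct

model = ct.models.MLModel("Recommender.mlmodel")  # placeholder path
print(model.get_spec().description)               # shows the real input feature names

# "items" and "k" are hypothetical feature names for illustration only.
prediction = model.predict({
    "items": {"Granite Peak": 5.0, "Wildflower Meadows": 4.0},
    "k": 10,
})
print(prediction)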
0 replies · 0 boosts · 394 views · Jul ’23
CoreML Converter Missing Tensorflow Package
I am trying to convert my TensorFlow 2 model to a Core ML model so I can deploy it to a mobile app. However, I continually get the error:

ValueError: Converter was called with source="tensorflow", but missing tensorflow package

I am working in a virtual environment with Python 3.7, TensorFlow 2.11, and coremltools 5.3.1. I had saved the TensorFlow model using tensorflow.saved_model.save and was attempting to convert the model with the following:

import coremltools as ct

image_input = ct.ImageType(shape=(1, 250, 250, 3,), bias=[-1, -1, -1], scale=1/255)
classifier_config = ct.ClassifierConfig(['Billy', 'Not_Billy'])
core_model = ct.convert(
    <path_to_saved_model>,
    convert_to='mlprogram',
    inputs=[image_input],
    classifier_config=classifier_config,
    source='tensorflow'
)

I keep receiving this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/7n/vj_bf6q122bg43h_xm957hp80000gn/T/ipykernel_11024/1565729572.py in <module>
      6     inputs=[image_input],
      7     classifier_config=classifier_config,
----> 8     source='tensorflow'
      9 )

~/Documents/Python/.venv/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline)
    466     _validate_conversion_arguments(model, exact_source, inputs, outputs_as_tensor_or_image_types,
    467                                    classifier_config, compute_precision,
--> 468                                    exact_target, minimum_deployment_target)
    469
    470     if pass_pipeline is None:

~/Documents/Python/.venv/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py in _validate_conversion_arguments(model, exact_source, inputs, outputs, classifier_config, compute_precision, convert_to, minimum_deployment_target)
    722     if exact_source == "tensorflow" and not _HAS_TF_1:
    723         raise ValueError(
--> 724             'Converter was called with source="tensorflow", but missing ' "tensorflow package"
    725         )
    726

ValueError: Converter was called with source="tensorflow", but missing tensorflow package
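Worth noting from the traceback itself: the raising branch checks exact_source == "tensorflow" and not _HAS_TF_1, i.e. an explicit source='tensorflow' takes the TensorFlow 1 path. A quick sanity check, run inside the same virtual environment that runs the conversion (the expected outputs in the comments are illustrative):

import sys
print(sys.executable)       # confirm the active interpreter is the venv's

import tensorflow as tf     # must import cleanly in this same environment
print(tf.__version__)       # expecting a 2.x version

import coremltools as ct
print(ct.__version__)

If TensorFlow 2 imports cleanly here, it may also help to drop the source='tensorflow' argument (or pass source='auto', the default) so coremltools auto-detects TF2 instead of taking the TF1-only branch shown in the traceback.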
1 reply · 0 boosts · 569 views · Jul ’23
Jax-metal breaks jax.numpy.take()
Test script (imports added for completeness):

import jax
import jax.numpy as jnp
import numpy as np

print(jax.__version__)
print(jnp.take(jnp.arange(5).astype(float), jnp.arange(7)))
print(jnp.take(jnp.arange(5).astype(float), jnp.arange(7), mode='clip'))
print(np.take(np.arange(5).astype(float), np.arange(7), mode='clip'))

Output with jax 0.4.11, jaxlib 0.4.10, without jax-metal:

0.4.11
[ 0. 1. 2. 3. 4. nan nan]
[0. 1. 2. 3. 4. 4. 4.]
[0. 1. 2. 3. 4. 4. 4.]

Output with jax 0.4.11, jaxlib 0.4.10, jax-metal 0.0.3:

0.4.11
[0. 1. 2. 3. 4. 0. 0.]
[0. 1. 2. 3. 4. 0. 0.]
[0. 1. 2. 3. 4. 4. 4.]

With jax-metal installed, out-of-bounds indices are filled with 0 instead of NaN (default mode) or the clipped edge value (mode='clip'); plain NumPy is unaffected.
0 replies · 0 boosts · 416 views · Jul ’23
CoreML gives unexpected output shape for a model with dynamic input shape
Hello. I am manually constructing some models with the CoreML protobuf format. When the model has flexible input shapes, I am seeing unexpected output shapes in some cases after running prediction(from:). The model is a single matrix multiplication, A*B (one innerProduct layer), and the dynamic dimension is the first dimension of the only input A (B is constant). What I observe is that sometimes there are additional leading ones in the output shape. Some test program output showing the shapes:

running model: dynamic_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 1, 1, 1, 4]
running model: dynamic_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: dynamic_input_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]
running model: dynamic_input_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: static_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]

I've put the model generation and test code below. Am I specifying the dynamic input/output shapes correctly when creating the .mlmodel? Is the output shape given by CoreML expected, and if so, why are there leading ones? Would appreciate any input.

Python script to generate the .mlmodel files (coremltools version is 6.3.0):

from coremltools.proto.Model_pb2 import Model
from coremltools.proto.FeatureTypes_pb2 import ArrayFeatureType
from coremltools.proto.NeuralNetwork_pb2 import EXACT_ARRAY_MAPPING

def build_model(with_dynamic_input_shape: bool, with_dynamic_output_shape: bool):
    model = Model()
    model.specificationVersion = 4

    input = model.description.input.add()
    input.name = "A"
    input.type.multiArrayType.shape[:] = [1, 2]
    input.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32
    if with_dynamic_input_shape:
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 2
        range.upperBound = 2

    output = model.description.output.add()
    output.name = "Y"
    output.type.multiArrayType.shape[:] = [1, 4]
    output.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32
    if with_dynamic_output_shape:
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 4
        range.upperBound = 4

    layer = model.neuralNetwork.layers.add()
    layer.name = "MatMul"
    layer.input[:] = ["A"]
    layer.output[:] = ["Y"]
    layer.innerProduct.inputChannels = 2
    layer.innerProduct.outputChannels = 4
    layer.innerProduct.weights.floatValue[:] = [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]

    model.neuralNetwork.arrayInputShapeMapping = EXACT_ARRAY_MAPPING
    return model

if __name__ == "__main__":
    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=True)
    with open("dynamic_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=False)
    with open("dynamic_input_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=False, with_dynamic_output_shape=False)
    with open("static_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

Swift program to run the models and print the output shape:

import Foundation
import CoreML

func makeFloatShapedArray(shape: [Int]) -> MLShapedArray<Float> {
    let size = shape.reduce(1, *)
    let values = (0 ..< size).map { Float($0) }
    return MLShapedArray(scalars: values, shape: shape)
}

func runModel(model_path: URL, m: Int) throws {
    print("running model: \(model_path.lastPathComponent)")
    let compiled_model_path = try MLModel.compileModel(at: model_path)
    let model = try MLModel(contentsOf: compiled_model_path)
    let a = MLMultiArray(makeFloatShapedArray(shape: [m, 2]))
    print("A shape: \(a.shape)")
    let inputs = try MLDictionaryFeatureProvider(dictionary: ["A": a])
    let outputs = try model.prediction(from: inputs)
    let y = outputs.featureValue(for: "Y")!.multiArrayValue!
    print("Y shape: \(y.shape)")
}

func modelUrl(_ model_file: String) -> URL {
    return URL(filePath: "/path/to/models/\(model_file)")
}

try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("static_shape.mlmodel"), m: 1)
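As a cross-check on the hand-written shape ranges, coremltools also ships flexible-shape helpers that populate the same protobuf fields; a sketch, assuming it is applied to the spec returned by build_model before serialization:

from coremltools.models.neural_network import flexible_shape_utils

spec = build_model(with_dynamic_input_shape=False, with_dynamic_output_shape=False)
# Declare A's first dimension unbounded (upper bound -1) and its second fixed at 2;
# this should write the same shapeRange.sizeRanges entries as the manual code above.
flexible_shape_utils.set_multiarray_ndshape_range(
    spec, feature_name="A", lower_bounds=[1, 2], upper_bounds=[-1, 2])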
0 replies · 0 boosts · 472 views · Aug ’23
coremltools convert ImageType input shape format [batch_size, height, width, 3]
I need to convert a super-resolution model to an .mlmodel, but the input shape of the model is designed in the format [batch_size, height, width, 3]. I convert with the following code:

import tensorflow as tf
import tensorflow_hub as hub  # import added for completeness
import coremltools as ct

model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")
tf.saved_model.save(model, "esrgan_saved_model")

input_type = ct.ImageType(shape=(1, 192, 192, 3), color_layout=ct.colorlayout.RGB)
output_type = ct.ImageType(color_layout=ct.colorlayout.RGB)
mlmodel = ct.convert(
    './esrgan_saved_model',
    inputs=[input_type],
    outputs=[output_type],
    source="tensorflow")
mlmodel.save('esrgan.mlmodel')

I get this error:

Shape of the RGB/BGR image output, must be of kind (1, 3, H, W), i.e., first two dimensions must be (1, 3)

ImageType only seems to support input and output in the [batch_size, 3, height, width] layout. What should I do to convert a model in the [batch_size, height, width, 3] format to an .mlmodel?
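One possible direction, sketched under the assumption that the restriction applies to the image output layout: wrap the TF-Hub model in a tf.function that transposes its NHWC output to NCHW before saving, then convert the wrapped SavedModel. The signature shape matches the ImageType above; none of this is verified against this exact model:

import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")

@tf.function(input_signature=[tf.TensorSpec([1, 192, 192, 3], tf.float32)])
def nchw_output(x):
    y = model(x)                          # NHWC output from the hub model
    return tf.transpose(y, [0, 3, 1, 2])  # reorder to (1, 3, H, W) for Core ML

tf.saved_model.save(model, "esrgan_nchw", signatures={"serving_default": nchw_output})
# Then run ct.convert('./esrgan_nchw', ...) as before.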
1 reply · 0 boosts · 728 views · Aug ’23
ML inference on the ANE
Hi, does anyone have a good link for ML inference times on the M2 chip? The Coral Edge TPU is documented in a good format: the model, how much data was used in training, model size, and accuracy. It's just hard to find this info for Apple silicon, or I'm looking in the wrong place. It would be good to have a cheat sheet of public optimized models, their use cases, and their model parameters, to find a perfect fit for a given problem. Thanks for your time.
0 replies · 0 boosts · 394 views · Aug ’23
CreateML Assertion Failure when training Hand Pose model with 5k+ static images
Hey all, we are currently training a Hand Pose model with the current release of CreateML, and during the feature extraction phase we get the following error:

Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]

We have tried to search for this online and mitigate the issue, but we are getting nowhere. Has anyone else experienced this issue?
3 replies · 4 boosts · 796 views · Aug ’23
Issues with new MLE5Engine in Core ML
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:

1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak. The memory consumption is increasing rapidly with each new frame.

Concerning 2): there are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

0   _malloc_zone_malloc_instrumented_or_legacy
1   _CFRuntimeCreateInstance
2   CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3   CVPixelBuffer::alloc(_CFAllocator const*)
4   CVPixelBufferCreate
5   +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6   MLE5OutputPixelBufferFeatureValueByCopyingTensor
7   -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8   -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9   __36-[MLE5OutputPortBinder featureValue]_block_invoke
10  _dispatch_client_callout
11  _dispatch_lane_barrier_sync_invoke_and_complete
12  -[MLE5OutputPortBinder featureValue]
13  -[MLE5OutputPort featureValue]
14  -[MLE5ExecutionStreamOperation outputFeatures]
15  -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16  -[MLE5Engine _predictionFromFeatures:options:error:]
17  -[MLE5Engine predictionFromFeatures:options:error:]
18  -[MLDelegateModel predictionFromFeatures:options:error:]
19  StyleModel.prediction(input:options:)

When manually disabling the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
4 replies · 0 boosts · 1.5k views · Aug ’23
Tensor indexing
I have probably found a bug when indexing tensors with tensorflow-metal. It is best demonstrated by the following minimal example:

import tensorflow as tf
print(tf.constant([[1, 2], [3, 4]], dtype=tf.float32)[..., :2, 1])

The expected result is [2, 4] (i.e. the second column of the matrix), which is what I get when tensorflow-metal is not installed (and on other non-Apple machines), but using tensorflow-metal I get [2, 2] (i.e. the first element of the column is repeated; this also happens if there are more than two rows). The following conditions seem to be necessary to trigger this behavior:

- dtype must be float32; it works correctly with float64, int32 and int64.
- The sequence of ellipsis (for batch axes), stride (for the row), index (for the column) is critical; it does work correctly when the column is also a stride, and it does work if the row is a single number or the "full" slice :.
- The indexed tensor does not actually have batch axes (the ellipsis is there because it could have some).

The original context is a function that gets a tensor with 0 or more batch axes containing 4x4 homogeneous matrices, from which I want to extract the translation, i.e. the first three rows of the last column, which leads to [..., :3, 3].

Versions:
- python 3.9.6 (system)
- tensorflow-macos 2.13.0
- tensorflow-metal 1.0.1
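A possible workaround, based purely on the conditions listed above (the indexing works when the column is also a stride): take the column with a one-element slice and squeeze the trailing axis afterwards. A sketch, not verified on tensorflow-metal:

import tensorflow as tf

def translation(t):
    # t: (..., 4, 4) homogeneous matrices; returns the (..., 3) translations.
    col = t[..., :3, 3:4]           # stride instead of index -> shape (..., 3, 1)
    return tf.squeeze(col, axis=-1)

print(translation(tf.eye(4, batch_shape=[2])))  # expected: [[0. 0. 0.] [0. 0. 0.]]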
0 replies · 0 boosts · 313 views · Aug ’23
Complete process of how to add my voice as speech synthesis
Hello, I am deaf and blind, so my Apple studies are in text via Braille. One question: how do I add my voice for speech synthesis? Do I have to record it somewhere first? What is the complete process, starting with recording my voice? Do I have to record my voice reading something and then add it as a synthesized voice? I haven't found a text explaining the whole process; I found one about authorizing Personal Voice, but nothing covering everything from the recording onward. Thanks!
3 replies · 0 boosts · 1.2k views · Aug ’23
AVSpeechSynthesisVoice.speechVoices() Includes Voices That Aren't Available after Upgrading iOS
AVSpeechSynthesisVoice.speechVoices() returns voices that are no longer available after upgrading from iOS 16 to iOS 17 (although this has been an issue for a long time, I think). To reproduce:

1. On iOS 16, download 1 or more enhanced voices under "Accessibility > Spoken Content > Voices".
2. Upgrade to iOS 17.
3. Call AVSpeechSynthesisVoice.speechVoices() and note that the voices installed in step 1 are still present, yet they are no longer downloaded, therefore they don't work.

And there is no property on AVSpeechSynthesisVoice to indicate if the voice is still available or not. This is a problem for apps that allow users to choose among the available system voices. I receive many support emails surrounding iOS upgrades about this issue; I have to tell them to re-download the voices, which is not obvious to them. I've created a feedback item for this as well (FB12994908).
1 reply · 1 boost · 745 views · Aug ’23
Error with GPU JIT function on a GPU tensor: UNIMPLEMENTED: DefaultDeviceAssignment not supported for Metal Client
Hi everyone, I'm trying to test some functionality of jax-metal and got this error. Any help please?

import jax
import jax.numpy as jnp
import numpy as np

def f(x):
    y1 = x + x*x + 3
    y2 = x*x + x*x.T
    return y1*y2

x = np.random.randn(3000, 3000).astype('float32')
jax_x_gpu = jax.device_put(jnp.array(x), jax.devices('METAL')[0])
jax_x_cpu = jax.device_put(jnp.array(x), jax.devices('cpu')[0])

jax_f_gpu = jax.jit(f, backend='METAL')
jax_f_gpu(jax_x_gpu)

---------------------------------------------------------------------------
XlaRuntimeError                           Traceback (most recent call last)
Cell In[1], line 17
     13 jax_x_cpu = jax.device_put(jnp.array(x), jax.devices('cpu')[0])
     15 jax_f_gpu = jax.jit(f, backend='METAL')
---> 17 jax_f_gpu(jax_x_gpu)

    [... skipping hidden 5 frame]

File ~/.virtualenvs/jax-metal/lib/python3.11/site-packages/jax/_src/pjit.py:817, in _create_sharding_with_device_backend(device, backend)
    814 elif backend is not None:
    815     assert device is None
    816 out = SingleDeviceSharding(
--> 817     xb.get_backend(backend).get_default_device_assignment(1)[0])
    818 return out

XlaRuntimeError: UNIMPLEMENTED: DefaultDeviceAssignment not supported for Metal Client.
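A possible workaround, sketched from how the error arises: the traceback fails while handling the backend='METAL' argument (it asks the Metal client for a default device assignment). Since jax_x_gpu is already committed to the Metal device via jax.device_put, an unparameterized jit should dispatch the computation to that device without needing a default assignment; untested against jax-metal:

jax_f = jax.jit(f)          # no backend argument
result = jax_f(jax_x_gpu)   # jitted code runs on the device its committed input lives on
print(result.shape)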
0 replies · 0 boosts · 551 views · Aug ’23
Unable to use Personal Voice in background playback
Hi, when attempting to use my Personal Voice with AVSpeechSynthesizer while the application is in the background, I receive the message below:

> Cannot use AVSpeechSynthesizerBufferCallback with Personal Voices, defaulting to output channel.

Other voices can be used without issue. Is this a published limitation of Personal Voice within applications, i.e. no background playback?
1 reply · 0 boosts · 621 views · Aug ’23
tensorflow-metal 0.5.0: converges; tensorflow-metal 1.0.1: fails to converge
MacBook Pro M2 Max / 64 GB / macOS 13.2.1 (22D68)

import tensorflow as tf

def runMnist(device = '/device:CPU:0'):
    with tf.device(device):
        #tf.config.set_default_device(device)
        mnist = tf.keras.datasets.mnist
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train, x_test = x_train / 255.0, x_test / 255.0
        model = tf.keras.models.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(128, activation='relu'),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(10)
        ])
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])
        model.fit(x_train, y_train, epochs=10)

runMnist(device = '/device:CPU:0')
runMnist(device = '/device:GPU:0')
4 replies · 1 boost · 1.1k views · Aug ’23
How to Make a Personal Voice Recording in My Language
I've been deaf and blind for 15 years. I'm not good at pronunciation in English, since I don't hear what I say, much less hear it from others. When I went to read the phrases to record my Personal Voice under Accessibility > Personal Voice, the 150 phrases to read were in English. How do I record phrases in Brazilian Portuguese? I speak Portuguese well; my English pronunciation is very poor, and my deafness has contributed to that. Help me.
1 reply · 0 boosts · 644 views · Aug ’23
Tensorflow Metal Malfunctioning Completely
I am just starting to learn neural networks. If I run my code and try to fit a simple trigonometric function, the model builds a good-looking fit. If I pip install tensorflow-metal and run again, I get a straight line not resembling the non-linear function at all. If I uninstall tensorflow-metal, everything works again, which suggests there is something wrong with Metal. Any help would be appreciated; I would like to use Metal acceleration for the next steps in my project. Thank you.
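A minimal sketch of the kind of setup described, with assumed details (a small Keras MLP fitting sin(x)); running it with and without tensorflow-metal installed should reproduce the difference described above:

import numpy as np
import tensorflow as tf

# 1000 samples of a simple trigonometric function.
x = np.linspace(-np.pi, np.pi, 1000).astype("float32")[:, None]
y = np.sin(x)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=200, batch_size=64, verbose=0)

# A healthy fit reaches a small MSE; a flat-line fit leaves MSE near the variance of sin(x).
print(model.evaluate(x, y, verbose=0))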
2 replies · 1 boost · 671 views · Aug ’23