Post not yet marked as solved
Hi there, I am trying to combine this stereo vision code I have with the available hand tracking code (drawing when pinching), but I'm running into trouble. I believe the issue is that the hand tracking code sets up an AV capture session, whereas the stereo vision code uses SceneKit. Could you please give me some feedback on how to start integrating these two very different sets of code?
StereoVision
Hand Tracking
Hey guys,
Just got a brand new M1 MacBook and set up all my developer tools. However, I'm stuck trying to set up the TensorFlow Object Detection API. I'm having an issue copying and installing the setup.py file, as I keep getting dependency issues. For reference, here is a link to a guide that shows pretty much all the steps I'm taking (the one I followed when I first started): https://github.com/nicknochnack/TFODCourse/blob/main/2.%20Training%20and%20Detection.ipynb.
Where I'm getting stuck is the 3rd part of section 1. I get dependency conflicts from tensorflow-text (tf-models-official 2.6.0 depends on tensorflow-text>=2.5.0) and tensorflow-addons (tf-models-official 2.5.1 depends on tensorflow-addons). I looked up whether anyone else was having the same problem, and sure enough I found a couple of issues on GitHub. Mainly this one, directly on the tensorflow/addons repo: https://github.com/tensorflow/addons/issues/2503.
The proposed solution was to follow the steps in this PR (https://github.com/tensorflow/addons/pull/2504, linked at the bottom of the issue), which seemed to work for some of the members. However, when I go to bazel build I encounter two main errors. The first is a Symbol not found: _TF_AllocateOutput, Expected in: flat namespace error, which I believe leads to bazel not building correctly (which is the second error).
If anybody has any idea how to get TFOD set up, or how to fix any of these issues, thanks in advance! Even some pointers or ideas are welcome, as I'm kind of at a dead end for using TFOD natively on my Mac.
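Not an answer, but when untangling conflicts like this it can help to first confirm exactly which versions are actually installed in the active environment. A small stdlib-only sketch (the helper name installed_version is my own):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Check the packages involved in the reported conflict.
for pkg in ("tensorflow-text", "tensorflow-addons", "tf-models-official"):
    print(f"{pkg}: {installed_version(pkg)}")
```

Comparing these against the version ranges pip complains about usually shows which pin to relax.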
hello,
When I used Xcode to generate the model encryption key, an error was reported: 'Failed to Generate Encryption Key. Sign in with your Apple ID in the Apple ID pane in System Preferences and retry.' But I have logged in with my Apple ID in System Preferences, and this error still occurs. I reinstalled Xcode and logged in to my Apple ID again; the error persists.
Xcode Version 12.4
macOS Catalina 10.15.7
thanks
After training and exporting my model from Playgrounds using Create ML, I wasn't able to instantiate the mlmodel. The error logs show a warning, "Found 1 resource(s) that may be unavailable in Swift Playgrounds", along with the file path to my mlmodel. When opening the mlmodel, this message shows up: "Model class has not been generated yet." I have tried building and running the project, but the message doesn't change. Any tips for integrating a custom mlmodel trained with Create ML into a Swift Playgrounds app? Thanks.
I'm trying to transfer this Xcode sample project to a Playground app project.
However, when I move all the Swift files and the Core ML file called 'ExerciseClassifier.mlmodel' from the folder to the Playground app project, it shows the error "Cannot find type 'ExerciseClassifier' in scope".
What can I do to remove the error and make a properly working project in Playgrounds?
I am doing my machine learning project in an Anaconda Jupyter notebook (I have also tried Spyder), but in both of them the kernel automatically dies. Can anyone tell me how I can resolve this problem on a MacBook Air M1?
Hi, while I was able to successfully retrieve MLModelCollection with a list of model identifiers from Apple's CoreML Deployment Dashboard, loading encrypted models from a collection results in the following error:
NSUnderlyingError=0x281ffb810 {Error Domain=com.apple.CoreML Code=3 "failed to invoke mremap_encrypted with result = -1, error = 12" UserInfo={NSLocalizedDescription=failed to invoke mremap_encrypted with result = -1, error = 12}}}
I use the same MLModel.load(contentsOf:configuration:completionHandler:) method with model URLs (from MLModelCollection), which works just fine for non-encrypted models.
Is there any workaround for this issue?
Is there a Machine Learning API that can take handwriting (either as a bitmap or as a list of points) and convert it to text?
I know Scribble can be used to allow handwriting input into text fields, but in this API it is Scribble which controls the rendering of the handwriting. Is there an API where my app can render the handwriting and get information about the text content?
In the Keynote demo Craig was able to get text content from a photo of a whiteboard. Are there APIs which would allow an app developer to create something similar?
Can anyone clarify which devices or chips allow the so-called "neural engine" to be used for machine learning model training, as opposed to inference (or prediction)? And can external libraries such as TensorFlow (perhaps via the browser-based JavaScript library) access the Neural Engine in any manner for training or inference?
Dear all,
I found the announced built-in sound classifier pretty amazing.
I would appreciate it if you could point me to a link or a document that lists all 300 sound classes mentioned in https://developer.apple.com/videos/play/wwdc2021/10036/.
Thank you
Hi everyone, I'm trying to use the Create ML hand action classifier to detect some simple actions. I'm having some trouble because the model only detects one hand at a time in the scene (even in the model's preview, without any coding), and I would need both hands. Is this a bug, or am I doing something wrong?
Thank you in advance
Most examples, including those in the documentation, of using Core ML with iOS involve creating the model with Xcode on a Mac and then including the Xcode-generated MLFeatureProvider class in the iOS app and (re)compiling the app. However, it's also possible to download an uncompiled model directly into an iOS app and then compile it (in a background task) - but then there's no MLFeatureProvider class. The same applies when using Create ML in an iOS app (iOS 15 beta) - there's no automatically generated MLFeatureProvider. So how do you get one? I've seen a few queries here and elsewhere related to this problem, but couldn't find any clear examples of a solution. So after some experimentation, here's my take on how to go about it:
Firstly, if you don't know what features the model uses, print the model description, e.g. print("Model: ", mlModel!.modelDescription), which gives:
inputs: (
"course : String",
"lapDistance : Double",
"cumTime : Double",
"distance : Double",
"lapNumber : Double",
"cumDistance : Double",
"lapTime : Double"
)
outputs: (
"duration : Double"
)
predictedFeatureName: duration
............
A prediction is created by guard let durationOutput = try? mlModel!.prediction(from: runFeatures) ...
where runFeatures is an instance of a class that provides a set of feature names and the value of each feature to be used in making a prediction. So, for my model that predicts run duration from course, lap number, lap time etc the RunFeatures class is:
class RunFeatures : MLFeatureProvider {
    var featureNames: Set<String> = ["course", "distance", "lapNumber", "lapDistance", "cumDistance", "lapTime", "cumTime", "duration"]
    var course : String = "n/a"
    var distance : Double = -0.0
    var lapNumber : Double = -0.0
    var lapDistance : Double = -0.0
    var cumDistance : Double = -0.0
    var lapTime : Double = -0.0
    var cumTime : Double = -0.0

    func featureValue(for featureName: String) -> MLFeatureValue? {
        switch featureName {
        case "distance":
            return MLFeatureValue(double: distance)
        case "lapNumber":
            return MLFeatureValue(double: lapNumber)
        case "lapDistance":
            return MLFeatureValue(double: lapDistance)
        case "cumDistance":
            return MLFeatureValue(double: cumDistance)
        case "lapTime":
            return MLFeatureValue(double: lapTime)
        case "cumTime":
            return MLFeatureValue(double: cumTime)
        case "course":
            return MLFeatureValue(string: course)
        default:
            return MLFeatureValue(double: -0.0)
        }
    }
}
Then in my DataModel, prior to prediction, I create an instance of RunFeatures with the input values on which I want to base the prediction:
var runFeatures = RunFeatures()
runFeatures.distance = 3566.0
runFeatures.lapNumber = 1.0
runFeatures.lapDistance = 1001.0
runFeatures.lapTime = 468.0
runFeatures.cumTime = 468.0
runFeatures.cumDistance = 1001.0
runFeatures.course = "Wishing Well Loop"
NOTE there’s no need to provide the output feature (“duration”) here, nor in the featureValue method above but it is required in featureNames.
Then get the prediction with guard let durationOutput = try? mlModel!.prediction(from: runFeatures)
Regards,
Michaela
I'm now running TensorFlow models on my MacBook Air 2020 (M1), but I can't find a way to monitor the usage of the 16 Neural Engine cores to fine-tune my ML tasks.
Activity Monitor only reports CPU% and GPU%, and I can't find any APIs in the Mach include files in the macOS 11.1 SDK, or any documentation, that would let me slap something together from scratch in C.
Could anyone point me in some direction on how to get hold of an API for Neural Engine usage? Any indicator I could grab would be a start. It looks like this has been omitted from all SDK documentation and general userland; I've only found a ledger_tag_neural_footprint attribute, which looks memory-related, and that's it.
I just got my new MacBook Pro with the M1 Max chip and am setting up Python. I've tried several combinations of settings to test speed, and now I'm quite confused. First, my questions:
Why is Python running natively on the M1 Max much (~100%) slower than on my old MacBook Pro 2016 with an Intel i5?
On the M1 Max, why is there no significant speed difference between the native run (via Miniforge) and the run via Rosetta (via Anaconda), which is supposed to be ~20% slower?
On the M1 Max with a native run, why is there no significant speed difference between conda-installed NumPy and TensorFlow-installed NumPy, which is supposed to be faster?
On the M1 Max, why is running in the PyCharm IDE consistently ~20% slower than running from the terminal? This doesn't happen on my old Intel Mac.
Evidence supporting my questions is as follows:
Here are the settings I've tried:
1. Python installed by:
Miniforge-arm64, so that Python runs natively on the M1 Max chip. (Checked in Activity Monitor: the Kind of the python process is Apple.)
Anaconda: then Python runs via Rosetta. (Checked in Activity Monitor: the Kind of the python process is Intel.)
2. NumPy installed by:
conda install numpy: NumPy from the original conda-forge channel, or pre-installed with Anaconda.
Apple TensorFlow: with Python installed by Miniforge, I directly install TensorFlow, and NumPy is installed along with it. It's said that NumPy installed this way is optimized for Apple M1 and will be faster. Here are the installation commands:
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
3. Run from:
Terminal.
PyCharm (Apple Silicon version).
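As a sanity check on the tensorflow-metal install above, it may be worth asking TensorFlow which devices it actually sees (a sketch; assumes TensorFlow is importable in the active environment):

```python
import tensorflow as tf

# A Metal-enabled install should list a GPU device here;
# if only a CPU device appears, tensorflow-metal is not active.
print(tf.config.list_physical_devices())
print("GPU available:", len(tf.config.list_physical_devices('GPU')) > 0)
```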
Here is the test code:
import time
import numpy as np

np.random.seed(42)
a = np.random.uniform(size=(300, 300))
runtimes = 10
timecosts = []
for _ in range(runtimes):
    s_time = time.time()
    for i in range(100):
        a += 1
        np.linalg.svd(a)
    timecosts.append(time.time() - s_time)

print(f'mean of {runtimes} runs: {np.mean(timecosts):.5f}s')
and here are the results:
+-----------------------------------+-----------------------+--------------------+
| Python installed by (run on)→ | Miniforge (native M1) | Anaconda (Rosseta) |
+----------------------+------------+------------+----------+----------+---------+
| Numpy installed by ↓ | Run from → | Terminal | PyCharm | Terminal | PyCharm |
+----------------------+------------+------------+----------+----------+---------+
| Apple Tensorflow | 4.19151 | 4.86248 | / | / |
+-----------------------------------+------------+----------+----------+---------+
| conda install numpy | 4.29386 | 4.98370 | 4.10029 | 4.99271 |
+-----------------------------------+------------+----------+----------+---------+
This is quite slow. For comparison:
running the same code on my old MacBook Pro 2016 with the i5 chip costs 2.39917s;
another post reports that on an M1 chip (not Pro or Max), miniforge + conda-installed NumPy takes 2.53214s, and miniforge + Apple TensorFlow NumPy takes 1.00613s;
you may also try it on your own.
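For what it's worth, reporting only the mean can hide run-to-run variance; a stdlib-only timing harness that also reports the best run might look like this (the helper name bench is my own):

```python
import time
import statistics

def bench(fn, repeats=10):
    """Time fn() `repeats` times; return (mean, best) in seconds."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, high-resolution clock
        fn()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), min(times)

# Example workload standing in for the SVD loop.
mean_s, best_s = bench(lambda: sum(i * i for i in range(100_000)))
print(f"mean {mean_s:.5f}s, best {best_s:.5f}s")
```

The minimum is often the more stable number for comparing machines, since it is least affected by background load.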
Here is the CPU information details:
My old i5:
$ sysctl -a | grep -e brand_string -e cpu.core_count
machdep.cpu.brand_string: Intel(R) Core(TM) i5-6360U CPU @ 2.00GHz
machdep.cpu.core_count: 2
My new M1 Max:
% sysctl -a | grep -e brand_string -e cpu.core_count
machdep.cpu.brand_string: Apple M1 Max
machdep.cpu.core_count: 10
I followed the instructions strictly from tutorials - so why does all this happen? Is it because of flaws in my installation, or because of the M1 Max chip? Since my work relies heavily on local runs, local speed is very important to me. Any suggestions toward a possible solution, or any data points from your own device, would be greatly appreciated :)
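One quick way to verify the native-vs-Rosetta distinction without Activity Monitor is to ask the interpreter itself; under Rosetta, platform.machine() reports x86_64 rather than arm64:

```python
import platform

# 'arm64' means the interpreter runs natively on Apple silicon;
# 'x86_64' on an M1 Mac means it runs under Rosetta 2 translation.
print(platform.machine())
print(platform.platform())
```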
Hi, I have been following the WWDC21 "dynamic training on iOS" session. I have been able to get the training working, with the iterations etc. being printed to the console as training progresses.
However, I am unable to retrieve the checkpoints or the result/model once training has completed (or while it is in progress); nothing in the callback fires.
If I try to create a model from the sessionDirectory, it returns nil (even though training has clearly completed).
Can someone please help or provide pointers on how to access the results/checkpoints so that I can make an MLModel and use it?
var subscriptions = [AnyCancellable]()
let job = try! MLStyleTransfer.train(trainingData: datasource, parameters: trainingParameters, sessionParameters: sessionParameters)

job.result.sink { result in
    print("result ", result)
} receiveValue: { model in
    try? model.write(to: sessionDirectory)
    let compiledURL = try? MLModel.compileModel(at: sessionDirectory)
    let mlModel = try? MLModel(contentsOf: compiledURL!)
}
.store(in: &subscriptions)
This also does not work:
job.checkpoints.sink { checkpoint in
    // Process checkpoint
    let model = MLStyleTransfer(trainingData: checkpoint)
}
.store(in: &subscriptions)
This is the printout in the console:
Using CPU to create model
+--------------+--------------+--------------+--------------+--------------+
| Iteration | Total Loss | Style Loss | Content Loss | Elapsed Time |
+--------------+--------------+--------------+--------------+--------------+
| 1 | 64.9218 | 54.9499 | 9.97187 | 3.92s |
2022-02-20 15:14:37.056251+0000 DynamicStyle[81737:9175431] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
| 2 | 61.7283 | 24.6832 | 8.30343 | 9.87s |
| 3 | 59.5098 | 27.7834 | 11.7603 | 16.19s |
| 4 | 56.2737 | 16.163 | 10.985 | 22.35s |
| 5 | 53.0747 | 12.2062 | 12.0783 | 28.08s |
+--------------+--------------+--------------+--------------+--------------+
Any help would be appreciated on how to retrieve models.
Thanks
Describe the bug
When I convert (with the coremltools framework) a scripted model that uses torch.nn.functional.upsample_bilinear() in its forward() function, I get RuntimeError: PyTorch convert function for op 'uninitialized' not implemented.
What should I do to resolve this error? Please help.
Trace
% python3 pytorch_sandbox.py
Converting Frontend ==> MIL Ops: 47%|██▎ | 20/43 [00:00<00:00, 25123.11 ops/s]
Traceback (most recent call last):
File "pytorch_sandbox.py", line 22, in <module>
coreml_model = ct.convert(
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 326, in convert
mlmodel = mil_convert(
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 182, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 209, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 300, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 104, in __call__
return load(*args, **kwargs)
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 50, in load
return _perform_torch_convert(converter, debug)
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 95, in _perform_torch_convert
raise e
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 87, in _perform_torch_convert
prog = converter.convert()
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 240, in convert
convert_nodes(self.context, self.graph)
File "/Users/user/Projects/project/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 74, in convert_nodes
raise RuntimeError(
RuntimeError: PyTorch convert function for op 'uninitialized' not implemented.
To Reproduce
import torch
import torch.nn as nn
import torch.nn.functional as F
import coremltools as ct


class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()

    def forward(self, x):
        return F.upsample_bilinear(x, size=512)


m = M()
scripted_m = torch.jit.script(m)
example_input = torch.rand(1, 1, 64, 64)
image_input = ct.ImageType(name="input_1", shape=example_input.shape)
coreml_model = ct.convert(
    scripted_m,
    source='pytorch',
    inputs=[image_input]
)
System environment (please complete the following information):
coremltools version: 5.1.0
OS: MacOS
macOS version: 12.1
XCode version : 13.1
How you install python: system + venv
python version: 3.8.10
any other relevant information:
torch version: 1.9.0
torchvision version: 0.10.0
I am training a simple Neural Network on my M1 Max with the following code in Tensorflow:
import tensorflow as tf


def get_and_pad_imdb_dataset(num_words=10000, maxlen=None, index_from=2):
    from tensorflow.keras.datasets import imdb

    # Load the reviews
    (x_train, y_train), (x_test, y_test) = imdb.load_data(path='imdb.npz',
                                                          num_words=num_words,
                                                          skip_top=0,
                                                          maxlen=maxlen,
                                                          start_char=1,
                                                          oov_char=2,
                                                          index_from=index_from)
    x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train,
                                                            maxlen=None,
                                                            padding='pre',
                                                            truncating='pre',
                                                            value=0)
    x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test,
                                                           maxlen=None,
                                                           padding='pre',
                                                           truncating='pre',
                                                           value=0)
    return (x_train, y_train), (x_test, y_test)


def get_imdb_word_index(num_words=10000, index_from=2):
    imdb_word_index = tf.keras.datasets.imdb.get_word_index(
        path='imdb_word_index.json')
    imdb_word_index = {key: value + index_from for
                       key, value in imdb_word_index.items()
                       if value <= num_words - index_from}
    return imdb_word_index


(x_train, y_train), (x_test, y_test) = get_and_pad_imdb_dataset(maxlen=25)
imdb_word_index = get_imdb_word_index()
max_index_value = max(imdb_word_index.values())

embedding_dim = 16
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=max_index_value + 1, output_dim=embedding_dim, mask_zero=True),
    tf.keras.layers.LSTM(units=16),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', metrics=['accuracy'], optimizer='adam')
history = model.fit(x_train, y_train, epochs=3, batch_size=32)
I ran this code on Google Colab and it works perfectly fine without any problems.
However, on my M1 Max it just gets stuck at the very first epoch and does not progress at all (even after a couple of hours).
This is all I get from the output after calling the .fit method:
2022-02-15 23:44:20.097795: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
Epoch 1/3
2022-02-15 23:44:22.461438: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
I installed tensorflow on my machine following this guide: https://developer.apple.com/metal/tensorflow-plugin/
I am using a Conda environment with Miniforge, and the TensorFlow-related packages (obtained with conda list) are:
tensorboard 2.6.0 pyhd8ed1ab_1 conda-forge
tensorboard-data-server 0.6.0 py39hfb8cd70_1 conda-forge
tensorboard-plugin-wit 1.8.0 pyh44b312d_0 conda-forge
tensorflow-deps 2.7.0 0 apple
tensorflow-estimator 2.7.0 pypi_0 pypi
tensorflow-macos 2.7.0 pypi_0 pypi
tensorflow-metal 0.3.0 pypi_0 pypi
My python version is 3.9.0
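In case it helps to isolate whether the Metal plugin is the culprit, one debugging step (a sketch, not a confirmed fix) is to hide the GPU so everything runs on CPU, then check whether a tiny LSTM model trains at all:

```python
import numpy as np
import tensorflow as tf

# Hide all GPUs so the tensorflow-metal plugin is bypassed entirely.
# If training then progresses, the stall is likely in the GPU plugin.
tf.config.set_visible_devices([], 'GPU')

# Tiny stand-in model (not the IMDB network), just to check fit() moves.
x = np.random.rand(64, 10, 1).astype('float32')
y = np.random.randint(0, 2, size=(64, 1))
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(4, input_shape=(10, 1)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam')
history = model.fit(x, y, epochs=1, verbose=0)
print("loss after 1 epoch:", history.history['loss'][0])
```

Note that set_visible_devices must run before TensorFlow initializes any GPU device.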
Since there are only 80 class labels in the existing YOLOv3 Core ML model, I want to add some more categories to be used in my app. Can I do that? And if so, how?
I want to detect an image of a dart target (https://commons.wikimedia.org/wiki/File:WA_80_cm_archery_target.svg) in my iOS app.
For that I am creating an object detector with CreateML. I am using the Transfer Learning algorithm and 114 annotated images, the validation data is set to auto.
After 2000 iterations I got the following stats: 96% Training and 0% Validation.
As I understand it, the percentages are the I/U 50% scores (the percentage of bounding boxes whose intersection-over-union ratio exceeds 50%).
If the validation data is automatically chosen from the set of images, how can its score be 0%?
I am a student from a non-English-speaking country. I would like to get the sample code from the "Make apps smarter with Natural Language" session on dynamic word embeddings and custom sentence embeddings. I can't find relevant examples on Google to keep learning from. I hope the developer community can share the full sample code for the Nosh app and Merch app shown in the video. Thanks, Vivek and Doug.