Post not yet marked as solved
Hey guys,
Just got a brand new M1 MacBook and started setting up all my developer tools. However, I'm stuck trying to set up the TensorFlow Object Detection API. I'm having an issue installing from the 'setup.py' file, as I keep getting dependency issues. For reference, I'll leave a link to a guide that shows pretty much all the steps I'm taking (the one I followed when I first started): https://github.com/nicknochnack/TFODCourse/blob/main/2.%20Training%20and%20Detection.ipynb.
Where I'm getting stuck is the 3rd part of section 1. I get dependency conflicts from tensorflow-text (tf-models-official 2.6.0 depends on tensorflow-text>=2.5.0) and tensorflow-addons (tf-models-official 2.5.1 depends on tensorflow-addons). I looked up whether anyone else was having the same problem, and sure enough I found a couple of issues on GitHub, mainly this one directly on the tensorflow/addons repo: https://github.com/tensorflow/addons/issues/2503.
The proposed solution was to follow the steps in this PR (https://github.com/tensorflow/addons/pull/2504, linked at the bottom of the issue), which seemed to work for some of the members. However, when I go to bazel build, I encounter two main errors. The first is a Symbol not found: _TF_AllocateOutput, Expected in: flat namespace error, which I believe leads to bazel not building correctly (which is the second error).
If anybody has any idea how to get TFOD set up or how to fix any of these issues, thanks in advance! Even some pointers or ideas are welcome, as I'm kind of at a dead end for using TFOD natively on my Mac.
I am trying to build an app that uses Core ML. However, I would like the data that was used to build the model to grow, and the model to predict taking that growth into account. So, at the end of the day, the more the user uses the app, the smarter the app gets at predicting what the user will select next.
For example:
If the user is presented with a variety of clothes to choose from and selects pants, the app will present a list of colors to choose from. Let's say the user chooses blue; the next time the user chooses pants, blue is ranked higher than it was the previous time. Is this possible to do? And how do I make selection updates?
Thanks in advance for any ideas or suggestions.
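For what it's worth, Core ML 3 did add on-device model updates (MLUpdateTask) for certain model types, which sounds like what this is asking for. Below is a minimal plain-Python sketch of just the ranking idea - count past selections and sort by frequency - with hypothetical item/color names; it is not Core ML API, just the logic the app would need somewhere:

```python
# Hypothetical sketch: rank colors for an item by how often the user
# picked them before. The names ("pants", "blue") are illustrative only.
from collections import defaultdict

class SelectionRanker:
    def __init__(self, colors):
        self.colors = list(colors)
        # counts[item][color] = number of times the user picked that pair
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, item, color):
        """Call after every user selection so future rankings adapt."""
        self.counts[item][color] += 1

    def ranked_colors(self, item):
        """Colors sorted by past popularity for this item (stable for ties)."""
        return sorted(self.colors, key=lambda c: -self.counts[item][c])

ranker = SelectionRanker(["red", "green", "blue"])
ranker.record("pants", "blue")
print(ranker.ranked_colors("pants"))  # "blue" now ranks first
```

The same counts could equally be persisted and fed back into an updatable model as training examples.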
I was trying to test out ResNet on my new M1 MacBook Pro - with Apple's new TensorFlow version 2.4.0-rc0 and NumPy version 1.21.1 - with the following code:
import tensorflow as tf
from tensorflow import keras
import numpy as np
from sklearn.datasets import load_sample_image
model = keras.applications.resnet50.ResNet50(weights="imagenet")
china = load_sample_image("china.jpg") / 255
flower = load_sample_image("flower.jpg") / 255
images = np.array([china, flower])
images_resized = tf.image.resize(images, [224, 224])
inputs = keras.applications.resnet50.preprocess_input(images_resized * 255)
y_proba = model.predict(inputs)
Which gives the following error:
*** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[__NSArrayM objectAtIndexedSubscript:]: index 0 beyond bounds for empty array'
*** First throw call stack:
(
0 CoreFoundation 0x000000018cea8c78 __exceptionPreprocess + 240
1 libobjc.A.dylib 0x000000018cbd10a8 objc_exception_throw + 60
2 CoreFoundation 0x000000018cf73b68 -[__NSCFString characterAtIndex:].cold.1 + 0
3 CoreFoundation 0x000000018ce16ac8 -[__NSArrayM objectAtIndexedSubscript:] + 188
4 MLCompute 0x00000001962f06a0 -[MLCDeviceCPU(MLCLayerOperations) updateTensorsForFusedPaddingAndConvolutionLayer:layerNext:] + 276
5 MLCompute 0x00000001962f0e5c -[MLCDeviceCPU(MLCLayerOperations) fuseLayersForGraph:stopGradientTensorList:startAtLayerIndex:forInference:] + 1264
6 MLCompute 0x0000000196352f68 -[MLCInferenceGraph compileWithOptions:device:inputTensors:inputTensorsData:] + 1868
7 _pywrap_tensorflow_internal.so 0x0000000144e16848 _ZN10tensorflow9mlcompute7convert26MLCGraphConversionPassImpl15ConvertSubgraphEPNS_15OpKernelContextEPNS1_11TFGraphInfoEPKNS_5GraphERKNSt3__16vectorINSA_12basic_stringIcNSA_11char_traitsIcEENSA_9allocatorIcEEEENSF_ISH_EEEERKNSB_IiNSF_IiEEEEPNS1_24MLCSubgraphConvertResultE + 3516
8 _pywrap_tensorflow_internal.so 0x0000000144df8498 _ZN10tensorflow9mlcompute7kernels13MLCSubgraphOp20ProcessMLCSubgraphOpEPNS_15OpKernelContextEPPNS1_10MLCContextEPPNS1_15TFContextStatusE + 416
9 _pywrap_tensorflow_internal.so 0x0000000144dfb1c0 _ZN10tensorflow9mlcompute7kernels13MLCSubgraphOp7ComputeEPNS_15OpKernelContextE + 804
10 libtensorflow_framework.2.dylib 0x00000001587a7598 _ZN10tensorflow12_GLOBAL__N_113ExecutorStateINS_21SimplePropagatorStateEE7ProcessENS2_10TaggedNodeEx + 2772
11 libtensorflow_framework.2.dylib 0x000000015881a50c _ZN5Eigen15ThreadPoolTemplIN10tensorflow6thread16EigenEnvironmentEE10WorkerLoopEi + 552
12 libtensorflow_framework.2.dylib 0x000000015881a1e4 _ZZN10tensorflow6thread16EigenEnvironment12CreateThreadENSt3__18functionIFvvEEEENKUlvE_clEv + 80
13 libtensorflow_framework.2.dylib 0x000000015880bacc _ZN10tensorflow12_GLOBAL__N_17PThread8ThreadFnEPv + 104
14 libsystem_pthread.dylib 0x000000018cd2b878 _pthread_start + 320
15 libsystem_pthread.dylib 0x000000018cd265e0 thread_start + 8
)
libc++abi: terminating with uncaught exception of type NSException
Please let me know what's up - it seems that none of the other questions with the same NSRangeException concern Keras. The same issue on GitHub was closed.
I am new to this so any help would be greatly appreciated!
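As a side note, the input pipeline's shape math can be sanity-checked with plain NumPy, independent of TensorFlow/MLCompute, to rule out a malformed input batch. Nearest-neighbor resizing here is an assumption made purely for the shape check; the 427x640x3 shape matches scikit-learn's sample images:

```python
# Sketch: verify the batch reaching ResNet50 has shape (batch, 224, 224, 3),
# without touching TensorFlow/MLCompute at all.
import numpy as np

def nn_resize(img, h, w):
    # Simple nearest-neighbor resize via index arithmetic (assumption,
    # only for checking shapes - not what tf.image.resize does exactly).
    rows = (np.arange(h) * img.shape[0] / h).astype(int)
    cols = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[rows][:, cols]

china = np.random.rand(427, 640, 3)   # stand-in for load_sample_image output
flower = np.random.rand(427, 640, 3)
batch = np.array([nn_resize(im, 224, 224) for im in (china, flower)])
print(batch.shape)  # (2, 224, 224, 3)
```

If this shape checks out, the crash is more likely inside the MLCompute graph compilation than in the input data.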
Is it possible to use Core ML to implement multi-label text classification?
In the past we tested iOS 13 and iOS 12 on iPhone 6, 6s, and 10 with our face anti-spoofing, and it was working. However, with iOS 14, we have learned that the input from the camera is not working with the face anti-spoofing: the image taken from the camera produces poor scores on whether the face (in the image) is a real person. The machine learning model works by reading the pixels and checking for many things, including the depth of the face, the background of the head, and whether there appears to be image manipulation in the pixels. We are very confident we have not changed our app in any way, so we are asking whether any changes were made to the iOS 14 camera that affected the image being output to public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection). Currently, the model works great on Android phones.
I would like to generate and run an ML program inside an app.
I got familiar with coremltools and the MIL format; however, I can't seem to find any resources on how to generate mlmodel/mlpackage files using Swift on the device.
Is there any Swift equivalent of coremltools? Or is there a way to translate a MIL description of an ML program into an instance of MLModel? Or something similar.
Where can I download the code for wwdc21-10041?
I wish there was a tool to create a Memoji from a photo using AI
📸➡️👨
It is a pity there are no tools for artists
I'm working with a style transfer model trained with PyTorch in Google Colaboratory and then converted to an ML package. When I bring it into Xcode and try to preview the asset, I see the following error.
There was a problem decoding this Core ML document
missingMetadataField(named: "inputSchema")
I've been able to train and convert models as .mlmodel files; I'm only seeing this issue with .mlpackage files.
I'm using Xcode 13 beta, which as far as I know is the only version of Xcode that can handle ML packages/programs at the moment, and I'm using the coremltools beta to handle the conversion. Prior to the conversion, or if I convert to an ML model instead, it seems to work just fine.
Is this a problem with how the model is being structured or converted? Is this a problem with how I've set up my Xcode environment/Swift project? Is there some way to update the metadata associated with ML packages to make sure the missing input schema is included?
I got the following error when I added the --encrypt flag to the build phase for my Core ML model file.
coremlc: error: generate command model encryption is not supported on the specific deployment target macos.
Any insights would be appreciated. Thanks.
I already installed the latest TensorFlow version using the documentation given (link). But when I tried to run a notebook with the command "%tensorflow_version 2.x", it's giving the error "UsageError: Line magic function %tensorflow_version not found.". Please tell me what to do.
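As far as I know, %tensorflow_version is a line magic that only exists in Google Colab, so it won't be found in a local notebook. A guarded sketch of the standard way to check the installed version instead (guarded because TensorFlow may not be importable in every environment):

```python
# %tensorflow_version is Colab-specific; locally, query the package itself.
try:
    import tensorflow as tf
    version = tf.__version__
except ImportError:
    version = None  # TensorFlow is not importable in this environment
print(version)
```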
Question 1: I'm trying to follow along with the code from WWDC20-10657, but my Xcode won't recognise MLWordEmbedding. I am importing Natural Language, CoreML, and CreateML.
Question 2: More generally - I have not grasped how an .mlmodel (which I built in Playgrounds from my domain-specific text corpus) can be easily converted into a custom sentence embedding.
Right now I have 'something' that I can use, in that I brute-force unpacked the .mlmodel into a [key: [vector]] dictionary, which I am now trying to reformat as a custom embedding - but the video implied that the .mlmodel could be used more or less directly.
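For reference, here is a plain-Python sketch of the main thing a custom embedding built from such a [key: [vector]] dictionary provides - cosine-similarity nearest neighbors. The words and vectors are made up for illustration; this is not the NLEmbedding API:

```python
import math

# Hypothetical [key: [vector]] dictionary unpacked from an .mlmodel.
embedding = {
    "pants": [1.0, 0.0, 0.1],
    "jeans": [0.9, 0.1, 0.0],
    "apple": [0.0, 1.0, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neighbors(word, k=2):
    """k nearest entries by cosine similarity, excluding the word itself."""
    v = embedding[word]
    others = [(w, cosine(v, u)) for w, u in embedding.items() if w != word]
    return sorted(others, key=lambda t: -t[1])[:k]

print(neighbors("pants", k=1))  # "jeans" is the closest vector
```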
I'm getting an error very early in the process, and these tutorials seem very simple, so I'm stumped.
This tutorial seems straightforward, but I can't make it past the step where I drag image sets in.
https://developer.apple.com/documentation/createml/creating_an_image_classifier_model
video tutorial: https://www.youtube.com/watch?v=DSOknwpCnJ4
I have 1 folder titled "Training Data" with 2 sub-folders, "img1" and "img2". When I drag my "Training Data" folder into the Training Data section, I get the error: "No training data found. 0 invalid files found."
I have no idea what is causing this. The images are .jpg and taken from my phone. I only have 6 total images in this initial test. I've tried it with and without an annotations.json file created in COCO Annotator; that didn't make a difference - the same error appears with or without it.
Big Sur 11.5.2
Create ML 3.0
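For context, Create ML's image classifier expects the dragged folder to contain one sub-folder per class label, with the image files directly inside those sub-folders (no annotations file is needed for plain classification). A small stdlib sketch that builds and validates that layout, using the "img1"/"img2" labels from the post (the file names are hypothetical):

```python
# Build and check the folder layout Create ML's image classifier expects:
#   Training Data/<label>/<image files>
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "Training Data"
for label in ("img1", "img2"):
    (root / label).mkdir(parents=True)
    (root / label / "photo.jpg").write_bytes(b"\xff\xd8\xff")  # stub JPEG bytes

def class_counts(folder):
    """Map each label sub-folder to how many .jpg files it holds."""
    return {d.name: len(list(d.glob("*.jpg")))
            for d in sorted(folder.iterdir()) if d.is_dir()}

print(class_counts(root))  # {'img1': 1, 'img2': 1}
```

If the real folder passes a check like this and Create ML still reports no training data, the images themselves (format, permissions, iCloud placeholders) would be my next suspect.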
I just created my custom hand pose classification ML model, but I don't know how to implement it in any app. So where can I get the demo app that they showed us in the video?
Device: MacBook Pro 16 M1 Max, 64GB running MacOS 12.0.1.
I tried setting up GPU Accelerated TensorFlow on my Mac using the following steps:
Setup: XCode CLI / Homebrew/ Miniforge
Conda Env: Python 3.9.5
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
brew install libjpeg
conda install -y matplotlib jupyterlab
In Jupyter Lab, I try to execute this code:
from tensorflow.keras import layers
from tensorflow.keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
The code executes, but I get this warning, indicating no GPU acceleration can be used, as it defaults to a 0MB GPU.
Error:
Metal device set to: Apple M1 Max
2021-10-27 08:23:32.872480: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-10-27 08:23:32.872707: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Does anyone have any idea how to fix this? I came across a bunch of posts around here related to the same issue, but with no solid fix. I created a new question because I found the other questions less descriptive of the issue and wanted to depict it comprehensively. Any fix would be of much help.
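One note, hedged: the "0 MB memory" figure is how the tensorflow-metal plugin's log line reports the device, and it does not by itself mean the GPU is unused. A guarded sketch for checking which devices TensorFlow actually registered (guarded because TensorFlow may not be importable in every environment):

```python
# List the devices TensorFlow registered; with tensorflow-metal working,
# a "GPU" entry should appear even though its log line says "0 MB memory".
try:
    import tensorflow as tf
    gpus = [d.name for d in tf.config.list_physical_devices("GPU")]
except ImportError:
    gpus = None  # TensorFlow is not importable in this environment
print(gpus)
```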
I followed this guideline to install TensorFlow: https://developer.apple.com/metal/tensorflow-plugin/
but sklearn could not be found, so I used conda install sklearn, and then somehow the sklearn module still cannot be imported.
Here is the outputs when I tried to import sklearn:
(base) (tensorflow-metal) a@A ~ % python
Python 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:24:02)
[Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sklearn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/__init__.py", line 82, in <module>
from .base import clone
File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/base.py", line 17, in <module>
from .utils import _IS_32BIT
File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/utils/__init__.py", line 28, in <module>
from .fixes import np_version, parse_version
File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/utils/fixes.py", line 20, in <module>
import scipy.stats
File "/Users/a/miniforge3/lib/python3.9/site-packages/scipy/stats/__init__.py", line 441, in <module>
from .stats import *
File "/Users/a/miniforge3/lib/python3.9/site-packages/scipy/stats/stats.py", line 37, in <module>
from scipy.spatial.distance import cdist
File "/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/__init__.py", line 98, in <module>
from .qhull import *
ImportError: dlopen(/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: /Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so
Reason: tried: '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file)
>>>
Some people said sklearn cannot be used on the M1 chip - is that right?
tensorflow-macos: 2.6.0
tensorflow-metal: 0.2.0
macOS: 12.0.1
Many thanks for any help.
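The dlopen error helpfully lists every path it tried for liblapack.3.dylib, so one can check those candidates directly. A small stdlib sketch (the paths are the ones from the traceback above; adjust them for your own environment):

```python
# Check which candidate library paths from a dlopen "tried:" list exist.
import os

candidates = [
    "/Users/a/miniforge3/lib/liblapack.3.dylib",
    "/usr/local/lib/liblapack.3.dylib",
    "/usr/lib/liblapack.3.dylib",
]

def missing(paths):
    """Return the candidate library paths that do not exist on disk."""
    return [p for p in paths if not os.path.exists(p)]

print(missing(candidates))  # on the poster's machine, all were missing
```

If liblapack.3.dylib is genuinely absent from the conda environment's lib directory, reinstalling scipy/numpy from the same channel that built them (e.g. conda-forge) is the usual way to restore it, though that is a general conda observation rather than an M1-specific fix.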
Hi,
I installed sklearn successfully and ran the MNIST toy example successfully.
Then I started to run my project. The funny thing is that everything seemed good at the start (at least no ImportError occurred), but when I made some changes to my code and tried to run all cells again (I use JupyterLab), an ImportError occurred:
ImportError: dlopen(/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: /Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so
Reason: tried: '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file)
Then I have to uninstall scipy, sklearn, etc., and reinstall all of them before my code can be run again.
"Magically," I hate to say. Does anyone know how to permanently solve this problem and make sklearn more stable?
Hi everyone,
I found that the performance of the GPU is not as good as I expected (as slow as a turtle), so I want to switch from GPU to CPU, but the mlcompute module cannot be found, which is so weird.
The same code run on Colab and on my computer (JupyterLab) takes 156s vs. 40 minutes per epoch, respectively.
I only used a small dataset (a few thousand data points), and each epoch only has 20 batches.
I am so disappointed, and it seems like the "powerful" GPU is a joke.
I am using macOS 12.0.1, and the version of tensorflow-macos is 2.6.0.
Can anyone tell me why this happens?
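Two notes, hedged: as far as I can tell, the mlcompute module only existed in Apple's old tensorflow_macos 2.4 fork and was removed from tensorflow-macos 2.5+, which would explain why it can't be found here. And for scale, the reported gap works out to roughly a 15x slowdown:

```python
# Quick arithmetic on the reported per-epoch times: Colab vs. local.
colab_s = 156          # seconds per epoch on Colab
local_s = 40 * 60      # 40 minutes per epoch locally, in seconds
slowdown = local_s / colab_s
print(round(slowdown, 1))  # 15.4
```

With only 20 batches per epoch, per-batch GPU dispatch overhead can dominate, which is one common (though not certain) explanation for a small model running faster on CPU.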