Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

88 Posts
Post not yet marked as solved
0 Replies
294 Views
Hi, I would like to use a recent version of JAX on my new MacBook Pro with an M1 chip, but unfortunately I have run into the issue that tensorflow_macos seems to require numpy<=1.19 while JAX requires numpy>=1.20. Is there any way to manually compile tensorflow_macos against a newer numpy version? Or are you planning to release an update any time soon? Thanks!
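As a side note, the two pins as reported are mutually exclusive, which is why pip cannot resolve them. A small stdlib sketch makes that concrete (the version strings are taken from the post; the helper names are made up for illustration):

```python
# Illustration only: no numpy version can satisfy both pins reported
# in the post (tensorflow_macos wants <=1.19, JAX wants >=1.20).
def as_tuple(version):
    return tuple(int(part) for part in version.split("."))

def satisfies_both(version):
    v = as_tuple(version)
    return v <= (1, 19) and v >= (1, 20)  # disjoint ranges

candidates = ["1.19", "1.19.5", "1.20", "1.21.1"]
print([v for v in candidates if satisfies_both(v)])  # prints []
```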
Posted by bhahn. Last updated.
Post not yet marked as solved
2 Replies
429 Views
Being brand new to Create ML, I tried to run my own ML project. When creating my own image classifier (same with tabular classification), I fail from the start: when I select valid training data, Create ML says "Data Analysis stopped". I'm using Create ML Version 3.0 (78.7). Any suggestions?
Posted by MarcoGMuc. Last updated.
Post marked as solved
3 Replies
1.1k Views
I followed this guideline to install TensorFlow: https://developer.apple.com/metal/tensorflow-plugin/ but sklearn cannot be found, so I ran conda install sklearn, and somehow the sklearn module still cannot be imported. Here is the output when I tried to import sklearn:

(base) (tensorflow-metal) a@A ~ % python
Python 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:24:02) [Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sklearn
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/__init__.py", line 82, in <module>
    from .base import clone
  File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/base.py", line 17, in <module>
    from .utils import _IS_32BIT
  File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/utils/__init__.py", line 28, in <module>
    from .fixes import np_version, parse_version
  File "/Users/a/miniforge3/lib/python3.9/site-packages/sklearn/utils/fixes.py", line 20, in <module>
    import scipy.stats
  File "/Users/a/miniforge3/lib/python3.9/site-packages/scipy/stats/__init__.py", line 441, in <module>
    from .stats import *
  File "/Users/a/miniforge3/lib/python3.9/site-packages/scipy/stats/stats.py", line 37, in <module>
    from scipy.spatial.distance import cdist
  File "/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/__init__.py", line 98, in <module>
    from .qhull import *
ImportError: dlopen(/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
  Referenced from: /Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so
  Reason: tried: '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file)

Some people said sklearn cannot be used on the M1 chip; is that right?
tensorflow-macos: 2.6.0
tensorflow-metal: 0.2.0
macOS: 12.0.1
Many thanks for any help.
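One thing worth double-checking here (an assumption about this setup, not a confirmed diagnosis): the pip/conda distribution is named scikit-learn, while the importable module is sklearn, so conda install sklearn may not install what you expect; and the dlopen failure above suggests scipy's LAPACK dylibs are missing from the environment rather than an M1 limitation. A quick stdlib check of which modules actually resolve in the active environment:

```python
# List which of the relevant modules are importable in the current
# environment; a missing scipy or sklearn here would point at the
# install step rather than at the M1 chip itself.
import importlib.util

for mod in ("numpy", "scipy", "sklearn"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod}: {'found' if spec else 'missing'}")
```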
Posted. Last updated.
Post not yet marked as solved
0 Replies
333 Views
I already installed the latest TensorFlow version using the documentation given (link). But when I try to run a notebook with the command "%tensorflow_version 2.x", it gives the error "UsageError: Line magic function %tensorflow_version not found.". Please tell me what to do.
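For what it's worth, %tensorflow_version is a line magic provided by Google Colab's runtime, not by stock IPython/Jupyter, which would explain the UsageError outside Colab. In a local notebook the installed version can be checked directly; a minimal sketch, assuming nothing beyond a standard Python environment:

```python
# %tensorflow_version is Colab-specific; locally, just import and inspect.
import importlib.util

if importlib.util.find_spec("tensorflow") is not None:
    import tensorflow as tf
    print("tensorflow", tf.__version__)
else:
    print("tensorflow is not installed in this environment")
```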
Posted by 006. Last updated.
Post not yet marked as solved
1 Reply
379 Views
I'm getting an error very early in the process, and these tutorials seem very simple, so I'm stumped. This tutorial seems straightforward, but I can't make it past the step where I drag the image sets in. https://developer.apple.com/documentation/createml/creating_an_image_classifier_model video tutorial: https://www.youtube.com/watch?v=DSOknwpCnJ4 I have one folder titled "Training Data" with two sub-folders, "img1" and "img2". When I drag my "Training Data" folder into the Training Data section, I get the error: "No training data found. 0 invalid files found." I have no idea what is causing this. The images are .jpg files taken from my phone, and I only have 6 total images in this initial test. I've tried it with and without an annotations.json file created in COCO Annotator; that didn't make a difference, same error with or without. Big Sur 11.5.2, Create ML 3.0.
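Create ML's image classifier expects the dragged folder to contain one sub-folder per class label, with the image files directly inside those sub-folders. A small stdlib sketch to sanity-check that layout before dragging it in (the folder names mirror the post; the helper function itself is hypothetical):

```python
# Count image files per class sub-folder; zero counts, or images sitting
# at the top level instead of inside class folders, would explain
# "No training data found".
from pathlib import Path

def class_image_counts(root, exts=(".jpg", ".jpeg", ".png")):
    root = Path(root)
    return {
        d.name: sum(1 for f in d.iterdir() if f.suffix.lower() in exts)
        for d in root.iterdir() if d.is_dir()
    }

# Example: class_image_counts("Training Data")
# might return {"img1": 3, "img2": 3} for the layout in the post.
```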
Posted. Last updated.
Post not yet marked as solved
1 Reply
385 Views
Question 1: I'm trying to follow along with the code from WWDC20-10657, but my Xcode won't recognise MLWordEmbedding. I am importing Natural Language, CoreML, and CreateML. Question 2: More generally, I have not grasped how an .mlmodel (which I built in Playgrounds from my domain-specific text corpus) can be easily converted into a custom sentence embedding. Right now I have 'something' that I can use, in that I brute-force unpacked the .mlmodel into a [key: [vector]] dictionary, which I am now trying to reformat as a custom embedding, but the video implied that the .mlmodel could be used more or less directly.
Posted by ERoberts. Last updated.
Post marked as solved
1 Reply
454 Views
I'm working with a style transfer model trained with PyTorch in Google Colaboratory and then converted to an ML package. When I bring it into Xcode and try to preview the asset, I see the following error: "There was a problem decoding this Core ML document: missingMetadataField(named: "inputSchema")". I've been able to train and convert models as .mlmodel files; I'm only seeing this issue with .mlpackage files. I'm using Xcode 13 beta, which as far as I know is the only version of Xcode that can handle ML packages/programs at the moment, and I'm using the coremltools beta to handle the conversion. Prior to the conversion, or if I convert to an ML model instead, it seems to work just fine. Is this a problem with how the model is being structured or converted? Is it a problem with how I've set up my Xcode environment/Swift project? Is there some way to update the metadata associated with ML packages to make sure the missing input schema is included?
Posted by TaylorHFN. Last updated.
Post marked as solved
1 Reply
280 Views
I got the following error when I added the --encrypt flag to the build phase for my .coreml model file: "coremlc: error: generate command model encryption is not supported on the specific deployment target macos". Any insights would be appreciated. Thanks.
Posted by Brianyan. Last updated.
Post not yet marked as solved
0 Replies
279 Views
I wish there were a tool to create a Memoji from a photo using AI 📸➡️👨 It's a pity there are no tools for artists.
Posted by Lebizhor. Last updated.
Post not yet marked as solved
2 Replies
649 Views
I would like to generate and run an ML program inside an app. I've become familiar with coremltools and the MIL format; however, I can't seem to find any resources on how to generate mlmodel/mlpackage files using Swift on the device. Is there any Swift equivalent of coremltools? Or is there a way to translate the MIL description of an ML program into an instance of MLModel? Or something similar.
Posted by mlajtos. Last updated.
Post not yet marked as solved
2 Replies
536 Views
Is there any way to set the number of threads used during Core ML inference? My model is relatively small, and the overhead of launching new threads is too expensive. When using the TensorFlow C API, forcing single-threaded execution results in a significant decrease in CPU usage. (So far, Core ML with multiple threads has three times the CPU usage compared to TensorFlow with a single thread.) Also, I'm wondering if anyone has compared the performance of TensorFlow in C against Core ML.
Posted by Brianyan. Last updated.
Post not yet marked as solved
1 Reply
587 Views
I have a MacBook Pro M1 (16 GB RAM) and am testing Create ML's style transfer model training. When I press "Train", it starts processing and fails with the error "Could not create buffer with format BGRA -6662". During the processing it allocates about 4.5 GB of RAM. I guess it runs out of memory; however, I've closed all other programs, and I can see that there's a lot of free RAM when it fails. It happens even if I use just 3 small (500x500) images, one each for training, content, and validation. So, how do I fix it?
Posted by int_32. Last updated.
Post not yet marked as solved
0 Replies
406 Views
In the past we tested face anti-spoofing on iOS 13 and iOS 12 with the iPhone 6, 6s, and 10, and it was working. However, with iOS 14 we have found that the input from the camera is not working with face anti-spoofing: the image taken from the camera produces poor scores on whether the face in the image is a real person. The machine learning model works by reading the pixels and checking for many things, including the depth of the face, the background of the head, and whether there appears to be image manipulation in the pixels. We are very confident we have not changed our app in any way, so we are asking whether any changes were made to the iOS 14 camera that affected the image being output to public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection). Currently, the model works great on Android phones.
Posted. Last updated.
Post not yet marked as solved
0 Replies
397 Views
I was trying to test out ResNet on my new M1 MacBook Pro, with Apple's new tensorflow version 2.4.0-rc0 and numpy version 1.21.1, using the following code:

import tensorflow as tf
from tensorflow import keras
import numpy as np
from sklearn.datasets import load_sample_image

model = keras.applications.resnet50.ResNet50(weights="imagenet")
china = load_sample_image("china.jpg") / 255
flower = load_sample_image("flower.jpg") / 255
images = np.array([china, flower])
images_resized = tf.image.resize(images, [224, 224])
inputs = keras.applications.resnet50.preprocess_input(images_resized * 255)
y_proba = model.predict(inputs)

which gives the following error:

*** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[__NSArrayM objectAtIndexedSubscript:]: index 0 beyond bounds for empty array'
*** First throw call stack:
(
0   CoreFoundation                  0x000000018cea8c78 __exceptionPreprocess + 240
1   libobjc.A.dylib                 0x000000018cbd10a8 objc_exception_throw + 60
2   CoreFoundation                  0x000000018cf73b68 -[__NSCFString characterAtIndex:].cold.1 + 0
3   CoreFoundation                  0x000000018ce16ac8 -[__NSArrayM objectAtIndexedSubscript:] + 188
4   MLCompute                       0x00000001962f06a0 -[MLCDeviceCPU(MLCLayerOperations) updateTensorsForFusedPaddingAndConvolutionLayer:layerNext:] + 276
5   MLCompute                       0x00000001962f0e5c -[MLCDeviceCPU(MLCLayerOperations) fuseLayersForGraph:stopGradientTensorList:startAtLayerIndex:forInference:] + 1264
6   MLCompute                       0x0000000196352f68 -[MLCInferenceGraph compileWithOptions:device:inputTensors:inputTensorsData:] + 1868
7   _pywrap_tensorflow_internal.so  0x0000000144e16848 _ZN10tensorflow9mlcompute7convert26MLCGraphConversionPassImpl15ConvertSubgraphEPNS_15OpKernelContextEPNS1_11TFGraphInfoEPKNS_5GraphERKNSt3__16vectorINSA_12basic_stringIcNSA_11char_traitsIcEENSA_9allocatorIcEEEENSF_ISH_EEEERKNSB_IiNSF_IiEEEEPNS1_24MLCSubgraphConvertResultE + 3516
8   _pywrap_tensorflow_internal.so  0x0000000144df8498 _ZN10tensorflow9mlcompute7kernels13MLCSubgraphOp20ProcessMLCSubgraphOpEPNS_15OpKernelContextEPPNS1_10MLCContextEPPNS1_15TFContextStatusE + 416
9   _pywrap_tensorflow_internal.so  0x0000000144dfb1c0 _ZN10tensorflow9mlcompute7kernels13MLCSubgraphOp7ComputeEPNS_15OpKernelContextE + 804
10  libtensorflow_framework.2.dylib 0x00000001587a7598 _ZN10tensorflow12_GLOBAL__N_113ExecutorStateINS_21SimplePropagatorStateEE7ProcessENS2_10TaggedNodeEx + 2772
11  libtensorflow_framework.2.dylib 0x000000015881a50c _ZN5Eigen15ThreadPoolTemplIN10tensorflow6thread16EigenEnvironmentEE10WorkerLoopEi + 552
12  libtensorflow_framework.2.dylib 0x000000015881a1e4 _ZZN10tensorflow6thread16EigenEnvironment12CreateThreadENSt3__18functionIFvvEEEENKUlvE_clEv + 80
13  libtensorflow_framework.2.dylib 0x000000015880bacc _ZN10tensorflow12_GLOBAL__N_17PThread8ThreadFnEPv + 104
14  libsystem_pthread.dylib         0x000000018cd2b878 _pthread_start + 320
15  libsystem_pthread.dylib         0x000000018cd265e0 thread_start + 8
)
libc++abi: terminating with uncaught exception of type NSException

Please let me know what's up; it seems that none of the other questions with the same NSRangeException concern Keras, and the same issue on GitHub was closed. I am new to this, so any help would be greatly appreciated!
Posted by EW0824. Last updated.
Post not yet marked as solved
2 Replies
608 Views
During the launch of the M1, Apple said clearly that the new M1 has superior performance for running TensorFlow. On the ground, it's not working as expected: their fork has tons of issues and no continuous updates. On the other side, the TensorFlow team has said clearly that they are not required to attend to ARM64 support. I end up with no support from either side: Apple is not attending to issues in their TensorFlow fork, and Google is not willing to support ARM64. Is there any advice for that?
Posted by mAbidou. Last updated.
Post not yet marked as solved
1 Reply
377 Views
I am trying to build an app that uses Core ML. However, I would like the data that was used to build the model to grow, and the model to predict taking that growth into account. So, at the end of the day, the more the user uses the app, the smarter the app gets at predicting what the user will select next. For example: if the user is presented with a variety of clothes to choose from and selects pants, the app will present a list of colors to choose from; let's say the user chooses blue. The next time the user chooses pants, the blue color is ranked higher than it was the previous time. Is this possible to do? And how do I make selection updates? Thanks in advance for any ideas or suggestions.
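The ranking behaviour described here can be approximated without retraining a model at all. Below is a plain-Python sketch of the idea (the class and method names are hypothetical, not a Core ML API): record each selection and rank options by how often the user picked them before. On-device Core ML personalization proper would instead use an updatable model, but simple counting is often enough for this kind of re-ranking.

```python
# Sketch: rank options for an item by past selection counts.
from collections import Counter

class PreferenceRanker:
    def __init__(self):
        self.counts = Counter()

    def record(self, item, choice):
        # Called each time the user makes a selection.
        self.counts[(item, choice)] += 1

    def rank(self, item, options):
        # Most-chosen options first; ties keep their original order.
        return sorted(options, key=lambda c: -self.counts[(item, c)])

ranker = PreferenceRanker()
ranker.record("pants", "blue")
print(ranker.rank("pants", ["red", "blue", "green"]))  # blue ranks first
```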
Posted by iakar. Last updated.