Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

81 Posts
Post not yet marked as solved
28 Replies
12k Views
Device: MacBook Pro 16 M1 Max, 64GB, running macOS 12.0.1. I tried setting up GPU-accelerated TensorFlow on my Mac using the following steps:

Setup: Xcode CLI tools / Homebrew / Miniforge

Conda env: Python 3.9.5

conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
brew install libjpeg
conda install -y matplotlib jupyterlab

In JupyterLab, I try to execute this code:

from tensorflow.keras import layers
from tensorflow.keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()

The code executes, but I get this warning, which seems to indicate no GPU acceleration can be used, as it defaults to a 0 MB GPU:

Metal device set to: Apple M1 Max
2021-10-27 08:23:32.872480: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-10-27 08:23:32.872707: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)

Does anyone have any idea how to fix this? I came across a bunch of posts here about the same issue, but with no solid fix. I created a new question because I found the other questions less descriptive of the issue and wanted to depict it comprehensively. Any fix would be much appreciated.
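A quick sanity check, assuming the tensorflow-macos and tensorflow-metal packages were installed as above (from similar reports, the "0 MB memory" figure appears to be a logging quirk of the PluggableDevice path rather than proof the GPU is unused, but verify on your own machine):

import tensorflow as tf

# The Metal plugin should register the M1 Max GPU here; an empty
# list would mean the plugin did not load at all.
print(tf.config.list_physical_devices('GPU'))

# Optionally log which device each op actually runs on.
tf.debugging.set_log_device_placement(True)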
Posted. Last updated.
Post not yet marked as solved
1 Reply
141 Views
I am training a model using tensorflow-metal 0.5.1, and at one point the training hangs and I have to stop the kernel. It is quite a long training run. It starts fine, with all 10 available cores running and using the GPU, as shown by Activity Monitor. About 20 hours into the training, the %GPU of one of the core processes drops to zero while still showing some CPU activity. Over the following hours the same thing happens to another 4 core processes; nevertheless the training continues, as it outputs some progress. When the remaining core processes that still use the GPU terminate, the progress output stops. The core processes with no GPU activity keep running but do nothing, and the training hangs forever. The only thing left is stopping the kernel from the Jupyter notebook. Here is the code to reproduce the problem:

# imports
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization, Dropout
from sklearn.model_selection import GridSearchCV
from scikeras.wrappers import KerasClassifier
from tensorflow.keras.optimizers import SGD
import numpy

# pip install extra-keras-datasets first
from extra_keras_datasets import kmnist

# Model configuration
no_classes = 10
validation_split = 0.2

# Load KMNIST dataset
(input_train, target_train), (input_test, target_test) = kmnist.load_data(type='kmnist')

# Shape of the input sets
input_train_shape = input_train.shape
input_test_shape = input_test.shape

# Keras layer input shape
input_shape = (input_train_shape[1], input_train_shape[2], 1)

# Reshape the training data to include channels
input_train = input_train.reshape(input_train_shape[0], input_train_shape[1], input_train_shape[2], 1)
input_test = input_test.reshape(input_test_shape[0], input_test_shape[1], input_test_shape[2], 1)

# Parse numbers as floats
input_train = input_train.astype('float32')
input_test = input_test.astype('float32')

# Normalize input data
input_train = input_train / 255
input_test = input_test / 255

# Function to create model
def create_model_SGD(neurons):
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape, padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same'))
    model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Flatten())
    model.add(Dense(neurons, activation='relu'))
    model.add(Dropout(rate=0.2))
    model.add(BatchNormalization())
    model.add(Dense(neurons, activation='relu'))
    model.add(Dropout(rate=0.2))
    model.add(BatchNormalization())
    model.add(Dense(no_classes, activation='softmax'))
    # compilation of the model
    model.compile(loss=tensorflow.keras.losses.sparse_categorical_crossentropy,
                  optimizer='SGD',
                  metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
tensorflow.random.set_seed(seed)

# create model
model = KerasClassifier(model=create_model_SGD, verbose=0)

# define the grid search parameters
learn_rate = [0.001, 0.01, 0.1]
momentum = [0.0, 0.5, 0.9]
neurons = [256, 512, 1024]
batch_size = [100, 250, 350]
epochs = [10, 25, 50]

param_grid = dict(model__neurons=neurons,
                  optimizer__learning_rate=learn_rate,
                  optimizer__momentum=momentum,
                  batch_size=batch_size,
                  epochs=epochs)

grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3, verbose=3)
grid_result = grid.fit(input_train, target_train)

Test configuration: MacBook Pro M1 Max, macOS 12.5.1, tensorflow-deps 2.9, tensorflow-macos 2.9.2, tensorflow-metal 0.5.1.
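One hedged workaround sketch, assuming the hang comes from GridSearchCV's n_jobs=-1 spawning many worker processes that all contend for the single Metal GPU (a guess consistent with the per-process %GPU drops, not a confirmed diagnosis); model and param_grid are the ones defined above:

from sklearn.model_selection import GridSearchCV

# Serial search: a single process owns the GPU for the whole run.
grid = GridSearchCV(estimator=model, param_grid=param_grid,
                    n_jobs=1, cv=3, verbose=3)

# Alternatively, hide the GPU so workers stay on the CPU; this must
# run before TensorFlow executes any op in the process.
import tensorflow as tf
tf.config.set_visible_devices([], 'GPU')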
Posted by gppower. Last updated.
Post not yet marked as solved
8 Replies
17k Views
I just got my new MacBook Pro with the M1 Max chip and am setting up Python. I've tried several combinations of settings to test speed, and now I'm quite confused. First, my questions:

1. Why does Python run natively on the M1 Max greatly (~100%) slower than on my old MacBook Pro 2016 with an Intel i5?
2. On the M1 Max, why is there no significant speed difference between the native run (via Miniforge) and the run via Rosetta (via Anaconda), which is supposed to be ~20% slower?
3. On the M1 Max with a native run, why is there no significant speed difference between conda-installed NumPy and TensorFlow-installed NumPy, which is supposed to be faster?
4. On the M1 Max, why is running in the PyCharm IDE consistently ~20% slower than running from the terminal? This doesn't happen on my old Intel Mac.

Evidence supporting my questions follows. Here are the settings I've tried:

1. Python installed by:
   Miniforge-arm64, so that Python runs natively on the M1 Max chip (in Activity Monitor, the Kind of the python process is Apple).
   Anaconda: Python then runs via Rosetta (in Activity Monitor, the Kind of the python process is Intel).
2. NumPy installed by:
   conda install numpy: NumPy from the original conda-forge channel, or pre-installed with Anaconda.
   Apple TensorFlow: with Python installed by Miniforge, I install TensorFlow directly, and NumPy is installed along with it. NumPy installed this way is said to be optimized for Apple Silicon and faster. Here are the installation commands:

conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal

3. Run from: Terminal, or PyCharm (Apple Silicon version).

Here is the test code:

import time
import numpy as np

np.random.seed(42)
a = np.random.uniform(size=(300, 300))
runtimes = 10

timecosts = []
for _ in range(runtimes):
    s_time = time.time()
    for i in range(100):
        a += 1
        np.linalg.svd(a)
    timecosts.append(time.time() - s_time)

print(f'mean of {runtimes} runs: {np.mean(timecosts):.5f}s')

And here are the results (mean runtime in seconds):

+-----------------------------------+-----------------------+--------------------+
| Python installed by (run on) →    | Miniforge (native M1) | Anaconda (Rosetta) |
+----------------------+------------+------------+----------+----------+---------+
| NumPy installed by ↓ | Run from → | Terminal   | PyCharm  | Terminal | PyCharm |
+----------------------+------------+------------+----------+----------+---------+
| Apple TensorFlow                  | 4.19151    | 4.86248  | /        | /       |
+-----------------------------------+------------+----------+----------+---------+
| conda install numpy               | 4.29386    | 4.98370  | 4.10029  | 4.99271 |
+-----------------------------------+------------+----------+----------+---------+

This is quite slow. For comparison, the same code on my old MacBook Pro 2016 with the i5 chip costs 2.39917 s. Another post reports that on an M1 chip (not Pro or Max), miniforge + conda-installed NumPy takes 2.53214 s, and miniforge + Apple TensorFlow NumPy takes 1.00613 s. You may also try it on your own machine. Here are the CPU details:

My old i5:

$ sysctl -a | grep -e brand_string -e cpu.core_count
machdep.cpu.brand_string: Intel(R) Core(TM) i5-6360U CPU @ 2.00GHz
machdep.cpu.core_count: 2

My new M1 Max:

% sysctl -a | grep -e brand_string -e cpu.core_count
machdep.cpu.brand_string: Apple M1 Max
machdep.cpu.core_count: 10

I followed the instructions strictly from tutorials, so why does all of this happen? Is it because of flaws in my installation, or because of the M1 Max chip? Since my work relies heavily on local runs, local speed is very important to me.
Any suggestions for a possible solution, or any data points from your own device, would be greatly appreciated :)
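A diagnostic sketch for question 3: np.linalg.svd spends nearly all its time in BLAS/LAPACK, so whether the NumPy build links against Apple's Accelerate framework or a generic OpenBLAS likely dominates this benchmark (an assumption worth checking before blaming the chip):

import numpy as np

# Prints the BLAS/LAPACK libraries this NumPy build was linked with.
# An Apple-optimized build typically mentions 'accelerate' or 'vecLib';
# a generic conda-forge build usually mentions 'openblas'.
np.show_config()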
Posted. Last updated.
Post not yet marked as solved
7 Replies
2.8k Views
Hi everyone, I found that the GPU performance is not as good as I expected (as slow as a turtle), so I want to switch from the GPU to the CPU, but the mlcompute module cannot be found, which is so weird. The same code takes 156 s per epoch on Colab versus 40 minutes per epoch on my computer (JupyterLab). I only used a small dataset (a few thousand data points), and each epoch has only 20 batches. I am so disappointed; it seems like the "powerful" GPU is a joke. I am using macOS 12.0.1 and tensorflow-macos 2.6.0. Can anyone tell me why this happens?
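For what it's worth, the mlcompute module belonged to the earlier pre-release tensorflow_macos fork and, as far as I can tell, does not exist in the tensorflow-macos + tensorflow-metal plugin setup. A minimal sketch of forcing CPU execution there, assuming it runs before any model is built:

import tensorflow as tf

# Hide the Metal GPU; all subsequent ops fall back to the CPU.
tf.config.set_visible_devices([], 'GPU')

print(tf.config.get_visible_devices())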
Posted. Last updated.
Post not yet marked as solved
1 Reply
220 Views
Hello, I want to fine-tune a Core ML model with multiple binary outputs on device, so I would need multiple loss functions. If I try to compile such a model, I get this error:

Error Domain=com.apple.CoreML Code=3 "Error reading protobuf spec. validator error: This model has more than one loss layers specified, which is not supported at the moment." UserInfo={NSLocalizedDescription=Error reading protobuf spec. validator error: This model has more than one loss layers specified, which is not supported at the moment.}

Is it somehow possible to train an updatable Core ML model with multiple outputs on the device itself? I would really appreciate any help. Thank you!
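A small diagnostic sketch with coremltools, assuming the updatable spec was built in Python and is a plain neuralNetwork model (not a pipeline). The validator message suggests it accepts exactly one entry in the lossLayers list, so merging the binary heads into one output trained with a single loss may be the only on-device option for now; that is my reading of the error text, not a documented statement:

import coremltools as ct

spec = ct.utils.load_spec('MyUpdatableModel.mlmodel')  # hypothetical filename
params = spec.neuralNetwork.updateParams

# The compile error fires when this list has more than one entry.
print(f'loss layers: {len(params.lossLayers)}')
for loss in params.lossLayers:
    print(loss.name)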
Posted by sthuettr. Last updated.
Post not yet marked as solved
0 Replies
150 Views
Hi, I am currently doing multiprocessing in Python and need to upgrade my laptop. The scientific computing I am doing usually takes a while, so I'd like to have as many cores as I can get. The question: how, if possible, might I use the NPU instead of (or in tandem with) the CPU for multiprocessing? In Python it is fairly straightforward with the concurrent.futures package: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
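For the CPU side, a minimal ProcessPoolExecutor sketch; as far as I know the Neural Engine is not directly addressable from Python (Core ML decides internally whether a model runs on CPU, GPU, or ANE), so general-purpose multiprocessing stays on the CPU cores:

import os
from concurrent.futures import ProcessPoolExecutor

def simulate(n):
    # Stand-in for a CPU-bound scientific kernel.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    # One worker per core, matching the 10-core M1 Max use case.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(simulate, [10**6] * 8))
    print(results[:2])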
Posted by ThomK. Last updated.
Post not yet marked as solved
1 Reply
188 Views
I'm looking for a way to easily (or more easily than rewriting a time-series data framework) deal with stock market data. I apparently need to preprocess much of the data I get from typical APIs (Finnhub.io, AlphaVantage.co) to remove the weekend days from the datasets. Problem: when using the awesome new Charts framework to plot prices by daily close price, I get weekends and holidays in my charts. No "real" stock charting tool does this; they somehow remove the non-market days from their charts. How? While researching I found the Python pandas library for time-series data. Can Apple's TabularData do this time-series manipulation for me? Can you share an example? Thanks! David
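I can't speak to TabularData, but for reference, a pandas sketch of the usual trick with hypothetical data: drop the weekend rows (exchange holidays would additionally need a market calendar), then plot against an ordinal position rather than a date axis so the removed days leave no gaps:

import pandas as pd

# Hypothetical daily close prices keyed by calendar date.
df = pd.DataFrame(
    {'close': [100.0, 101.2, 99.8, 102.5, 103.0, 102.2, 104.1]},
    index=pd.date_range('2022-06-06', periods=7, freq='D'),
)

# Keep Monday (0) through Friday (4) only.
trading_days = df[df.index.dayofweek < 5]
print(trading_days)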
Posted. Last updated.
Post not yet marked as solved
0 Replies
155 Views
We're well into COVID times now, so building vision apps involving people wearing masks should be expected. Vision's face rectangle detector works perfectly fine on faces with masks, but that's not the case for face landmarks. Even when someone is wearing a mask, a lot of landmarks are still exposed (e.g., pupils, eyes, nose, eyebrows, etc.). When can we expect face landmark detection to work on faces with masks?
Posted by kaccie14. Last updated.
Post not yet marked as solved
5 Replies
752 Views
Good morning, I'm not sure whether I am alone, but BERTSQUAD seems to no longer work (https://developer.apple.com/machine-learning/models/#text) since the iOS 15.4 update. I tried different configurations and the basic example model, and it does not work at all. Do you also have this issue? If yes, is there a workaround to make it work with the iOS update? Thank you in advance for your help.
Posted by Alexis-M. Last updated.
Post not yet marked as solved
4 Replies
301 Views
This does not seem to be affecting the training, but it seems somewhat important (I have no clue how to read it, however):

Error: command buffer exited with error status.
The Metal Performance Shaders operations encoded on it may not have completed.
Error: (null) Internal Error (0000000e:Internal Error)
<AGXG13XFamilyCommandBuffer: 0x29b027b50>
  label = <none>
  device = <AGXG13XDevice: 0x12da25600> name = Apple M1 Max
  commandQueue = <AGXG13XFamilyCommandQueue: 0x106477000>
    label = <none>
    device = <AGXG13XDevice: 0x12da25600> name = Apple M1 Max
  retainedReferences = 1

This is happening during a "heavy" model training run on a "heavy" dataset, so maybe it is related to some memory issue, but I have no clue how to confront it.
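If it is memory pressure, one hedged mitigation sketch is shrinking the per-step working set and watching whether the command-buffer errors stop; this is a guess based on the "heavy model, heavy dataset" context, not a documented fix:

import numpy as np
import tensorflow as tf

# Stand-in data; substitute your real arrays.
x_train = np.random.rand(1024, 28, 28, 1).astype('float32')
y_train = np.random.randint(0, 10, size=(1024,))

batch_size = 32  # try halving this until the errors disappear
dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .shuffle(1024)
           .batch(batch_size)
           .prefetch(tf.data.AUTOTUNE))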
Posted. Last updated.
Post not yet marked as solved
3 Replies
253 Views
I've restarted my ColorProposer app and renamed it Chroma; the app suggests a color for a string. My main function is here:

import Foundation
import NaturalLanguage
import CoreML
import SwiftUI // needed for the Color type used in SingleColor

func predict(for string: String) -> SingleColor? {
    var model: MLModel
    var predictor: NLModel
    do {
        model = try ChromaClassifier(configuration: .init()).model
    } catch {
        print("NIL MDL")
        return nil
    }
    do {
        predictor = try NLModel(mlModel: model)
    } catch {
        print("NIL PREDICT")
        return nil
    }
    let colorKeys = predictor.predictedLabelHypotheses(for: string, maximumCount: 1) // set the maximumCount to 1...7
    print(colorKeys)
    var color: SingleColor = .init(red: 0, green: 0, blue: 0)
    for i in colorKeys {
        color.morphing((ColorKeys(rawValue: i.key) ?? .white).toColor().percentage(of: i.value))
        print(color)
    }
    return color
}

extension SingleColor {
    mutating func morphing(_ color: SingleColor) {
        self.blue += color.blue
        self.green += color.green
        self.red += color.red
    }
    func percentage(of percentage: Double) -> SingleColor {
        return .init(red: self.red * percentage, green: self.green * percentage, blue: self.blue * percentage)
    }
}

struct SingleColor: Codable, Hashable, Identifiable {
    // Note: this computed id returns a fresh UUID on every access.
    var id: UUID {
        return .init()
    }
    var red: Double
    var green: Double
    var blue: Double
    var color: Color {
        return Color(red: red / 255, green: green / 255, blue: blue / 255)
    }
}

enum ColorKeys: String, CaseIterable {
    case red = "RED"
    case orange = "ORG"
    case yellow = "YLW"
    case green = "GRN"
    case mint = "MNT"
    case blue = "BLU"
    case violet = "VLT"
    case white = "WHT"
}

extension ColorKeys {
    func toColor() -> SingleColor {
        print(self)
        switch self {
        case .red:
            return .init(red: 255, green: 0, blue: 0)
        case .orange:
            return .init(red: 255, green: 125, blue: 0)
        case .yellow:
            return .init(red: 255, green: 255, blue: 0)
        case .green:
            return .init(red: 0, green: 255, blue: 0)
        case .mint:
            return .init(red: 0, green: 255, blue: 255)
        case .blue:
            return .init(red: 0, green: 0, blue: 255)
        case .violet:
            return .init(red: 255, green: 0, blue: 255)
        case .white:
            return .init(red: 255, green: 255, blue: 255)
        }
    }
}

Here's my view, quite simple:

import SwiftUI
import Combine

struct ContentView: View {
    @AppStorage("Text") var text: String = ""
    let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()
    @State var color: Color? = .white

    var body: some View {
        TextField("Text...", text: $text).padding().background(color).onReceive(timer) { _ in
            color = predict(for: text)?.color
            print(color)
        }
    }
}

But the problem of the view not updating still persists. In the prints, I discovered a really strange issue: the line of print(colorKeys) is always the same.
Posted. Last updated.
Post not yet marked as solved
1 Reply
1.2k Views
I have been trying to install TensorFlow 2.6.0 in a conda environment. Here's the command:

python -m pip install tensorflow-macos==2.6.0

But it gives me this error:

TypeError: str expected, not int
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> numpy

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.

[notice] A new release of pip available: 22.1.2 -> 22.2.1
[notice] To update, run: python3.8 -m pip install --upgrade pip
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

[notice] A new release of pip available: 22.1.2 -> 22.2.1
[notice] To update, run: python3.8 -m pip install --upgrade pip

The full output is too large to fit here, so I put it in this document: https://docs.google.com/document/d/1eKL5UbeK8y0nNbp3mnWPBUutrTOTiWHALjZliQtB7jw/edit?usp=sharing Please help me install TensorFlow successfully on my M1 MacBook Pro. OS: macOS Big Sur v11.6; environment Python: 3.8.13; environment pip: 22.1.2.
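A hedged workaround, assuming the root cause is pip trying to compile an old NumPy from source on arm64: install NumPy from conda-forge first (prebuilt arm64 binaries) so pip never builds it. The tensorflow-deps pin follows Apple's published pattern of matching the deps version to the tensorflow-macos version, but treat it as an assumption to verify:

conda install -c apple tensorflow-deps==2.6.0
conda install -c conda-forge numpy
python -m pip install tensorflow-macos==2.6.0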
Posted by arannya. Last updated.
Post not yet marked as solved
3 Replies
240 Views
Hi! GPU acceleration lacks M1 GPU support (only with this specific model); I get this message when trying to run a trained model on the GPU:

NotFoundError: Graph execution error:

No registered 'AddN' OpKernel for 'GPU' devices compatible with node {{node model_3/keras_layer_3/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/roberta_pack_inputs/StatefulPartitionedCall/RaggedConcat/ArithmeticOptimizer/AddOpsRewrite_Leaf_0_add_2}} (OpKernel was found, but attributes didn't match)
Requested Attributes: N=2, T=DT_INT64, _XlaHasReferenceVars=false, _grappler_ArithmeticOptimizer_AddOpsRewriteStage=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"
Registered:
  device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 16534343205130372495, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64, DT_VARIANT]
  device='GPU'; T in [DT_FLOAT]
  device='DEFAULT'; T in [DT_INT32]
  device='CPU'; T in [DT_UINT64]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_UINT32]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_VARIANT]

[[model_3/keras_layer_3/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/roberta_pack_inputs/StatefulPartitionedCall/RaggedConcat/ArithmeticOptimizer/AddOpsRewrite_Leaf_0_add_2]] [Op:__inference_train_function_300451]
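My hedged reading of the error: the Metal plugin registers AddN on the GPU only for DT_FLOAT, and this node is DT_INT64, so the placer has no GPU kernel for it. A sketch of two possible workarounds, assuming the int64 op comes from the RoBERTa input-packing layer rather than your own code:

import tensorflow as tf

# Let ops without a GPU kernel silently fall back to the CPU.
tf.config.set_soft_device_placement(True)

# Or pin the offending section to the CPU explicitly:
with tf.device('/CPU:0'):
    pass  # build or call the input-packing portion of the model here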
Posted by sm_96. Last updated.
Post marked as solved
1 Reply
300 Views
I am trying to run a TensorFlow model on an M1 Mac with the following settings:

MacBook Pro M1
macOS 12.4
tensorflow-deps & tensorflow-estimator 2.9.0
tensorflow-macos 2.9.2
tensorflow-metal 0.5.0
keras 2.9.0
keras-preprocessing 1.1.2
Python 3.8.13

When resizing and rescaling with keras.layers, I get the following error:

resize_and_rescale = keras.Sequential([
    layers.experimental.preprocessing.Resizing(IMAGE_SIZE, IMAGE_SIZE),
    layers.experimental.preprocessing.Rescaling(1./255),
])

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Input In [15], in <cell line: 1>()
      1 resize_and_rescale = keras.Sequential([
----> 2     layers.experimental.preprocessing.Resizing(IMAGE_SIZE, IMAGE_SIZE),
      3     layers.experimental.preprocessing.Rescaling(1./255),
      4 ])

AttributeError: module 'keras.layers' has no attribute 'experimental'

Any suggestions? Thanks
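A sketch of the likely fix: in Keras 2.9 the preprocessing layers were promoted out of the experimental namespace, so they are addressed directly on keras.layers (my reading of the 2.9 API surface, worth verifying against your installed version):

from tensorflow import keras
from tensorflow.keras import layers

IMAGE_SIZE = 256  # hypothetical value

resize_and_rescale = keras.Sequential([
    layers.Resizing(IMAGE_SIZE, IMAGE_SIZE),
    layers.Rescaling(1. / 255),
])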
Posted by mtoseef99. Last updated.
Post not yet marked as solved
1 Reply
221 Views
Can someone tell me whether using copyrighted content for neural network training is infringement or fair use? For example: someone takes 100,000 superhero pictures from Google for training, after which the neural network can create superhero pictures from a user's query. Is that infringement or fair use? Can the developer sell these created pictures to users (or a subscription to the service)? Or does everyone use only public-domain and open-source content for training?
Posted by Dimbill. Last updated.
Post not yet marked as solved
0 Replies
270 Views
Hello everyone, I am new to iOS development and currently working on an iOS app for a project where I want to detect and recognize text in an image. I take the image from the AVFoundation framework and get a UIImage. For the text detection and recognition I decided to use Google's ML Kit, so I simply pass the UIImage to the relevant ML Kit function:

func runTextRecognition(with image: UIImage) {
    let visionImage = VisionImage(image: image)
    textRecognizer.process(visionImage) { features, error in
        self.processResult(from: features, error: error)
    }
}

Now I have the problem that when I take a picture holding the iPhone in portrait mode, I do not get any results out of ML Kit, whereas when I hold the phone in landscape mode (home button to the right) I get the text shown in the image as my result. Is there any parameter I have to set to make Google ML Kit work for portrait-mode pictures? I would guess the error is in the CameraController.swift file, where I define my capture output. You can find the repository at: https://gitlab.com/lukas.kl/TextScan. I hope you can help me, thank you in advance! Kind regards, Lukas
Posted by luk_as. Last updated.
Post not yet marked as solved
1 Reply
387 Views
https://betterdatascience.com/install-tensorflow-2-7-on-macbook-pro-m1-pro/ I installed TensorFlow following the link above. However, when I use TensorFlow in a Jupyter notebook, the kernel dies randomly, and it happens often. I use the newest Anaconda, which supports M1. I wonder how to solve this problem; it's quite annoying. I've tried the same code on a Mac Pro 2019 and it runs fine.
Posted. Last updated.
Post marked as solved
1 Reply
405 Views
Hello there, I have installed TensorFlow on my M1 MacBook Pro using these commands in a conda environment:

conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
python -m pip install tensorflow-datasets
conda install jupyter pandas numpy matplotlib scikit-learn

I did it following these instructions: https://github.com/mrdbourke/m1-machine-learning-test But when I import the packages I installed, I get an error message. Here:

import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

Error:

---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
Input In [1], in <cell line: 4>()
      2 import pandas as pd
      3 import sklearn
----> 4 import tensorflow as tf
      5 import matplotlib.pyplot as plt
      7 # Check for TensorFlow GPU access

File ~/miniforge3/lib/python3.9/site-packages/tensorflow/__init__.py:443, in <module>
    441 _plugin_dir = _os.path.join(_s, 'tensorflow-plugins')
    442 if _os.path.exists(_plugin_dir):
--> 443     _ll.load_library(_plugin_dir)
    444 # Load Pluggable Device Library
    445 _ll.load_pluggable_device_library(_plugin_dir)

File ~/miniforge3/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py:151, in load_library(library_location)
    148     kernel_libraries = [library_location]
    150 for lib in kernel_libraries:
--> 151     py_tf.TF_LoadLibrary(lib)
    153 else:
    154     raise OSError(
    155         errno.ENOENT,
    156         'The file or folder to load kernel libraries from does not exist.',
    157         library_location)

NotFoundError: dlopen(/Users/arannya/miniforge3/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 6): Symbol not found: __ZNKSt3__115basic_stringbufIcNS_11char_traitsIcEENS_9allocatorIcEEE3strEv
  Referenced from: /Users/arannya/miniforge3/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib (which was built for Mac OS X 12.3)
  Expected in: /usr/lib/libc++.1.dylib

Please help me solve this issue. I have to build a robot for a competition, so I need TensorFlow running.
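A hedged reading of the last lines of the traceback: the libmetal_plugin.dylib that pip installed was built for macOS 12.3, while this machine runs Big Sur 11.6 (per your other post), so a libc++ symbol it expects is missing from the OS. Upgrading to Monterey should resolve it; alternatively, pinning the older wheels that still targeted macOS 11 may work, but these exact pins are an assumption to verify against Apple's compatibility notes:

python -m pip uninstall tensorflow-macos tensorflow-metal
conda install -c apple tensorflow-deps==2.5.0
python -m pip install tensorflow-macos==2.5.0
python -m pip install tensorflow-metal==0.1.2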
Posted by arannya. Last updated.