Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

88 Posts
Post not yet marked as solved
1 Reply
668 Views
The tensorflow-macos repo has been archived and the last commit redirects users to the plugin page; however, that page still instructs users to install the now-archived fork. Are these instructions still up to date? Also, what is the long-term plan for Metal/M1 acceleration in TensorFlow: will the necessary changes eventually be upstreamed, if they haven't already?
Posted Last updated
.
Post not yet marked as solved
2 Replies
528 Views
Good day, people! I'm currently working on my master's thesis in media informatics. I'd really appreciate discussing my topic with you, so I may get some interesting ideas or new information.

The goal is to implement an app designed specifically for places like museums, where the environment isn't ideal for AR tracking (darkness, no network connection, maybe exhibits made of glass...). Therefore, I'd like to develop a neural network for the new iPad Pro that takes RGB-D data and predicts the pose of an object in a scene, so that it matches the real-world object perfectly. This placed object will be an exact 3D replica of the real object (hand-modeled, or scanned and revised). This should allow me to place AR content precisely over the real-world object, even in difficult lighting conditions and the like. Maybe it will improve occlusion, too. I can imagine that the neural network may also detect structures, edges, and semantic coherences better than the usual approach.

My first thought was to work with Core ML, Metal, maybe Vision, and ARKit. I will also be trying out Xcode for the first time. Maybe you have interesting ideas for improvement or can guide me a little, since I feel a bit lost at the moment. Would you rather use point clouds or the raw depth buffer to train the model? Would you also train with edge-filtered images and so on? Why or why not?

Thanks in advance, it would mean the world to me! Kind regards, Miri :-)
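As a starting point for the point-cloud-versus-depth-buffer question above, here is a minimal, framework-agnostic sketch (plain Python, pinhole camera model) of back-projecting depth pixels into 3D camera-space points. This is the same math you would use to turn the iPad's depth buffer into a point cloud; the intrinsics values are placeholders, not anything ARKit-specific.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a metric depth value into a 3D point
    in camera coordinates, using a pinhole camera model.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def depth_to_point_cloud(depth_map, fx, fy, cx, cy):
    """Convert a dense depth map (list of rows of depth values) into a list
    of 3D points, skipping invalid (zero) depth readings."""
    points = []
    for v, row in enumerate(depth_map):
        for u, d in enumerate(row):
            if d > 0:  # zero depth means no reading at this pixel
                points.append(backproject(u, v, d, fx, fy, cx, cy))
    return points
```

With ARKit you would take fx, fy, cx, cy from the camera intrinsics matrix; the sketch leaves out filtering, confidence maps, and normal estimation.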
Posted by MiriamJo.
Post not yet marked as solved
1 Reply
368 Views
Is it possible to do any of the following:
1. Export a model created using MetalPerformanceShadersGraph to a CoreML file;
2. Failing 1., save a trained MetalPerformanceShadersGraph model in any other way for deployment;
3. Import a CoreML model and use it as a part of a MetalPerformanceShadersGraph model.
Thanks!
Posted by Alan_Z.
Post not yet marked as solved
1 Reply
306 Views
Greetings. I was shopping around for an external GPU and a machine learning + GPU solution for my Mac. Are there any suggestions? It looks like Keras 2.4 has dropped multi-backend support, which is worrying. I'm trying to make sure I make a purchase that will work with what I have. I'm using a Mac mini (2018) with 64 GB RAM and multiple Thunderbolt 3 ports. This is an Intel machine, not an M1. Is this feasible?
Posted by DrColinL.
Post not yet marked as solved
0 Replies
328 Views
Hello! I'm having an issue with retrieving the trained weights from MLCLSTMLayer in ML Compute when training on a GPU. I maintain references to the input-weights, hidden-weights, and biases tensors and use the following code to extract the data post-training:

```swift
extension MLCTensor {
    func dataArray<Scalar>(as _: Scalar.Type) throws -> [Scalar] where Scalar: Numeric {
        let count = self.descriptor.shape.reduce(into: 1) { (result, value) in
            result *= value
        }
        var array = [Scalar](repeating: 0, count: count)
        // This *should* copy the latest data from the GPU to memory that's accessible by the CPU
        self.synchronizeData()
        _ = try array.withUnsafeMutableBytes { (pointer) in
            guard let data = self.data else {
                throw DataError.uninitialized // A custom error that I declare elsewhere
            }
            data.copyBytes(to: pointer)
        }
        return array
    }
}
```

The issue is that when I call dataArray(as:) on a weights or biases tensor for an LSTM layer that has been trained on a GPU, the values it retrieves are the same as they were before training began. For instance, if I initialize the biases all to 0 and then train the LSTM layer on a GPU, the bias values seemingly remain 0 post-training, even though the reported loss values decrease as you would expect. This issue does not occur when training an LSTM layer on a CPU, and it also does not occur when training a fully connected layer on a GPU. Since both types of layers work properly on a CPU but only MLCFullyConnectedLayer works properly on a GPU, it seems that the issue is a bug in ML Compute's GPU implementation of MLCLSTMLayer specifically. For reference, I'm testing my code on an M1 Max. Am I doing something wrong, or is this an actual bug that I should report in Feedback Assistant?
Post not yet marked as solved
5 Replies
1.9k Views
Hi everyone, I found that the performance of the GPU is not as good as I expected (as slow as a turtle), so I want to switch from GPU to CPU, but the mlcompute module cannot be found, which is so weird. The same code takes 156 s per epoch on Colab versus 40 minutes per epoch on my computer (JupyterLab). I only used a small dataset (a few thousand data points), and each epoch only has 20 batches. I am so disappointed, and it seems like the "powerful" GPU is a joke. I am using macOS 12.0.1 and tensorflow-macos 2.6.0. Can anyone tell me why this happens?
Post not yet marked as solved
0 Replies
227 Views
wwdc20-10673 briefly shows how to visualize optical flow generated by VNGenerateOpticalFlowRequest, and sample code is available through the Developer app. But how can we build the OpticalFlowVisualizer.ci.metallib file from the CIKernel code provided as OpticalFlowVisualizer.cikernel?
Posted by dabx.
Post not yet marked as solved
1 Reply
556 Views
After collecting data, I wanted to create an ML model. I have done this before, but for some reason I was getting the error:

```
error: Couldn't lookup symbols:
CreateML.MLDataTable.init(contentsOf: Foundation.URL, options: CreateML.MLDataTable.ParsingOptions) throws -> CreateML.MLDataTable
```

So I went to test a working example created by Apple, using this link: https://developer.apple.com/documentation/createml/creating_a_model_from_tabular_data After running this test with no data changed, I still get the same error logged. I don't know if I'm doing something wrong. Any advice would be greatly appreciated.
Post not yet marked as solved
4 Replies
11k Views
Can I run inference on the new MacBook Pro with the M1 chip (Apple Silicon) using Keras models (and sometimes PyTorch)? These would be computer vision models; some might have custom loss functions or metrics and would have been trained on, let's say, Google Colab. If I can perform inference, how do I do that? Also, will the Neural Engine help while performing inference, or will it boost training if I have to train on the Mac?
Posted by jmayank23.
Post not yet marked as solved
0 Replies
311 Views
Hi all, I've spent some time experimenting with the BNNS (Accelerate) LSTM-related APIs lately, and despite a distinct lack of documentation (even though the headers have quite a few comments) I got most things to a point where I think I know what's going on and I get the expected results. However, one thing I have not been able to do is get this working when inputSize != hiddenSize. I am currently only concerned with a simple unidirectional LSTM with a single layer, but none of my permutations of the gate "iw_desc" matrices with various 2D layouts and reorderings of input-size/hidden-size made any difference; ultimately BNNSDirectApplyLSTMBatchTrainingCaching always returns -1 as an indication of error. Any help would be greatly appreciated. PS: The bnns.h framework header claims that "When a parameter is invalid or an internal error occurs, an error message will be logged. Some combinations of parameters may not be supported. In that case, an info message will be logged." And yet, I've not been able to find any such messages logged via NSLog(), stderr, or Console. Is there a magic environment variable that I need to set to get more verbose logging?
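For sanity-checking descriptor shapes in cases like the one above, here is a small stdlib-Python sketch of the per-gate weight shapes a textbook single-layer unidirectional LSTM needs when inputSize != hiddenSize. This states the standard convention only, not BNNS's required memory layout (which the bnns.h comments define), so treat the layout mapping as the part to verify against the headers.

```python
def lstm_weight_shapes(input_size, hidden_size):
    """Per-gate weight shapes for a standard unidirectional LSTM layer.
    Each of the four gates (input, forget, cell/candidate, output) has an
    input-to-hidden matrix, a hidden-to-hidden matrix, and a bias vector."""
    gates = ["input", "forget", "cell", "output"]
    return {
        gate: {
            "iw": (hidden_size, input_size),    # input weights: note it is NOT square
            "hw": (hidden_size, hidden_size),   # hidden (recurrent) weights
            "bias": (hidden_size,),
        }
        for gate in gates
    }

def lstm_param_count(input_size, hidden_size):
    """Total trainable parameters: 4 gates * (h*i + h*h + h)."""
    return 4 * (hidden_size * input_size
                + hidden_size * hidden_size
                + hidden_size)
```

A quick way to catch a wrong layout is to compare the total element count you allocate against lstm_param_count: if they differ, one of the gate matrices has its dimensions swapped.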
Posted by andi.
Post not yet marked as solved
0 Replies
410 Views
I'm using Vision to conduct some OCR from a live camera feed. I've set up my VNRecognizeTextRequest as follows:

```swift
let request = VNRecognizeTextRequest(completionHandler: recognizeTextCompletionHandler)
request.recognitionLevel = .accurate
request.usesLanguageCorrection = false
```

And I handle the results as follows:

```swift
guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
for observation in observations {
    if let recognizedText = observation.topCandidates(1).first {
        guard recognizedText.confidence >= self.confidenceLimit, // set to 0.5
              let foundText = validateRegexPattern(text: recognizedText.string, regexPattern: self.regexPattern),
              let foundDecimal = Double(foundText)
        else { continue }
        // ...
    }
}
```

This is actually working great and yielding very accurate results, but the confidence values I'm receiving from the results are generally either 0.5 or 1.0, and rarely 0.3. I find these to be pretty nonsensical confidence values, and I'm wondering if this is the intended result or some sort of bug. Conversely, using recognitionLevel = .fast yields more realistic and varied confidence values, but much less accurate results overall. (Even though .fast is recommended for OCR from a live camera feed, I've had significantly better results using the .accurate recognition level, which is why I've been using it.)
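To experiment with the thresholding behavior described above outside the app, here is a hypothetical pure-Python re-creation of the same confidence + regex gate. The names (confidence_limit, the decimal-number pattern) mirror the Swift properties but are stand-ins, not Vision API:

```python
import re

def filter_candidates(candidates, confidence_limit=0.5, pattern=r"\d+\.\d+"):
    """Keep only OCR candidates whose confidence clears the threshold and
    whose text fully matches the pattern, then parse the text as a float.
    `candidates` is a list of (text, confidence) pairs."""
    results = []
    for text, confidence in candidates:
        if confidence < confidence_limit:
            continue  # drop low-confidence readings, as the Swift guard does
        if not re.fullmatch(pattern, text):
            continue  # drop text that does not look like a decimal number
        results.append(float(text))
    return results
```

Replaying logged (text, confidence) pairs through a harness like this makes it easy to see how often the quantized 0.5/1.0 confidences from .accurate actually change the outcome versus the more varied values from .fast.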
Posted by ctj388.
Post not yet marked as solved
0 Replies
303 Views
I'm trying to use TensorFlow-macOS to accelerate fine-tuning a Hugging Face transformer program on my Radeon Pro 5300M. However, I get the error "Cannot assign a device for operation". Could you tell me why this happens and how to solve it? Details: my version of TensorFlow-macOS is 2.5.0. I tried updating to 2.6.0, and that worked, but it is extremely slow compared to 2.5.0. It would be better to fix 2.5.0 based on what 2.6.0 does, but I don't know how.
Post not yet marked as solved
1 Reply
830 Views
This is the message I got as I tried to install TensorFlow on my M1 Mac:

```
ERROR: tensorflow_addons_macos-0.1a2-cp38-cp38-macosx_11_0_arm64.whl is not a supported wheel on this platform.
```

I have followed all the instructions as presented on the website. Can somebody please help?
Post not yet marked as solved
2 Replies
390 Views
Hi, I have seen this video: https://developer.apple.com/videos/play/wwdc2021/10041/ and in my project I am trying to draw detected barcodes. I am using the Vision framework and I have the barcode position in the boundingBox parameter, but I don't understand the CGRect of that parameter. I am programming in Objective-C and I don't see resources for it; to complicate things further, I don't have an image, because I am capturing barcodes from a video camera session. Two parts: 1. How can I draw a detected barcode like in the video (from an image)? 2. How can I draw a detected barcode in a capture session? I have used VNImageRectForNormalizedRect to convert from normalized to pixel coordinates, but the result is not correct. Thank you very much.
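One common pitfall with Vision's boundingBox is that it is normalized with a bottom-left origin, while UIKit-style drawing assumes a top-left origin; VNImageRectForNormalizedRect only scales the rect to pixel dimensions, it does not flip the y-axis for you. A small framework-agnostic Python sketch of the full conversion (the flip is the part that is easy to miss):

```python
def vision_rect_to_pixels(nx, ny, nw, nh, image_width, image_height, flip_y=True):
    """Convert a Vision-style normalized bounding box (origin at the
    bottom-left, values in 0..1) into pixel coordinates (x, y, w, h).
    With flip_y=True the returned origin is top-left, which is what
    UIKit/AppKit-style drawing expects."""
    x = nx * image_width
    w = nw * image_width
    h = nh * image_height
    if flip_y:
        # Flip: distance from the top = 1 - (bottom edge) - (height)
        y = (1.0 - ny - nh) * image_height
    else:
        y = ny * image_height
    return (x, y, w, h)
```

For drawing over a live AVCaptureSession preview there is the extra step of mapping image coordinates into the preview layer; AVCaptureVideoPreviewLayer's layerRectConverted(fromMetadataOutputRect:) is usually the more direct route there, since the preview may crop or letterbox the frame.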
Post not yet marked as solved
1 Reply
332 Views
In a section of my app I would like to recommend restaurants to users based on certain parameters, where some parameters have a higher weighting than others. In this WWDC video a very similar app is made: if a user likes a dish, a value of 1.0 is assigned to the specific keywords that apply to the dish, with a value of -1.0 for all other keywords that don't apply to it. For my app, if the user has ordered, I then apply a value of 1.0 (to the keywords that apply); but if a user has just expressed interest (without ordering), can I apply a value of 0.5 (and -0.5) instead? Would the model adapt to this?
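The scheme described above (1.0/-1.0 for orders, 0.5/-0.5 for mere interest) amounts to a graded implicit-feedback rating matrix, which most recommender setups can consume. A hypothetical stdlib-Python sketch of accumulating such a keyword profile and scoring restaurants against it; all names here are illustrative, not any Create ML API:

```python
def build_profile(interactions):
    """Accumulate a user keyword profile from (keyword, weight) pairs,
    e.g. +1.0/-1.0 for an order and +0.5/-0.5 for expressed interest.
    Repeated signals for the same keyword simply add up."""
    profile = {}
    for keyword, weight in interactions:
        profile[keyword] = profile.get(keyword, 0.0) + weight
    return profile

def score_restaurant(profile, keywords):
    """Score a restaurant as the sum of the user's profile weights over
    the keywords that apply to it; unknown keywords contribute 0."""
    return sum(profile.get(k, 0.0) for k in keywords)
```

Because the half-strength signals just scale each keyword's contribution, a model trained on graded values behaves the same way as one trained on binary likes, only with interest counting half as much as an order.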