Hello,
I would like to inquire about the release date of Swift Assist’s beta version. Apple has stated that it will be released later this year, but they have not provided a specific date or time.
Could you please provide information on the beta version’s release date? Additionally, is there a trial version available? If so, when was it released?
Thank you for your assistance.
ML Compute
Accelerate training and validation of neural networks using the CPU and GPUs.
Posts under ML Compute tag
func testMLTensor() {
    // note: `repeating:` fills each array with a single random value copied 2000/3000 times
    let t1 = MLTensor(shape: [2000, 1], scalars: [Float](repeating: Float.random(in: 0.0...10.0), count: 2000), scalarType: Float.self)
    let t2 = MLTensor(shape: [1, 3000], scalars: [Float](repeating: Float.random(in: 0.0...10.0), count: 3000), scalarType: Float.self)
    for _ in 0...50 {
        let t = Date()
        let x = (t1 * t2)
        print("MLTensor", -t.timeIntervalSinceNow * 1000, "ms") // negated: timeIntervalSinceNow is negative for a past date
    }
}
testMLTensor()
The above code took more time than expected, especially in the early iterations.
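A note on measurement (my assumption, not a confirmed explanation): MLTensor operations appear to be evaluated lazily and asynchronously, and the first iterations may also include one-time warm-up, so timing `t1 * t2` alone may not measure the actual compute. A minimal sketch that forces materialization before stopping the clock, assuming shapedArray(of:) is what triggers evaluation:

import CoreML
import Foundation

@available(macOS 15.0, iOS 18.0, *)
func timeMLTensorMaterialized() async {
    let t1 = MLTensor(shape: [2000, 1], scalars: (0..<2000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    let t2 = MLTensor(shape: [1, 3000], scalars: (0..<3000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    for i in 0..<50 {
        let start = Date()
        let product = t1 * t2                              // builds the operation lazily
        _ = await product.shapedArray(of: Float.self)      // forces evaluation before stopping the clock
        print("iteration \(i):", Date().timeIntervalSince(start) * 1000, "ms")
    }
}

If only the first one or two iterations are slow and the rest settle to a stable number, one-time warm-up rather than the multiply itself is the likely cost.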
I'm trying to cast the error thrown by TranslationSession.translations(from:) as Translation.TranslationError. However, the app crashes at runtime whenever Translation.TranslationError is used in the project.
Environment:
iOS Version: 18.1 beta
Xcode Version: 16 beta
dyld[14615]: Symbol not found: _$s11Translation0A5ErrorVMa
Referenced from: <3426152D-A738-30C1-8F06-47D2C6A1B75B> /private/var/containers/Bundle/Application/043A25BC-E53E-4B28-B71A-C21F77C0D76D/TranslationAPI.app/TranslationAPI.debug.dylib
Expected in: /System/Library/Frameworks/Translation.framework/Translation
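One possible workaround until the missing symbol is resolved (a sketch under the assumption that the crash only occurs when Translation.TranslationError is referenced directly) is to handle the thrown error generically:

import Translation

@available(iOS 18.0, *)
func runTranslations(_ requests: [TranslationSession.Request], in session: TranslationSession) async {
    do {
        let responses = try await session.translations(from: requests)
        for response in responses {
            print(response.targetText)
        }
    } catch {
        // Handled generically instead of `catch let error as TranslationError`,
        // so the TranslationError metadata symbol is never referenced.
        print("Translation failed:", error.localizedDescription)
    }
}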
I dragged a folder containing two subfolders directly into CreateML. One subfolder contains images, and the other contains labeled datasets. The number of files in the labeled dataset matches the number of image files. However, it shows "Missing data for label dianjiaoyise.jsons. Detailed list of labels missing files: ["dianjiaoyise.jsons"]."
Hello,
I’m currently working on TinyML (ML on the edge) using the Google Colab platform. Since I have exhausted my free compute units, I’m being prompted to pay. I’ve been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports capable of sharing connections at up to 30GB/s. Since I’m primarily using a classification model, extensive GPU usage isn’t necessary.
I’m looking for assistance or guidance on utilizing the iPad’s processor as an eGPU on my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
We have to convert a local DOC file to PDF without any server interaction; it must work in offline mode.
Any suggestions would be appreciated.
Hello everyone,
I am trying to train using CreateML Version 6.0 Beta (146.1), feature extractor Image Feature Print v2.
I am using 100K images, for a total of ~4GB, on my M3 Max with 48GB (macOS 15.0 Beta (24A5279h)).
The images seem to be read and visualized correctly in the Data Source section (there appear to be no images with corrupted data).
The training runs fine for the first 6k-7k pictures, then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't have much experience with it.
Could you help me understand what the problem could be?
Thanks a lot
Hi everyone,
I was wondering how accurate the Hand Classification ML is. For example: is it possible to distinguish the different letters of the sign language alphabet, or is it only capable of recognizing simple poses like a thumbs-up?
I wrote a watch-only app using Bluetooth which runs on the watch, but no prints or logs appear in the output.
I only get:
[S:1] Error received: Connection invalidated.
[S:3] Error received: Connection invalidated.
[S:4] Error received: Connection invalidated.
[S:5] Error received: Connection invalidated.
Message from debugger: killed
Program ended with exit code: 9
In the launch log I find:
Showing Recent Messages
Launch com.apple.Carousel
Platform: watchOS
Device Identifier: 00008310-001244D611D1A01E
Operating System Version: 10.5 (21T576)
Model: Apple Watch Series 9 (Watch7,1)
Apple Watch von Draha is connected via network
Installing com.apple.Carousel on Apple Watch von Draha
Installing on Apple Watch von Draha
Successfully installed
XPC/App Extension Debugging
Setup XPC Debugging for: gwe.WatchBleTest.watchkitapp.WatchBleWidget
Console logging policy: Synchronously obtain os_logs via libLogRedirect, and read stdio from File Descriptors
Stop XPC Debugging for: gwe.WatchBleTest.watchkitapp.WatchBleWidget
View debugging: disabled
Insert view debugging dylib on launch: enabled
Queue debugging: enabled
Memory graph on resource exception: disabled
Address sanitizer: disabled
Thread sanitizer: disabled
Using LLDBRPC. The LLDB framework is from /Applications/Xcode.app/Contents/SharedFrameworks
Device support directory: /Users/gertelsholz/Library/Developer/Xcode/watchOS DeviceSupport/Watch7,1 10.5 (21T576)/Symbols
Attached to process with pid 469
What could be the cause of the issue, and how can I fix it?
Thanks for your help!
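One thing worth trying (a sketch, not a confirmed fix): route messages through os.Logger, using the app's bundle identifier from the launch log above as the subsystem, so they remain visible in Console.app or `log stream` even when Xcode's stdio capture reports "Connection invalidated":

import os

// Messages logged this way are readable via Console.app or `log stream`
// on the paired device, independent of Xcode's debug connection.
let bleLogger = Logger(subsystem: "gwe.WatchBleTest.watchkitapp", category: "BLE")

func logBLE(_ message: String) {
    bleLogger.info("\(message, privacy: .public)")
}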
Hi all,
I'm having trouble even getting the latest version of jax-metal to work on my M1 MacBook Pro. In a clean conda environment, I pip install jax-metal and get:
In [1]: import jax; print(jax.numpy.arange(10))
Platform 'METAL' is experimental and not all JAX functionality may be correctly supported!
---------------------------------------------------------------------------
XlaRuntimeError Traceback (most recent call last)
[... skipping hidden 1 frame]
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:977, in _init_backend(platform)
976 logger.debug("Initializing backend '%s'", platform)
--> 977 backend = registration.factory()
978 # TODO(skye): consider raising more descriptive errors directly from backend
979 # factories instead of returning None.
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:666, in register_plugin.<locals>.factory()
665 if not xla_client.pjrt_plugin_initialized(plugin_name):
--> 666 xla_client.initialize_pjrt_plugin(plugin_name)
667 updated_options = {}
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jaxlib/xla_client.py:176, in initialize_pjrt_plugin(plugin_name)
169 """Initializes a PJRT plugin.
170
171 The plugin needs to be loaded first (through load_pjrt_plugin_dynamically or
(...)
174 plugin_name: the name of the PJRT plugin.
175 """
--> 176 _xla.initialize_pjrt_plugin(plugin_name)
XlaRuntimeError: INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51).
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import jax; print(jax.numpy.arange(10))
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py:2952, in arange(start, stop, step, dtype)
2950 ceil_ = ufuncs.ceil if isinstance(start, core.Tracer) else np.ceil
2951 start = ceil_(start).astype(int) # type: ignore
-> 2952 return lax.iota(dtype, start)
2953 else:
2954 if step is None and start == 0 and stop is not None:
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1282, in iota(dtype, size)
1277 def iota(dtype: DTypeLike, size: int) -> Array:
1278 """Wraps XLA's `Iota
1279 <https://www.tensorflow.org/xla/operation_semantics#iota>`_
1280 operator.
1281 """
-> 1282 return broadcasted_iota(dtype, (size,), 0)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1292, in broadcasted_iota(dtype, shape, dimension)
1289 static_shape = [None if isinstance(d, core.Tracer) else d for d in shape]
1290 dimension = core.concrete_or_error(
1291 int, dimension, "dimension argument of lax.broadcasted_iota")
-> 1292 return iota_p.bind(*dynamic_shape, dtype=dtype, shape=tuple(static_shape),
1293 dimension=dimension)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:387, in Primitive.bind(self, *args, **params)
384 def bind(self, *args, **params):
385 assert (not config.enable_checks.value or
386 all(isinstance(arg, Tracer) or valid_jaxtype(arg) for arg in args)), args
--> 387 return self.bind_with_trace(find_top_trace(args), args, params)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:391, in Primitive.bind_with_trace(self, trace, args, params)
389 def bind_with_trace(self, trace, args, params):
390 with pop_level(trace.level):
--> 391 out = trace.process_primitive(self, map(trace.full_raise, args), params)
392 return map(full_lower, out) if self.multiple_results else full_lower(out)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:879, in EvalTrace.process_primitive(self, primitive, tracers, params)
877 return call_impl_with_key_reuse_checks(primitive, primitive.impl, *tracers, **params)
878 else:
--> 879 return primitive.impl(*tracers, **params)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/dispatch.py:86, in apply_primitive(prim, *args, **params)
84 prev = lib.jax_jit.swap_thread_local_state_disable_jit(False)
85 try:
---> 86 outs = fun(*args)
87 finally:
88 lib.jax_jit.swap_thread_local_state_disable_jit(prev)
[... skipping hidden 17 frame]
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:902, in backends()
900 else:
901 err_msg += " (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)"
--> 902 raise RuntimeError(err_msg)
904 assert _default_backend is not None
905 if not config.jax_platforms.value:
RuntimeError: Unable to initialize backend 'METAL': INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51). (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)
jax.__version__ is 0.4.27.
Will macOS support the AMD RX 7600?
I hope this message finds you well. I recently had the opportunity to watch the insightful session titled "Improve Core ML Integration with Async Prediction" and was thoroughly impressed by the depth of information and the practical demonstration provided. The session offered valuable insights that I believe would greatly benefit my ongoing projects and my understanding of Core ML integration.
As I am keen on implementing the demonstrated workflows and techniques within my own work, I am reaching out to kindly request access to the source code and any related material presented during the session. Having access to the code would enable me to better understand the concepts discussed and apply them more effectively in real-world scenarios.
I believe that being able to review and experiment with the actual code would significantly enhance my learning experience and the implementation efficiency of my projects. It would also serve as a valuable resource for referencing best practices in Core ML integration and async prediction techniques.
Thank you very much for considering my request. I greatly appreciate the effort that went into creating such an informative session and am looking forward to potentially exploring the material in greater depth.
Best regards,
Fabio G.
Hi
Could you add a new feature to Pages and Numbers that uses AI to apply the style of a PDF or template to a document? The AI would arrange footers, headers, fonts, page breaks, and page numbers to match the chosen PDF or template, so documents could be auto-formatted to a desired standard look; the same would apply to Numbers. That way we could upload raw text plus a PDF of another document or report and get a document in that style, ready to export to PDF or print.
Best regards,
NLEmbedding.wordEmbedding is not available in our language (Korean).
This is a very serious issue for any service that caters to Korean users; please fix it quickly. We have added the sample code below.
import UIKit
import CoreML
import NaturalLanguage
class MLTextViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        execute()
    }

    func execute() {
        if let embedding = NLEmbedding.wordEmbedding(for: .korean) {
            let word = "bicycle"
            if let vector = embedding.vector(for: word) {
                print(vector)
            }
            let specificDistance = embedding.distance(between: word, and: "motorcycle")
            print("✅ \(specificDistance.description)")
            embedding.enumerateNeighbors(for: word, maximumCount: 5) { neighbor, distance in
                print("\(neighbor): \(distance.description)")
                return true
            }
        }
    }
}
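A possible fallback sketch (an assumption to verify, not a fix for the missing Korean word embedding): check the word embedding first and fall back to the sentence embedding, which covers more languages:

import NaturalLanguage

func koreanVector(for text: String) -> [Double]? {
    if let wordEmbedding = NLEmbedding.wordEmbedding(for: .korean) {
        return wordEmbedding.vector(for: text)
    }
    // Fallback: sentence embeddings are available for additional languages.
    if let sentenceEmbedding = NLEmbedding.sentenceEmbedding(for: .korean) {
        return sentenceEmbedding.vector(for: text)
    }
    return nil
}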
I cannot find the bug, but running this code (Python) on the torch device mps:0 is slow; it is quicker on cpu:0 or cpu:1. But where is the bug? Or can it be run on the Neural Engine together with cpu:1?
you need a setup like this:
#!/bin/bash
export HOMEBREW_BREW_GIT_REMOTE="https://github.com/Homebrew/brew" # put your Git mirror of Homebrew/brew here
export HOMEBREW_CORE_GIT_REMOTE="https://github.com/Homebrew/homebrew-core" # put your Git mirror of Homebrew/homebrew-core here
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
eval "$(/opt/homebrew/bin/brew shellenv)"
brew update --force --quiet
chmod -R go-w "$(brew --prefix)/share/zsh"
export OPENBLAS=$(/opt/homebrew/bin/brew --prefix openblas)
export CFLAGS="-falign-functions=8 ${CFLAGS}"
brew install wget
brew install unzip
conda init --all
conda create -n torch-gpu python=3.10
conda activate torch-gpu
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 -c pytorch
conda install -c conda-forge jupyter jupyterlab
python3 -m pip install --upgrade pip
python3 -m pip install insightface==0.2.1 onnx imageio scikit-learn scikit-image moviepy
python3 -m pip install googledrivedownloader
python3 -m pip install imageio==2.4.1
python3 -m pip install Cython
python3 -m pip install --no-use-pep517 numpy
python3 -m pip install torch
python3 -m pip install image
python3 -m pip install timm
python3 -m pip install Pillow
python3 -m pip install h5py
for i in `seq 1 6`; do
python3 test.py
done
conda deactivate
exit 0
test.py:
import torch
import math

# this ensures that the current macOS version is at least 12.3+
print(torch.backends.mps.is_available())
# this ensures that the current PyTorch installation was built with MPS activated.
print(torch.backends.mps.is_built())

dtype = torch.float
device = torch.device("cpu", 0)
#device = torch.device("cpu", 1)
#device = torch.device("mps", 0)

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
Hi,
I am looking for a routine to perform complex-valued linear algebra on the GPU in python for scientific programming, in particular quantum physics simulations.
At the moment I am looking for a routine for complex-valued matrix multiplication. I found that MLX has a routine for float matrix multiplication, but it does not directly work for complex-valued matrices. I figured out a workaround by splitting the complex-valued matrix into real and imaginary parts and working with the pair, but it makes it cumbersome to integrate with the remainder of the code. I was hoping for a library-based implementation similar to cupy.
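For reference, the real/imaginary split described above needs four real matrix multiplications and two additions per complex product:
(A_r + i A_i)(B_r + i B_i) = (A_r B_r − A_i B_i) + i (A_r B_i + A_i B_r)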
I also tried the tensorflow linear algebra routines, but I couldn't get them to run on the GPU so far. Specifically, a test file with a tensorflow.keras.applications.ResNet50 routine runs on the GPU, but the routines from tensorflow.linalg and tensorflow.math that I tested (matmul, expm, eigh) were not running on the GPU.
Any advice on how to make linear algebra calculations on mac GPUs work is highly appreciated! For my application the unified memory might be especially beneficial.
Thank you!
I want to use Core ML to process video data. The ML model will take multiple frames as input. How should I get multiple frames on iOS and process them?
Thanks in advance for any suggestions.
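One possible approach, sketched under assumptions (the prediction call is hypothetical and depends on how the Core ML model's multi-frame input was defined): decode frames with AVAssetReader and keep a sliding window of pixel buffers to hand to the model.

import AVFoundation
import CoreML

// Sketch: pull decoded frames from a local video and collect a sliding window
// of pixel buffers; the actual model call depends on the model's input shape.
func processVideo(at url: URL, windowSize: Int = 8) throws {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    )
    reader.add(output)
    guard reader.startReading() else { return }

    var window: [CVPixelBuffer] = []
    while let sample = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        window.append(pixelBuffer)
        if window.count == windowSize {
            // Hypothetical prediction call for a multi-frame model:
            // let prediction = try myModel.prediction(frames: window)
            window.removeFirst()
        }
    }
}

For live capture, the same windowing idea applies to the sample buffers delivered by AVCaptureVideoDataOutput.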
In theory, sending signals from iPhone apps to and from the brain with non-invasive technology could be achieved through a combination of brain-computer interface (BCI) technologies, machine learning algorithms, and mobile app development.
Brain-Computer Interface (BCI): BCI technology can be used to record brain signals and translate them into commands that can be understood by a computer or a mobile device. Non-invasive BCIs, such as electroencephalography (EEG), can track brain activity using sensors placed on or near the head[6]. For instance, a portable, non-invasive, mind-reading AI developed by UTS uses an AI model called DeWave to translate EEG signals into words and sentences[3].
Machine Learning Algorithms: Machine learning algorithms can be used to analyze and interpret the brain signals recorded by the BCI. These algorithms can learn from large quantities of EEG data to translate brain signals into specific commands[3].
Mobile App Development: A mobile app can be developed to receive these commands and perform specific actions on the iPhone. The app could also potentially send signals back to the brain using technologies like transcranial magnetic stimulation (TMS), which can deliver information to the brain[5].
However, it's important to note that while this technology is theoretically possible, it's still in the early stages of development and faces significant technical and ethical challenges. Current non-invasive BCIs do not have the same level of fidelity as invasive devices, and the practical application of these systems is still limited[1][3]. Furthermore, ethical considerations around privacy, consent, and the potential for misuse of this technology must also be addressed[13].
Sources
[1] You can now use your iPhone with your brain after a major breakthrough | Semafor https://www.semafor.com/article/11/01/2022/you-can-now-use-your-iphone-with-your-brain
[2] https://www.sciencedirect.com/science/article/pii/S1110866515000237
[3] Portable, non-invasive, mind-reading AI turns thoughts into text https://techxplore.com/news/2023-12-portable-non-invasive-mind-reading-ai-thoughts.html
[4] Elon Musk's Neuralink implants brain chip in first human https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/
[5] BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains - Scientific Reports https://www.nature.com/articles/s41598-019-41895-7
[6] Brain-computer interfaces and the future of user engagement https://www.fastcompany.com/90802262/brain-computer-interfaces-and-the-future-of-user-engagement
[7] Mobile App + Wearable For Neurostimulation - Accion Labs https://www.accionlabs.com/mobile-app-wearable-for-neurostimulation
[8] Signal Generation, Acquisition, and Processing in Brain Machine Interfaces: A Unified Review https://www.frontiersin.org/articles/10.3389/fnins.2021.728178/full
[9] Mind-reading technology has arrived https://www.vox.com/future-perfect/2023/5/4/23708162/neurotechnology-mind-reading-brain-neuralink-brain-computer-interface
[10] Synchron Brain Implant - Breakthrough Allows You to Control Your iPhone With Your Mind - Grit Daily News https://gritdaily.com/synchron-brain-implant-controls-tech-with-the-mind/
[11] Mind uploading - Wikipedia https://en.wikipedia.org/wiki/Mind_uploading
[12] BirgerMind - Express your thoughts loudly https://birgermind.com
[13] Elon Musk wants to merge humans with AI. How many brains will be damaged along the way? https://www.vox.com/future-perfect/23899981/elon-musk-ai-neuralink-brain-computer-interface
[14] Models of communication and control for brain networks: distinctions, convergence, and future outlook https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7655113/
[15] Mind Control for the Masses—No Implant Needed https://www.wired.com/story/nextmind-noninvasive-brain-computer-interface/
[16] Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot
[17] Essa and Kotte https://arxiv.org/pdf/2201.04229.pdf
[18] Synchron's Brain Implant Breakthrough Lets Users Control iPhones And iPads With Their Mind https://hothardware.com/news/brain-implant-breakthrough-lets-you-control-ipad-with-your-mind
[19] An Apple Watch for Your Brain https://www.thedeload.com/p/an-apple-watch-for-your-brain
[20] Toward an information theoretical description of communication in brain networks https://direct.mit.edu/netn/article/5/3/646/97541/Toward-an-information-theoretical-description-of
[21] A soft, wearable brain–machine interface https://news.ycombinator.com/item?id=28447778
[22] Portable neurofeedback App https://www.psychosomatik.com/en/portable-neurofeedback-app/
[23] Intro to Brain Computer Interface http://learn.neurotechedu.com/introtobci/