Hello,
I am trying to use my Mac's GPU for machine-learning tasks by selecting "mps" as the device in PyTorch, but it is not working. I am on the stable PyTorch release. How can I use the MacBook GPU for machine learning?
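A minimal sketch of the usual pattern (assuming a PyTorch build with MPS support and macOS 12.3+; the toy linear model is just for illustration): check that the MPS backend is available, then move both the model and the tensors to the same device.

```python
import torch

# Pick the best available device: "mps" on Apple Silicon with an
# MPS-enabled PyTorch build, otherwise fall back to the CPU.
def pick_device() -> torch.device:
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(4, 2).to(device)   # move the model...
x = torch.randn(8, 4, device=device)       # ...and the data to the same device
y = model(x)
print(y.shape)  # torch.Size([8, 2])
```

If `torch.backends.mps.is_available()` returns False, either the macOS version is too old or the installed wheel was built without MPS support; `torch.backends.mps.is_built()` distinguishes the two cases.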
I am running my training code on my MacBook M2.
The code works correctly on both CPU and GPU, but on the GPU it is much slower!
I have loaded my data and my model onto the GPU, and that seemed to work.
I printed my code's runtime: when the following train function is called, the inner loop runs extraordinarily slowly.
def train(net, device, train_features, train_labels, test_features, test_labels,
          num_epochs, learning_rate, weight_decay, batch_size):
    train_ls, test_ls = [], []
    train_iter = d2l.load_array((train_features, train_labels), batch_size, device)
    # Adam optimizer
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate,
                                 weight_decay=weight_decay)
    for epoch in range(num_epochs):
        for X, y in train_iter:
            optimizer.zero_grad()
            l = loss(net(X), y)
            l.backward()
            optimizer.step()
        train_ls.append(log_rmse(net, train_features, train_labels))
    return train_ls, test_ls
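A possible explanation, for what it's worth: with a small model and small batches, the per-batch cost of dispatching kernels to the GPU and synchronizing with the CPU can outweigh the compute itself, so "mps" ends up slower than "cpu". The sketch below (a toy linear model, not the network above; `torch.mps.synchronize` is available in recent PyTorch builds) shows how to time a training loop fairly by flushing queued GPU work before reading the clock:

```python
import time
import torch

# Micro-benchmark of a tiny training loop on a given device. On very
# small workloads the per-step launch/sync overhead on "mps" can
# dominate, making the GPU look slower than the CPU.
def bench(device: str, steps: int = 50) -> float:
    net = torch.nn.Linear(64, 1).to(device)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x = torch.randn(32, 64, device=device)
    y = torch.randn(32, 1, device=device)
    loss_fn = torch.nn.MSELoss()
    start = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    if device == "mps":
        torch.mps.synchronize()  # wait for queued GPU work before timing
    return time.perf_counter() - start

cpu_t = bench("cpu")
print(f"cpu: {cpu_t:.3f}s")
```

If the CPU time beats the MPS time here, the workload is simply too small to benefit from the GPU; larger batches or a bigger model usually change the picture.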
I use a Mac mini running macOS Ventura 13.3.1. When I run the MetalFX sample code and choose the Temporal Scaler, makeTemporalScaler returns nil and the app prints "The temporal scaler effect is not usable!". If I choose the Spatial Scaler, it works fine.
guard let temporalScaler = desc.makeTemporalScaler(device: device) else {
    print("The temporal scaler effect is not usable!")
    mfxScalingMode = .defaultScaling
    return
}
Sample code:
https://developer.apple.com/documentation/metalfx/applying_temporal_antialiasing_and_upscaling_using_metalfx?language=objc
I downloaded this sample:
https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/using_metal_to_draw_a_view_s_contents?preferredLanguage=occ
I commented out this line in AAPLViewController.mm
// _view.enableSetNeedsDisplay = YES;
I modified the presentDrawable line in AAPLRenderer.mm to add afterMinimumDuration:
[commandBuffer presentDrawable:drawable afterMinimumDuration:1.0/60];
I then added a presentedHandler before the above line that records the time between successive presents.
Most of the time it correctly reports 0.0166667 s. However, about every dozen or so frames (it varies) it seems to present a frame early, with an interval of 0.0083333 s followed by the next frame after around 0.024 s.
Is this expected behaviour? I was hoping that afterMinimumDuration would specifically make the presentation cadence consistent. Why would it present a frame early?
This is on a new MacBook Pro 16 running the latest macOS Monterey, with the sample project upgraded to a minimum deployment target of 11.0, and Xcode 13.1 (the latest public release).
When I use Metal to render and the application switches to the background, Metal rendering fails on iOS 15.
How can I fix this?
Error:
Execution of the command buffer was aborted due to an error during execution.Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
Hi,
I am training an adversarial autoencoder using PyTorch 2.0.0 on an Apple M2 (macOS Ventura 13.1), with conda 23.1.0 as the package manager.
I encountered this error:
/AppleInternal/Library/BuildRoots/5b8a32f9-5db2-11ed-8aeb-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSNDArray/Kernels/MPSNDArrayConvolutionA14.mm:3967: failed assertion `destination kernel width and filter kernel width mismatch'
/Users/vk/miniconda3/envs/betavae/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
To my knowledge, the code breaks down when running self.manual_backward(loss["g_loss"]) in this block:
g_opt.zero_grad()
self.manual_backward(loss["g_loss"])
g_opt.step()
The same code runs without problems on a Linux distribution.
Any thoughts on how to fix this are highly appreciated!
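One workaround worth trying, assuming the failing op hits an MPS limitation rather than a bug in the model: PyTorch can fall back to the CPU for operations the MPS backend does not support, via the PYTORCH_ENABLE_MPS_FALLBACK environment variable (it must be set before torch is imported). A minimal sketch:

```python
import os

# Enable CPU fallback for ops unsupported on MPS. This must happen
# BEFORE `import torch`, or the setting has no effect.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.randn(1, 3, 8, 8, device=device)
conv = torch.nn.Conv2d(3, 4, kernel_size=3, padding=1).to(device)
print(conv(x).shape)  # torch.Size([1, 4, 8, 8])
```

Note the caveat: the fallback only covers ops that MPS reports as unsupported. A hard assertion inside an MPS kernel (as in the log above) may instead require moving that particular module to the CPU, or upgrading PyTorch/macOS, where such kernel bugs are often fixed.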