MacBook Pro M2 Max: 32GB vs 64GB RAM for Machine Learning and Longevity

Hi everyone,

I'm a Machine Learning Engineer, and I'm planning to buy the MacBook Pro M2 Max with a 38-core GPU variant. I'm uncertain about whether to choose the 32GB RAM or 64GB RAM option. Based on my research and use case, it seems that 32GB should be sufficient for most tasks, including the 4K video rendering I occasionally do. However, I'm concerned about the longevity of the device, as I'd like to keep the MacBook up-to-date for at least five years.

Additionally, considering the 38-core GPU, I wonder if 32GB of unified memory might be insufficient, particularly when I need to train Machine Learning models or run Docker or even a Kubernetes cluster.

I don't have any budget constraints, as the additional $400 cost isn't an issue, but I want to make a wise decision. I would appreciate any advice on this matter. Thanks in advance!


Replies

Don't expect that a Mac with an M2 will let you train a serious model. MPS support in PyTorch is still limited; you need CUDA to experiment with some models.
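The MPS point can at least be checked at runtime. Below is a minimal sketch (the function name `pick_device` is my own; it assumes PyTorch may or may not be installed and falls back to CPU) of selecting the best available backend:

```python
def pick_device() -> str:
    """Return the best available PyTorch device string: 'mps', 'cuda', or 'cpu'."""
    try:
        import torch
    except ImportError:
        # PyTorch not installed; nothing to accelerate.
        return "cpu"
    # Apple-silicon GPU via the Metal Performance Shaders backend.
    if torch.backends.mps.is_available():
        return "mps"
    # NVIDIA GPU (not applicable on a Mac, but keeps the sketch general).
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"


print(pick_device())
```

Even when `mps` is reported as available, some PyTorch operators still fall back to CPU, which is part of why training throughput disappoints.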

  • So do you recommend I stay with 32GB of unified memory, and that should be enough for a good five years with this use case? Of course, as an ML engineer I have access to GPU clusters and AWS cloud compute; however, as I mentioned, I'd like to occasionally train ML models and run Docker, K8s, etc. on it.

    Also the benchmarks shown here look compelling: https://youtu.be/u9ECps9b664
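On the 32GB-vs-64GB question, a back-of-the-envelope estimate of training memory helps. A common rule of thumb is roughly 16 bytes per parameter when training with Adam (fp16 weights and gradients plus fp32 optimizer state); the byte counts below are assumptions for illustration, not measurements, and activations add more on top:

```python
def training_memory_gb(n_params: int,
                       weight_bytes: int = 2,     # fp16 weights
                       grad_bytes: int = 2,       # fp16 gradients
                       optimizer_bytes: int = 12  # fp32 master copy + Adam moments
                       ) -> float:
    """Rough lower bound on training memory, ignoring activations and buffers."""
    total_bytes = n_params * (weight_bytes + grad_bytes + optimizer_bytes)
    return total_bytes / 1e9


# A hypothetical 1B-parameter model: ~16 GB before activations,
# so it would already crowd a 32GB machine that is also running Docker.
print(training_memory_gb(1_000_000_000))
```

By this estimate, fine-tuning anything beyond a small model locally is where 64GB starts to matter; inference-only workloads fit in far less.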


I just bought a MacBook Pro. The last time I got a Mac laptop, it was a 2011 MacBook Air, 1.8GHz Intel Core i7 with 4 GB of memory, which was a high-end specification for the time. As it has lasted over 10 years and is only now beginning to struggle with everyday workloads (not ML), I feel the high-end-spec policy paid off, so this time I have gone for an M2 Max with 96 GB.

I suspect the point made in another reply about it not being the best solution for serious training is well taken, and installation of PyTorch, TensorFlow, and Transformers is proving much trickier than I had hoped, but it seems to be performing well on basic vector operations such as cosine similarity. I expect future applications will use this kind of processing, and I'm hoping this MacBook won't run out of steam for them for another 10 years.

I guess Docker and K8s would be no problem, and small-scale training might be OK. But it's a bit of a "finger in the air" decision. Good luck with whatever you decide on!
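For reference, the cosine-similarity workload mentioned above is simple enough to sketch without any framework. A minimal pure-Python version follows (libraries like NumPy or PyTorch vectorize the same formula across thousands of embeddings at once, which is where the GPU earns its keep):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # parallel vectors
```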

  • BTW, I installed TensorFlow and checked the installation as per https://developer.apple.com/metal/tensorflow-plugin/. Running the check program produced the following result:

    Epoch 1/5
    2023-06-08 10:27:44.586233: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
    782/782 [==============================] - 47s 50ms/step - loss: 4.9358 - accuracy: 0.0682

    . . .
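A quicker sanity check than the full training run is simply asking TensorFlow whether it sees the Metal GPU at all. A small guarded sketch (the function name is my own; it returns None when TensorFlow isn't installed):

```python
def list_gpu_devices():
    """Return the names of GPUs TensorFlow can see, or None if TF is absent."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    # With the tensorflow-metal plugin installed, this should include
    # one GPU entry on an M2 Max; an empty list means CPU-only.
    return [d.name for d in tf.config.list_physical_devices("GPU")]


print(list_gpu_devices())
```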
