LLM size for fine-tuning using MLX on a MacBook

Hi, I recently tried to fine-tune the Gemma-2-2b MLX model on my MacBook (24 GB unified memory). The code started running, but after a few seconds swap usage reached 50 GB with RAM at around 23 GB, and then the run stopped. By comparison, I ran Gemma-2-2b (CUDA) on Colab; it occupied 27 GB on an A100 GPU and worked fine, with no swap issue.
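One thing I looked into is MLX's memory limit, since by default allocations can spill into swap rather than fail. A minimal sketch of capping it, assuming a recent MLX where set_memory_limit lives at the top level (older releases exposed it as mx.metal.set_memory_limit); the 20 GiB cap is just illustrative for a 24 GB machine:

import mlx.core as mx

# Assumption: recent MLX versions expose set_memory_limit at the top
# level; older releases used mx.metal.set_memory_limit instead.
# Keeping the cap under the 24 GB of unified memory encourages MLX to
# stay in physical memory rather than spill into swap (the exact
# behavior when the limit is hit depends on the MLX version).
mx.set_memory_limit(20 * 1024**3)  # illustrative ~20 GiB cap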

My question: if my unified memory had been larger than 27 GB, would I have avoided the swap issue?
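For a back-of-envelope sense of the 27 GB I saw on the A100, under the loud assumption that this is full fine-tuning with Adam in mixed precision, roughly 12 bytes per parameter (bf16 weights and gradients plus fp32 optimizer moments) before counting activations:

# Rough estimate only; 12 bytes/parameter is an assumption
# (bf16 weights: 2, bf16 grads: 2, fp32 Adam moments: 8),
# and activations add more on top of this.
params = 2.6e9          # Gemma-2-2b has roughly 2.6B parameters
bytes_per_param = 12
print(f"{params * bytes_per_param / 2**30:.1f} GiB")  # ~29 GiB

By that rough count a 24 GB machine would have to swap, which matches what I saw.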

Thanks.

Answered by DTS Engineer in 861478022

Hello,

The MLX folks are requesting that you create an issue in the MLX GitHub repo with steps to reproduce the problem, so that they can debug it.

We'd greatly appreciate it if you posted any solutions back here on the developer forums.
