Setting up a home GPU deep-learning server (the easiest way for Mac users)
This article outlines how to turn an old desktop PC with a GPU into a remote GPU server for deep learning and other tasks. This works well in my setup, where I want to do all my work from an M1 Mac running macOS. The PC with the GPU sits in a different room and does all the heavy lifting in the background. It takes 2–3 hours to set up if all goes well.
Apple M1 is unfortunately not yet ready for deep learning
I currently use macOS as my main operating system, on an M1 Mac Mini and an M1 MacBook Air.
I spent a number of weeks running various machine learning tasks on the M1 GPUs. I used TensorFlow in particular, as it is currently better supported on Apple silicon; PyTorch for Apple silicon is still under development.
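For reference, the Apple-supported route at the time of writing installs TensorFlow through the tensorflow-macos and tensorflow-metal packages (package names may change; check Apple's developer documentation), after which a one-liner confirms whether the GPU is visible:

```shell
# Install TensorFlow with Apple's Metal GPU plugin (Apple silicon only)
pip install tensorflow-macos tensorflow-metal

# List GPU devices; an empty list means the Metal plugin is not active
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If the second command prints an empty list, training will silently fall back to the CPU, which is often the source of the erratic benchmark numbers people report.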
There were two big problems with using the Apple M1.
Firstly, performance is erratic. There are many articles and YouTube videos on M1 GPU performance for training ML models. My experience was that some tasks were surprisingly fast (particularly small models), but deep, large models ran around 10x slower than on a decent PC GPU.
Secondly, and more importantly, a lot of time is currently wasted fiddling to get the setup working on Apple. While TensorFlow for Mac is now released and works fine, many packages built on top of it don't work out of the box, and you have to hunt for compatible version numbers. It's possible, but it's a fiddle, and time goes on setup rather than the real work.
However, I much prefer working in the macOS UI and do not want to keep switching between Linux and macOS, as that would also slow me down. So how do I get the best of both worlds? The Apple UI and the Linux GPU ML ecosystem …