Multi-GPU Training on a Single-GPU System in 3 Minutes | by Sascha Kirch | May 2023
I guess the problem is obvious, and you have probably experienced it yourself: you want to train a deep learning model and take advantage of multiple GPUs, a TPU, or even multiple workers for some extra speed or a larger batch size. But of course you cannot (let's say should not, because I've seen it quite often 😅) block the usually shared hardware just for debugging, or spend a ton of money on a paid cloud instance.

Let me tell you: it is not important how many physical GPUs your system has, but rather how many logical devices your framework sees.
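As a minimal sketch of that idea (assuming TensorFlow's virtual-device API; the exact setup in the full article may differ), one physical device can be split into two logical devices, so that `tf.distribute.MirroredStrategy` behaves as if two GPUs were present. The fallback to CPU is my own addition, so the trick can even be tried on a machine with no GPU at all:

```python
import tensorflow as tf

# Pick a GPU if one exists; otherwise fall back to the CPU so the same
# trick can be tried on a GPU-less machine (e.g. for debugging).
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Two virtual GPUs, each capped at 1 GB of the physical card's memory.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)],
    )
    logical = tf.config.list_logical_devices("GPU")
else:
    cpu = tf.config.list_physical_devices("CPU")[0]
    tf.config.set_logical_device_configuration(
        cpu,
        [tf.config.LogicalDeviceConfiguration(),
         tf.config.LogicalDeviceConfiguration()],
    )
    logical = tf.config.list_logical_devices("CPU")

print(f"framework now sees {len(logical)} logical devices")

# MirroredStrategy replicates the model across all logical devices it is given,
# exactly as it would across real GPUs.
strategy = tf.distribute.MirroredStrategy(devices=[d.name for d in logical])
print("replicas in sync:", strategy.num_replicas_in_sync)
```

Note that `set_logical_device_configuration` must run before the device is initialized, so this belongs at the very top of your script.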