multiple gpu slow?

Tai-Wu Chiang on 6 May 2017
Commented: Marco Francini on 23 Sep 2019
I just got a machine with four 1080 Ti GPUs and I want to see how fast it can go, so I ran the demo DeepLearningRCNNObjectDetectionExample.m with various ExecutionEnvironment settings in trainingOptions. With 'gpu' it runs at about 0.6 s per mini-batch. With 'multi-gpu' it created 4 workers (one per GPU), but each mini-batch then took about 4 s. Why is the 'multi-gpu' option roughly 7x slower than the single-GPU option? Is it a bug, or something else? BTW, I use MATLAB R2017a on Windows Server 2016 with the CUDA 8.0 toolkit.
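For reference, this is roughly how I switched the execution environment in the demo (a minimal sketch; I left the other options at the demo's defaults):

% Single-GPU run: about 0.6 s per mini-batch on my machine
options = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu');

% Multi-GPU run: 4 workers were created (one per GPU), but each mini-batch took about 4 s
options = trainingOptions('sgdm', 'ExecutionEnvironment', 'multi-gpu');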

Answers (2)

Birju Patel on 8 May 2017
Edited: Birju Patel on 8 May 2017
Hi,
This is due to a limitation in NVIDIA's GPU-to-GPU communication on Windows. If you have the option, consider using Linux for multi-GPU training instead. If you stay on Windows, increase the MiniBatchSize from 128 to something like 1024: this gives each GPU more work per iteration while reducing the relative communication cost, so overall utilization improves.
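For example, a minimal sketch of the relevant options (the solver and exact values are illustrative; keep the rest of the demo's settings as they are):

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 1024, ...            % more work per GPU per iteration, fewer gradient exchanges
    'ExecutionEnvironment', 'multi-gpu'); % one parallel worker per local GPU
% You may also need to raise 'InitialLearnRate' to compensate for the larger mini-batch.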
See the following reference page for additional details about the MiniBatchSize:
https://www.mathworks.com/help/nnet/ref/trainingoptions.html#namevaluepairarguments
You can read more about getting the best performance from multi-GPU training here:
https://www.mathworks.com/content/dam/mathworks/tag-team/Objects/d/Deep_Learning_in_Cloud_Whitepaper.pdf
  6 Comments
Birju Patel on 15 May 2017
Hi Tai-Wu,
There are two network training calls in the demo: the first calls trainNetwork, the second calls trainRCNNObjectDetector.
Which one are you reporting utilization numbers for? For example, do you see 50% usage for 1 GPU for trainNetwork or trainRCNNObjectDetector?
You can comment out the call to trainRCNNObjectDetector and re-run the demo to figure this out.
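You can also watch the per-GPU utilization while each call runs, for example with nvidia-smi (assuming it is on your system path); run it from a separate command prompt, or prefix it with ! in MATLAB:

% One-off snapshot of per-GPU utilization and memory
!nvidia-smi
% Refresh the report every second while training runs in another session
!nvidia-smi -l 1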
Marco Francini on 23 Sep 2019
Hi Birju Patel,
I am setting up a new machine with multiple GPUs for CNN training. Would you suggest I install Windows or Linux as the OS? And do you know whether there have been any improvements on Windows since 2017 that resolve this "issue"?



Marco Francini on 4 Sep 2017
I also have this issue on a system with two GTX 1080 Ti GPUs. I use transfer learning (AlexNet) for my application, following https://www.mathworks.com/content/dam/mathworks/tag-team/Objects/d/Deep_Learning_in_Cloud_Whitepaper.pdf, with an ImageDatastore reading from an SSD.
The number of images per second the system can process during training with 2 GPUs is half of what it can do with 1 GPU! Looking at the GPU load with GPU-Z, I see that with 2 GPUs the utilization continuously jumps between 40% and 0%, while with one GPU it stays above 50%.
I use Windows 10 Enterprise with NVIDIA driver 385.41 (385.41-desktop-win10-64bit-international-whql.exe) installed.
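For reference, here is a minimal sketch of the kind of transfer-learning setup I mean (the data path, the on-the-fly resizing, and the layer indices are illustrative; images are assumed to be RGB):

% Labeled images in subfolders on the SSD, resized on read to AlexNet's 227x227 input
imds = imageDatastore('D:\data\myImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
imds.ReadFcn = @(file) imresize(imread(file), [227 227]);

% Replace the final layers of AlexNet for the new classes
net = alexnet;                                    % requires the AlexNet support package
layers = net.Layers;
numClasses = numel(categories(imds.Labels));
layers(23) = fullyConnectedLayer(numClasses);     % replaces 'fc8'
layers(25) = classificationLayer;                 % new classification output

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 256, ...
    'ExecutionEnvironment', 'multi-gpu');

trainedNet = trainNetwork(imds, layers, options);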
