Training NN with single precision data on GPU

I am trying to use fitnet to train a network on my GPU using single-precision input data (X and T). However, this always returns an error, which starts with:
"Error using nnGPUOp.bg (line 134) Variable 'perfs1' changed type. Consider renaming variable on left hand side of assignment."
This only seems to be a problem when using single-precision data AND the GPU. When I train using double-precision on GPU, it works fine, and when I use single- or double-precision data on the CPU, it also works fine.
Anyone found a way around this?
  3 Comments
Cameron Lee on 31 Jan 2020
Edited: Cameron Lee on 31 Jan 2020
Hi Raunak... Thanks for addressing this issue. Here is some code. Obviously I don't use random x and t variables, but nonetheless, this throws the same error. Notice that if you leave x and t as double precision, it works fine. Further, if run on the CPU rather than the GPU, it will also work fine with either single- or double-precision x and t variables (but will take quite a bit longer). Ideally, I want this to work on the GPU with single-precision data, as my Titan RTX GPUs are best equipped to process that data type. I am using MATLAB Version 9.7.0.1261785 (R2019b) Update 3 and all the updated toolboxes.
neurons = 10;
xvars = rand(700000,6);
yvar = rand(700000,1);
% CHANGING THEM TO SINGLE-PRECISION DATA-TYPE DOES NOT WORK
% (THROWS ERROR: "Error using nnGPUOp.bg (line 134)
% Variable 'perfs1' changed type. Consider renaming variable on left hand side
% of assignment.")
x = single(xvars');
t = single(yvar');
% LEAVING THEM AS DOUBLE-PRECISION DATA-TYPE WORKS FINE
% x = xvars';
% t = yvar';
trainFcn = 'trainscg';
net = fitnet(neurons,trainFcn);
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};
net.trainParam.showWindow = 0;
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 60/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 20/100;
net.trainParam.max_fail = 10;
net.performFcn = 'mse'; % Mean Squared Error
net.trainParam.epochs = 100;
[net,tr] = train(net,x,t,'useGPU','yes');
y = net(x)';


Accepted Answer

Raunak Gupta on 19 Feb 2020
Edited: Raunak Gupta on 19 Feb 2020
Hi,
Single-precision GPU training can only be done in the 'nnGPU' calculation mode. By default, train uses 'nnGPUOp', which does not support single-precision GPU training.
As a workaround, you can do single-precision GPU training in either of the two ways below:
  • You can use the nndata2gpu function:
% Here x,t are original double precision data
net = configure(net,x,t);
sx = nndata2gpu(x,'single');
st = nndata2gpu(t,'single');
[net,tr] = train(net,sx,st,'useGPU','yes');
  • You can specify single precision GPU training:
% Here x,t are single precision data
[net,tr] = train(net,x,t,nnGPU('precision','single'));
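Putting the first workaround together with the example code from the question, a minimal end-to-end sketch might look like this (gpu2nndata is the documented inverse of nndata2gpu and converts the prediction back to ordinary MATLAB data):
% Sketch: single-precision GPU training via nndata2gpu
% x, t, and net are the double-precision data and network from above
net = configure(net,x,t);     % fix network sizes from representative data
sx = nndata2gpu(x,'single');  % upload inputs as single-precision GPU data
st = nndata2gpu(t,'single');  % upload targets the same way
[net,tr] = train(net,sx,st,'useGPU','yes');
gy = net(sx);                 % prediction comes back in GPU data format
y = gpu2nndata(gy);           % convert back to standard NN data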
Hope it helps.
  3 Comments
Raunak Gupta on 20 Feb 2020
Hi Cameron,
The speed-up will not happen because using single precision instead of double precision decreases the memory used by the GPU, which does not by itself translate into speed. Instead, if you have more available memory, increasing the batch size (in the case of deep neural networks such as CNNs) may speed up the code.
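As a hedged illustration of that batch-size suggestion for deep networks (not fitnet; MiniBatchSize and ExecutionEnvironment are standard trainingOptions settings, and the value here is arbitrary):
% Sketch for deep networks: spend the freed GPU memory on larger
% mini-batches, which is where the speed-up would typically come from
opts = trainingOptions('sgdm', ...
    'MiniBatchSize',512, ...          % arbitrary example value
    'ExecutionEnvironment','gpu');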
Cameron Lee on 21 Feb 2020
Hi Raunak,
I appreciate the suggestion, but this still does not make sense to me. I understand/agree that single-precision data requires the GPU to use less memory, but doesn't it also mean that each individual calculation should proceed faster, given that less precision (and less memory) is required in each operation? In terms of TFLOPS, according to Nvidia's specs, my GPUs should perform MUCH faster (about 30x the speed) on single-precision data than on double-precision data. Indeed, using gpuBench (https://www.mathworks.com/matlabcentral/fileexchange/34080-gpubench), my GPU performs anywhere from 8x (Backslash test) to 28x (MTimes test) faster with single-precision data than with double-precision data. The only explanation I have is that all of the training is STILL being done in double precision (and evidence for this might be that y, from the final line of my example code above, and net.IW are still output as double-precision data types even after using your solutions). This seems like a pretty important drawback to using MATLAB for shallow networks.
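For anyone wanting to reproduce that check, a quick sketch using MATLAB's standard class function to inspect a variable's type:
% Inspect what precision actually comes out after training
class(y)           % per the observation above: 'double', even after the workarounds
class(net.IW{1})   % input weight matrix is likewise stored as double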


More Answers (0)
