Speeding up calculation of thousands of small matrices with CUDA GPU - at the moment, it's slower than CPU...

I have a 3.0 compute capability GPU in my computer, and the parallel processing toolbox.
My current code runs significantly faster on the CPU, even without parfor or spmd, than it does on the GPU. You can run the attached code if you would like to try it.
My question is: how can I make this faster on the GPU, if a GPU is even the right tool for this kind of problem? I have looked at arrayfun and vectorization (I suspect the code is as vectorized as it's going to get) and glanced at writing CUDA kernels.
Two primary points:
1. I think CUDA/GPUs are better suited to a small number of operations on enormous matrices (operating on themselves, such as x = x*x, where size(x) > 1000). But as you can see, my code performs thousands of operations on many different small matrices.
2. In this particular case, there are only 6 elements that I need to change (5000 times). Everything else stays the same.
Thank you for your help.
%%definitions
gm = 6e6*2*pi;
llimit=-.01;
ulimit=-llimit;
step=2*ulimit;
p=llimit:step/5000:ulimit;
%%vector
B=ones(256,1);
%%matrix
M = rand(256,256);
% comment for quick disabling of gpu arrays to compare to CPU speed
p = gpuArray(p);
B = gpuArray(B);
M = gpuArray(M);
gm = gpuArray(gm);
C=gpuArray(0);
R = C;
Q = gpuArray.zeros(256,256);
% comment above for quick disable
Delta=p*2*pi*1e6;
tic;
for n=1:length(p),
Q(3,3) = -1i*(Delta(n)/2)-gm/2;
Q(4,4) = 1i*(Delta(n)/2)-gm/2;
Q(5,5) = -1i*(Delta(n)/2)-gm/2;
Q(6,6) = 1i*(Delta(n)/2)-gm/2;
Q(7,7) = -1i*Delta(n);
Q(8,8) = 1i*Delta(n);
Md = M+Q;
C = Md\B;
R(n) = real(C(2)); % C(2) = excited state pop rho_33
end
toc;
figure;
plot(p, gather(R))

2 Comments

Another thing to be aware of is that some compute capability 3.0 devices are not designed for fast double-precision arithmetic. They only perform well in single precision.
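For example, you could try casting the inputs to single before creating the gpuArrays, something like the sketch below (gm can stay double; MATLAB promotes mixed single/double arithmetic to single):
```matlab
% Sketch: move the data to the GPU in single precision
M = gpuArray(single(M));
B = gpuArray(single(B));
p = gpuArray(single(p));
Delta = p*2*pi*1e6;   % Delta inherits single precision from p
```
Whether this helps depends on the card; it is only worth it if single precision is acceptable for your results.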
Thank you, this did help, as did some of the comments below, but alas, the CPU remains faster.


 Accepted Answer

It doesn't look well-suited to the GPU to me. The GPU is meant for many parallel computations each requiring a small total amount of data. It's true that each of your tasks involves a small amount of new data, but there is still a large amount of additional, old data in the computation (the data in the matrix M).
PARFOR on the CPU would be the best bet, I'd say. It would help, though, if you preallocated R to its full intended length, length(p).
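A minimal sketch of that approach (assuming a worker pool is open; R is sliced over n, so the loop is parfor-compatible, and M, B, and gm are broadcast to the workers):
```matlab
R = zeros(1, length(p));              % preallocate R to its full length
Delta = p*2*pi*1e6;
parfor n = 1:length(p)
    Q = zeros(256,256);               % each worker builds its own Q
    Q(3,3) = -1i*(Delta(n)/2) - gm/2;
    Q(4,4) =  1i*(Delta(n)/2) - gm/2;
    Q(5,5) = -1i*(Delta(n)/2) - gm/2;
    Q(6,6) =  1i*(Delta(n)/2) - gm/2;
    Q(7,7) = -1i*Delta(n);
    Q(8,8) =  1i*Delta(n);
    C = (M + Q)\B;
    R(n) = real(C(2));                % excited state pop rho_33
end
```
The 5000 independent 256x256 solves are exactly the kind of coarse-grained work that parfor distributes well.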

7 Comments

Thank you. Yes, R is actually preallocated; the code I posted just has that mistake in it. It makes a negligible difference.
So, are you saying that the old data remains in the GPU computation and slows it down? E.g., the non-changing values of M are causing a problem? I thought that would be the least of the issues.
Well, the bottleneck of the computation is C=Md\B, and that's going to require a 256x256 linear solve regardless of the fact that you're changing only a few elements per n. After some more thought, though, I would have expected gpuArray's mldivide() method to give you some acceleration. Maybe Jill's point about doubles versus singles is the reason.
Aside from that, though, there is more you can do to optimize the computation. For one thing, there is no need to create and add a whole additional matrix Q on every iteration:
iDelta = 1i*Delta/2;
diagIdx = @(k) (k-1)*size(M,1) + k;  % linear indices of diagonal entries (k,k)
Md = M;
tic;
for n = 1:length(p)
    const0 = iDelta(n);
    const1 =  const0 - gm/2;
    const2 = -const0 - gm/2;
    Md(diagIdx([4 6])) = M(diagIdx([4 6])) + const1;
    Md(diagIdx([3 5])) = M(diagIdx([3 5])) + const2;
    Md(diagIdx(7)) = M(diagIdx(7)) - 2*const0;  % -1i*Delta(n)
    Md(diagIdx(8)) = M(diagIdx(8)) + 2*const0;  %  1i*Delta(n)
    C = Md\B;
    R(n) = real(C(2)); % C(2) = excited state pop rho_33
end
toc;
Thank you. Changing to single precision did increase the speed, but the CPU was still faster, using double precision.
Also, your change to the code for multiple assignments saved a good deal of time, too. Thank you for that. Post hoc, it seems obvious.
Currently, given these changes, the GPU code is on average 2 seconds slower than the CPU. I am not sure what else I can do. It is possible the GPU is not the right tool for this speedup.
Definitely. I thought sparse was not supported on GPU anyway.
Well, if it were sparse, you might not have needed the GPU, or even the Parallel Computing Toolbox.
Anyway, PARFOR on the CPU seems like the more sensible way to parallelize this. It's unclear how much speed-up in Md\B to expect on the GPU. I assume gpuArray's MLDIVIDE method is parallelized similarly to the way it is multi-threaded on the CPU for ordinary matrices, and it's not clear why parallelizing MLDIVIDE on the GPU should be any better than parallelizing it on the CPU.


More Answers (0)

Asked on 21 May 2013
