Strange performance of MATLAB CUDA on matrices. Any idea?

ehsan monfared on 10 Dec 2014
Commented: ehsan monfared on 12 Dec 2014
I have recently been using MATLAB's CUDA support (gpuArray) for some very simple matrix calculations on the GPU, but the performance results are very strange. Could anybody help me understand what exactly is going on and how I can solve the issue? Thanks in advance. Note that the following code was run on a GeForce GTX TITAN Black GPU.
Assume a0, a1, ..., a6 are 1000x1000 gpuArrays, U = 0.5 and V = 0.0:
    titan = gpuDevice(); tic();
    for i=1:10000
        a6(1,1)=(0.5.*(a5(1,1)-a0(1,1)))-(a1(1,1)+a2(1,1)+a3(1,1))-(a5(1,1).*U./3.0)-(a5(1,1).*V./2.0)+(0.25.*a5(1,1).*a4(1,1));
    end
    wait(titan); time = toc()

The result: time = 17.98 seconds.
Now, redefining a0, a1, ..., a6, U and V as ordinary (CPU) arrays and timing the same loop:

    tic();
    for i=1:10000
        a6(1,1)=(0.5.*(a5(1,1)-a0(1,1)))-(a1(1,1)+a2(1,1)+a3(1,1))-(a5(1,1).*U./3.0)-(a5(1,1).*V./2.0)+(0.25.*a5(1,1).*a4(1,1));
    end
    time = toc()

The result: time = 0.0098 seconds.
That is more than 1800 times faster on the CPU!
I then decided to perform the calculation on the whole matrix rather than on a single element. Here are the results:
Results for the run on the GPU:

    titan = gpuDevice(); tic();
    for i=1:10000
        a6=(0.5.*(a5-a0))-(a1+a2+a3)-(a5.*U./3.0)-(a5.*V./2.0)+(0.25.*a5.*a4);
    end
    wait(titan); time = toc()

The result: time = 6.32 seconds, which means that the operation on the whole matrix is much faster than on a single element!
Results for the run on the CPU:

    tic();
    for i=1:10000
        a6=(0.5.*(a5-a0))-(a1+a2+a3)-(a5.*U./3.0)-(a5.*V./2.0)+(0.25.*a5.*a4);
    end
    time = toc()

The result: time = 35.2 seconds.
And here is the most surprising result: taking a0, a1, ..., a6, U and V to be just 1x1 gpuArrays and running the following:

    titan = gpuDevice(); tic();
    for i=1:10000
        a6=(0.5.*(a5-a0))-(a1+a2+a3)-(a5.*U./3.0)-(a5.*V./2.0)+(0.25.*a5.*a4);
    end
    wait(titan); time = toc()

The result: time = 7.8 seconds. It is even slower than the corresponding 1000x1000 case!
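For what it's worth, comparisons like these are easier to make reliably with gputimeit (available since R2013b), which handles device synchronization and warm-up automatically; the tic/toc pattern above is only correct because of the wait(titan) call. A sketch, assuming the same whole-matrix gpuArrays as above:

```matlab
% Time the whole-matrix GPU expression. gputimeit runs the function
% several times, discards warm-up runs, and synchronizes with the
% device before stopping the clock.
t = gputimeit(@() (0.5.*(a5-a0)) - (a1+a2+a3) ...
    - (a5.*U./3.0) - (a5.*V./2.0) + (0.25.*a5.*a4));
```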
Unfortunately the line

    a6(1,1)=(0.5.*(a5(1,1)-a0(1,1)))-(a1(1,1)+a2(1,1)+a3(1,1))-(a5(1,1).*U./3.0)-(a5(1,1).*V./2.0)+(0.25.*a5(1,1).*a4(1,1));

is just one of roughly 100 lines, all inside a single for-loop, and this line proved to be a real bottleneck, taking about 50% of the total computation time! Could anybody help me? Note that moving this part of the calculation to the CPU is not an option: the line is inside a for-loop, and sending a1, ..., a6 to the CPU and copying the results back to the GPU on each iteration is even more time-consuming. Any advice is really appreciated.
ehsan monfared on 12 Dec 2014
Joss, I am off-campus right now. I will test all the suggestions and let you know the results in a week. Once again I have to thank you for the great help.

matt dash on 10 Dec 2014
Edited: matt dash on 10 Dec 2014
Timing GPU functions is a very tricky business. You should read up on GPU occupancy, block sizes, and all that good stuff. The short story is that more data does not always mean longer computation times: every GPU operation pays a fixed launch and dispatch overhead, and for tiny inputs that overhead dominates the actual arithmetic.
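That launch overhead is also why the element-wise loop is so slow: each of the roughly ten gpuArray operations per iteration is a separate device launch. One standard way to cut the number of launches is to fuse the whole expression into a single JIT-compiled kernel with arrayfun on gpuArray inputs. A sketch (the handle name stepFcn is mine, not from the thread; U and V can stay as ordinary CPU scalars thanks to scalar expansion):

```matlab
% Fuse ~10 elementwise gpuArray operations into one kernel launch.
% arrayfun on gpuArray inputs compiles the function body for the GPU.
stepFcn = @(a0,a1,a2,a3,a4,a5,U,V) (0.5.*(a5-a0)) - (a1+a2+a3) ...
    - (a5.*U./3.0) - (a5.*V./2.0) + (0.25.*a5.*a4);

a6 = arrayfun(stepFcn, a0, a1, a2, a3, a4, a5, U, V);  % single launch
```

This only helps element-wise expressions like the one in the question; anything involving matrix products or indexing into other elements cannot go through gpuArray arrayfun.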
Also, if you are really concerned with performance, you should write your calculation in a .cu file, compile it to PTX, and call that from MATLAB instead of relying on MATLAB expressions. Read and implement the demo described here to see how much of a difference this makes: Mandelbrot Set Demo
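The MATLAB side of that workflow might look like the following sketch. The kernel name, file names, and signature are illustrative assumptions, not taken from the demo:

```matlab
% Assumed CUDA kernel in update.cu, compiled beforehand with: nvcc -ptx update.cu
%   __global__ void update(double *a6, const double *a0, const double *a1,
%                          const double *a2, const double *a3, const double *a4,
%                          const double *a5, double U, double V, int n)
k = parallel.gpu.CUDAKernel('update.ptx', 'update.cu');
k.ThreadBlockSize = [256 1 1];
k.GridSize = [ceil(numel(a0)/256) 1];

% One launch does the whole update; a6 stays on the device between calls.
a6 = feval(k, a6, a0, a1, a2, a3, a4, a5, U, V, numel(a0));
```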