Solving sparse matrix on GPU and memory problems (even with free memory available)

Hi there,
I am running a backslash computation A\b, where A is a sparse matrix and b is a vector, both stored as gpuArrays. It works fine for small matrices, but the following error is given for a matrix A of order 2e5:
Error using \
The GPU failed to allocate memory. To continue, reset the GPU by running 'gpuDevice(1)'. If this problem persists, partition your computations into smaller pieces.
KGR_gpu = gpuArray(KGR);
FR_gpu = gpuArray(FR);
sol = KGR_gpu\FR_gpu;
This same computation works fine when gpuArrays are not used. In fact, A is only about 50 MB, and the GPU is an NVIDIA GTX 1050 with 4 GB of memory. Following is the result of a gpuDevice call after this error:
CUDADevice with properties:
Name: 'GeForce GTX 1050'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 9.2000
ToolkitVersion: 8
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 4.2950e+09
AvailableMemory: 3.3949e+09
MultiprocessorCount: 5
ClockRateKHz: 1493000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
It seems that this is an issue with the backslash operator, since both A and b can be stored on the GPU without problems.
Any thoughts on how to solve large sparse systems on the GPU? Thanks!
Regards, Paulo

Answers (2)

Walter Roberson
Walter Roberson on 21 May 2018
The result of \ between two sparse arrays is generally a dense array. For example, sprand(1000,1000,.01) \ sprand(1000,1000,.01) gave me a result with a fill fraction of 0.997031.
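For reference, a minimal sketch of that experiment (same densities as above; the exact fill fraction will vary with the random matrices, and MATLAB may warn that the random A is badly conditioned):
A = sprand(1000,1000,.01);      % 1% nonzeros
B = sprand(1000,1000,.01);
X = A\B;                        % still stored as sparse, but almost completely filled in
nnz(X)/numel(X)                 % close to 1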
  2 Comments
Paulo Ribeiro
Paulo Ribeiro on 21 May 2018
Dear Walter, thanks. But b is a vector, so it's a sparse matrix A and a vector b, both stored on the GPU. Perhaps this specific GPU is not able to perform these operations on double arrays? I know that single-precision performance is great, but single can't be used with sparse arrays.
Walter Roberson
Walter Roberson on 21 May 2018
>> t = sprand(1000,1000,.01) \ sprand(1000,1,.01); nnz(t)./numel(t)
ans =
1
>> whos t
  Name      Size            Bytes  Class     Attributes
  t         1000x1          16016  double    sparse
Notice this is twice the storage size that would be required for a non-sparse array with the same number of elements, due to the overhead of storing sparse arrays.
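To see that overhead directly, one could compare against a full copy of the same vector (a quick illustrative check):
tFull = full(t);
whos t tFull      % sparse t: 16016 bytes; full tFull: 8000 bytes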



Joss Knight
Joss Knight on 21 May 2018
Hi Paulo. I must admit I'm not extremely familiar with the behaviour of the matrix factorization we use to implement the sparse direct solve; however, it wouldn't surprise me if the result is quite dense. It might be interesting to look (on the CPU) at the density of the QR factors for your particular input, using the qr function. Certainly, when I did it on a random matrix with 10% fill, the Q factor was nearly completely dense and the R factor was 50%.
>> A = sprand(1000, 1000, 0.1);
>> [Q,R] = qr(A);
>> nnz(Q)/numel(Q)
ans =
0.9903
>> nnz(R)/numel(R)
ans =
0.5005
For LU, both factors are 50% dense.
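For what it's worth, a sketch of the analogous check for LU, using the same random A as above:
[L,U] = lu(A);
nnz(L)/numel(L)   % about 0.5 for this example
nnz(U)/numel(U)   % about 0.5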
Obviously random sparse matrices don't properly reflect the structure of real sparse matrices, so your problem would be different. But it's not unreasonable to surmise that the intermediate factors might be very large.
To circumvent such problems, the usual approach is to use an iterative solver such as gmres, bicg, pcg, cgs, or lsqr. It is not uncommon for these to converge more quickly than the direct solve, especially if you can give them a good preconditioner.
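As a rough illustration (not code from the answer), here is what the iterative route might look like for the system in the question, assuming KGR is symmetric positive definite as pcg requires; the tolerance and iteration limit are placeholders to tune:
KGR_gpu = gpuArray(KGR);
FR_gpu  = gpuArray(FR);
tol     = 1e-8;      % placeholder tolerance
maxit   = 500;       % placeholder iteration limit
[sol, flag, relres, iter] = pcg(KGR_gpu, FR_gpu, tol, maxit);
If KGR is not symmetric positive definite, gmres or bicgstab would be the analogous call.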
  9 Comments
Paulo Ribeiro
Paulo Ribeiro on 12 Jun 2018
Thanks Joss,
For this specific case, pcg performs better; no further improvement was achieved using a preconditioner. What still amazes me is that a 0.20 GB matrix can't be solved with the backslash operator on a nominally 4 GB GPU. On the other hand, while monitoring the CPU version of this code (in the Windows Task Manager), I can see that MATLAB uses almost 4 GB of RAM during the backslash operation. That might explain something.
I wonder how far you can get even with high-end GPUs.
Regards,
Paulo
Walter Roberson
Walter Roberson on 12 Jun 2018
I seem to recall that some memory is required to marshal the data on the GPU, which stores arrays in a different order than MATLAB does. I do not recall the details at the moment, but potentially you might not be able to create an output array larger than half of your GPU memory.
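A quick way to sanity-check this is to query the device's free memory around the solve (AvailableMemory is the same property shown in the gpuDevice output above; gpuDevice with no index only queries the current device, it does not reset it):
g = gpuDevice;                            % query only; does not reset the device
fprintf('Free before: %.2f GB\n', g.AvailableMemory/1e9);
sol = KGR_gpu\FR_gpu;                     % the operation being measured
wait(g);                                  % let the GPU finish before re-checking
g = gpuDevice;
fprintf('Free after:  %.2f GB\n', g.AvailableMemory/1e9);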

