MATLAB Answers


Why can I not use my GPU within MATLAB for GPU computing?

When trying to use gpuDevice, gpuArray, or any GPU function from the Parallel Computing Toolbox in MATLAB, I receive errors suggesting that my GPU cannot be detected by MATLAB.
"There is a problem with the CUDA driver or with this GPU device. Be sure that you have a supported GPU and that the latest driver is installed.
Caused by:
No supported GPU device was found on this computer. To learn more about supported GPU devices, see <>"
"There is a problem with the CUDA driver or with this GPU device. Be sure that you have a supported GPU and that the latest driver is installed.
Caused by:
The CUDA driver could not be loaded. The library name used was **. The error was: The specified module could not be found."
** = 'nvcuda.dll' (Windows), '/usr/local/cuda/lib/libcuda.dylib' (Mac OS), '' (Linux).



2 Answers

Answer by MathWorks Support Team on 23 Sep 2016
 Accepted Answer

1.    Make sure your Parallel Computing Toolbox license works
In MATLAB you can run the following command to check your license:
license checkout Distrib_Computing_Toolbox
You will see the output "ans = 1" if the license is working. Otherwise, you will see a license manager error; searching for that error message should point you to a solution for your issue.
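The same check can also be scripted with the functional form of license; this is a minimal sketch, assuming a standard Parallel Computing Toolbox installation:

```matlab
% Functional form of the license check shown above.
% Returns 1 if a Parallel Computing Toolbox license can be checked out.
status = license('checkout','Distrib_Computing_Toolbox');
if status == 1
    disp('Parallel Computing Toolbox license is working.');
else
    disp('License checkout failed; check the license manager error.');
end
```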
2.    Check that you have a supported GPU card: see the NVIDIA website (<>) to look up its compute capability. Compute capability must be 1.3 or higher for R2014a and earlier, 2.0 or higher for R2014b and later, and 3.0 or higher for convolutional neural networks (introduced in R2016a).
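If MATLAB can see the device at all, the compute capability can also be queried from MATLAB itself; a sketch (assuming at least one device is detected) using gpuDeviceCount and gpuDevice:

```matlab
% List each detected GPU and its compute capability.
for idx = 1:gpuDeviceCount
    d = gpuDevice(idx);
    fprintf('Device %d: %s, compute capability %s\n', ...
        idx, d.Name, d.ComputeCapability);
end
```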
3.    Update your NVIDIA graphics driver to the latest version for your card (<>). It is not necessary to download the CUDA SDK for MATLAB's built-in GPU functionality; the CUDA SDK is required only if you wish to compile your own CUDA code to use with MATLAB.
4.    If you still cannot access your CUDA-enabled device from within MATLAB, check whether the device is correctly identified by your operating system using a program such as nvidia-smi. If it is not, seek advice from your sysadmin or NVIDIA support.
To run NVIDIA-smi:
Windows: Navigate to "C:\Program Files\NVIDIA Corporation\NVSMI\" and execute nvidia-smi in a command window.
Linux: Run nvidia-smi from a terminal window. It is normally installed into /usr/bin/.
Mac: Not available.
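As a sketch, the check in step 4 can be scripted on Windows or Linux; the --query-gpu flags below are standard nvidia-smi options, and the guard makes the script safe to run on machines where the tool is absent:

```shell
# Query each GPU's name and driver version if nvidia-smi is on the PATH.
GPU_QUERY="nvidia-smi --query-gpu=name,driver_version --format=csv"
if command -v nvidia-smi >/dev/null 2>&1; then
    $GPU_QUERY
else
    echo "nvidia-smi not found on PATH; check your driver installation."
fi
```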
5.    In some environments, such as computing clusters, access to CUDA-capable devices is handled at a system level by mechanisms such as control groups or generic resource scheduling. Checking for the presence of these mechanisms can explain why GPUs are visible to the operating system yet inaccessible to other programs.
CUDA_VISIBLE_DEVICES is an environment variable that, when set, allows access only to the valid CUDA devices it lists. The variable acts as a mask over the underlying CUDA devices.
CUDA_VISIBLE_DEVICES=0                 %Only device 0 is accessible and usable.
CUDA_VISIBLE_DEVICES=0,1               %Devices 0 and 1 are accessible and usable.
CUDA_VISIBLE_DEVICES=NoDevApps         %(or any other invalid value) no CUDA devices are accessible.
If CUDA_VISIBLE_DEVICES is unset, all CUDA devices are accessible without restriction.
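For example, to restrict MATLAB to device 0, the variable must be exported in the shell before MATLAB starts; this is a sketch for Linux (the matlab launch line is commented out so the snippet stands alone):

```shell
# Mask all CUDA devices except device 0 for any program started from this shell.
export CUDA_VISIBLE_DEVICES=0
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
# matlab -nodisplay -r "disp(gpuDeviceCount); exit"  # would report at most one device
```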
In a cgroup (Linux) setup, resource accessibility is controlled at the kernel level. This is a sysadmin-level setting and is more likely to exist on managed HPC clusters.
If the steps in this support solution have not helped you resolve the problem, please contact MathWorks support directly:


Answer by Pablo Dias on 23 Nov 2017

I'm having the same issue, but only when I try to train an autoencoder network with the 'UseGPU',true option set explicitly.

