MATLAB Answers

CUDA kernel MaxThreadsPerBlock not constant

Martin Strambach on 30 Jan 2020
Answered: Edric Ellis on 3 Feb 2020
I create a CUDA kernel using KERN = parallel.gpu.CUDAKernel(PTXFILE,CUFILE,FUNC). The block size is computed from KERN.MaxThreadsPerBlock, which varies depending on which function is used to build the kernel. I presumed MaxThreadsPerBlock depended only on gpuDevice properties, but so far it seems there may be some connection to the number of function parameters. Can someone explain how this value is actually determined, or am I missing something?
I'm using MATLAB R2019b, GCC 8.3, and CUDA Toolkit 10.1 with an NVIDIA V100 (compute capability 7.0).

  2 Comments

Joss Knight on 2 Feb 2020
I can't work out how you'd see this for the same device. Can you post some reproduction code?
Martin Strambach on 2 Feb 2020
Hi Joss, thanks for your reply!
You can find an example in the attachment. It's not exactly a minimal working example, but it should do. The file computeITT contains two entry points: one for single precision and one for double precision. I've also attached the compiled PTX code (GCC 8.3, CUDA Toolkit 10.1.243), compiled as follows: nvcc -ptx --gpu-architecture=compute_70. The rest of the files are just includes. When you construct a kernel with the single-precision entry point, KERN.MaxThreadsPerBlock is 1024; when you do the same with the double-precision entry point, KERN.MaxThreadsPerBlock is 512.
As odd as it sounds, the MaxThreadsPerBlock value isn't always exactly half for double precision compared to single precision.
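One way to see where a per-kernel difference like this might come from is to ask ptxas for its resource report at compile time; register usage per thread is the usual culprit. This is a sketch based on the compile line quoted above (the source file name is assumed from the thread's attachment):

```shell
# Sketch: same compile command as above, plus verbose ptxas output,
# which prints register and shared-memory usage per kernel.
# A kernel's maximum threads per block is capped so that
# (registers per thread) x (threads) fits in the SM's register file,
# so a register-hungrier double-precision kernel can end up with a
# lower MaxThreadsPerBlock than its single-precision twin.
nvcc -ptx --gpu-architecture=compute_70 --ptxas-options=-v computeITT.cu
```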


Answers (1)

Edric Ellis
Edric Ellis on 3 Feb 2020
In your comment you mention that you see different values of MaxThreadsPerBlock for different kernels. This is expected. The CUDAKernel object builds on the underlying CUDA Driver API. Different kernel functions have different requirements in terms of shared memory, registers, and other resources, and this affects how many threads per block a given kernel can be launched with. This is described (briefly) in the CUDA Driver API reference documentation here: https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__EXEC.html#group__CUDA__EXEC_1g5e92a1b0d8d1b82cb00dcfb2de15961b (In case that link goes stale: it describes the function cuFuncGetAttribute, which lets you query the per-function attribute CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK.)
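To make the mechanism concrete, here is a minimal sketch of querying that attribute directly through the Driver API. The PTX file name and the entry-point name are assumptions based on the thread's attachment (check your PTX for the names nvcc actually emitted), and error checking is omitted for brevity:

```c
/* Sketch: query per-kernel launch limits via the CUDA Driver API.
   Build (assuming the CUDA toolkit is installed): gcc query.c -lcuda */
#include <cuda.h>
#include <stdio.h>

int main(void) {
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fun;
    int maxThreads, numRegs;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* Load the PTX and look up the kernel, much as CUDAKernel does
       internally. Both names here are hypothetical placeholders. */
    cuModuleLoad(&mod, "computeITT.ptx");
    cuModuleGetFunction(&fun, mod, "computeITT_double");

    /* MAX_THREADS_PER_BLOCK is a per-function limit derived from the
       kernel's register and shared-memory footprint; it is the value
       MATLAB surfaces as KERN.MaxThreadsPerBlock. */
    cuFuncGetAttribute(&maxThreads,
                       CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK, fun);
    cuFuncGetAttribute(&numRegs, CU_FUNC_ATTRIBUTE_NUM_REGS, fun);

    printf("MaxThreadsPerBlock: %d (%d registers/thread)\n",
           maxThreads, numRegs);

    cuCtxDestroy(ctx);
    return 0;
}
```

Comparing the two entry points with this program should show the same 1024 vs 512 split you see on the CUDAKernel objects.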

