gpucoder.atomicDec

Atomically decrement variable in global or shared memory within upper bound

Since R2021b

    Description

    The gpucoder.atomicDec function reads a value from a global or shared GPU memory location, decrements the value, and writes the result back to the memory location. In generated GPU code, the operation is atomic, which means that a GPU thread performs the read-modify-write operation without interference from other threads.

    A = gpucoder.atomicDec(A,B) decrements the value of A by one. If A is zero or greater than the upper bound B, the function instead sets A to B (a wrap-around decrement).

    Call the gpucoder.atomicDec function directly inside a for-loop that you want to execute on the GPU in the generated CUDA® code. Each iteration of the loop must be able to read and write to A. Use the coder.gpu.kernel pragma before the loop that contains gpucoder.atomicDec.

    [A,oldA] = gpucoder.atomicDec(A,B) returns the previous value of A as oldA.
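    The wrap-around rule can be sketched on the host in plain C. This is a minimal illustration of the read-modify-write semantics only, not the GPU implementation; atomic_dec_wrap is a hypothetical helper name, and on the device the whole operation is a single atomic instruction.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Sketch of the wrap-around decrement rule: if the current value is 0
       or greater than the bound, reset to the bound; otherwise decrement
       by one. Returns the previous value, mirroring the [A,oldA] form of
       gpucoder.atomicDec. (Hypothetical helper; not atomic here.) */
    static uint32_t atomic_dec_wrap(uint32_t *a, uint32_t bound)
    {
        uint32_t old = *a;
        *a = (old == 0 || old > bound) ? bound : old - 1;
        return old;
    }

    int main(void)
    {
        uint32_t a = 2;
        /* Repeated decrements with bound 3 cycle: 2 -> 1 -> 0 -> 3 -> 2 */
        for (int k = 0; k < 5; ++k) {
            uint32_t old = atomic_dec_wrap(&a, 3);
            printf("old=%u new=%u\n", old, a);
        }
        return 0;
    }
    ```

    Note that once the value reaches zero, the next decrement wraps it back up to the bound rather than underflowing.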

    Examples

    Perform a simple atomic wrap-around decrement operation by using the gpucoder.atomicDec function and generate CUDA code that calls the corresponding atomicDec() CUDA API.

    In one file, write an entry-point function myAtomicDec that accepts matrix inputs a and b.

    function a = myAtomicDec(a,b)
    
    coder.gpu.kernel;
    for i = 1:numel(a)
        a(i) = gpucoder.atomicDec(a(i), b);
    end
    
    end
    

    To create a type for a uint32 matrix for use in code generation, use the coder.newtype function.

    A = coder.newtype('uint32', [1 30], [0 1]);
    B = coder.newtype('uint32', [1 1], [0 0]);
    inputArgs = {A,B};
    

    To generate a CUDA library, use the codegen function.

    cfg = coder.gpuConfig('lib');
    cfg.GenerateReport = true;
    
    codegen -config cfg -args inputArgs myAtomicDec -d myAtomicDec
    

    The generated CUDA code contains the myAtomicDec_kernel1 kernel with calls to the atomicDec() CUDA APIs.

    //
    // File: myAtomicDec.cu
    //
    ...
    
    static __global__ __launch_bounds__(1024, 1) void myAtomicDec_kernel1(
        const uint32_T b, const int32_T i, uint32_T a_data[])
    {
      uint64_T loopEnd;
      uint64_T threadId;
    
    ...
    
      for (uint64_T idx{threadId}; idx <= loopEnd; idx += threadStride) {
        int32_T b_i;
        b_i = static_cast<int32_T>(idx);
        atomicDec(&a_data[b_i], b);
      }
    }
    ...
    
    void myAtomicDec(uint32_T a_data[], int32_T a_size[2], uint32_T b)
    {
      dim3 block;
      dim3 grid;
    ...
    
        cudaMemcpy(gpu_a_data, a_data, a_size[1] * sizeof(uint32_T),
                   cudaMemcpyHostToDevice);
        myAtomicDec_kernel1<<<grid, block>>>(b, i, gpu_a_data);
        cudaMemcpy(a_data, gpu_a_data, a_size[1] * sizeof(uint32_T),
                   cudaMemcpyDeviceToHost);
    ...
    
    }
    

    Input Arguments

    A — Reference to a shared or global GPU memory location, specified as a scalar. Use the reference as an input and output argument of gpucoder.atomicDec.

    Data Types: uint32

    B — Upper bound of the decrement operation, specified as a scalar.

    Data Types: uint32

    Extended Capabilities

    C/C++ Code Generation
    Generate C and C++ code using MATLAB® Coder™.

    GPU Code Generation
    Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

    Version History

    Introduced in R2021b