
partition

Partition parallel pool

Since R2025a

    Description

    partition divides an existing parallel pool into pool partitions, allowing you to use specific resources from the existing pool. Use the partition function to create pools that target specific resources or to create multiple pools to execute more than one parallel workflow at a time.

    newPool = partition(pool,"MaxNumWorkersPerHost",MaxNumWorkersPerHost) creates a pool object, newPool, with up to MaxNumWorkersPerHost workers from each host contributing workers to the input pool, pool.

    A host is a machine that runs MATLAB® workers. On a local pool, the host is your computer. On a cluster pool, the hosts are the machines within the cluster. A cluster pool can have workers running on multiple hosts.

    If a host runs fewer pool workers than the number of workers you request, the partition function allocates all available workers from that host to newPool.
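    For example, this minimal sketch (assuming a cluster profile named "myCluster" whose pool spans several hosts) keeps at most two workers from each host:

    % Assumes the "myCluster" profile starts workers on multiple hosts.
    pool = parpool("myCluster",16);

    % Keep at most two workers from each host that contributes to the pool.
    twoPerHostPool = partition(pool,"MaxNumWorkersPerHost",2);
    disp(twoPerHostPool.NumWorkers)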


    newPool = partition(pool,"MaxNumWorkersPerGPU",MaxNumWorkersPerGPU) creates a pool object, newpool, with up to MaxNumWorkersPerGPU workers for each unique GPU in the input pool, pool.

    A GPU must be attached to the same host as a MATLAB worker for the partition function to assign it to that worker. If the partition function cannot assign the requested number of workers to a GPU, it allocates to newPool all of the workers that it can assign to that GPU. On hosts with multiple GPUs, the number of MATLAB workers and GPUs on the host determines the maximum number of workers that the partition function can assign to each GPU. On hosts that do not have a GPU, the partition function returns an empty pool unless other hosts in the pool have GPUs.

    This partitioning method uses the result of the gpuDevice function on each parallel pool worker to determine which worker is associated with which GPU. The partition function considers only workers with an allocated GPU and does not modify the property values of any GPU. The partition function executes code on all workers in pool to gather gpuDevice information. If any worker is busy executing a parfeval computation, the partition function waits until the parfeval computation completes.
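    For example, this sketch (assuming pool is an existing parallel pool whose workers have access to GPUs) inspects which GPU each worker uses and then keeps at most two workers per GPU:

    % Display the GPU assigned to each worker. gpuDevice returns the
    % GPU device object that the calling worker currently uses.
    spmd
        gpu = gpuDevice;
        fprintf("Worker %d uses GPU %d\n",spmdIndex,gpu.Index)
    end

    % Keep at most two workers for each unique GPU in the pool.
    gpuPool = partition(pool,"MaxNumWorkersPerGPU",2);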


    newPool = partition(pool,"Workers",Workers) partitions the input pool object, pool, to create a new pool object, newPool, containing only workers specified by the parallel.Worker object or array of parallel.Worker objects, Workers. Every worker in Workers must be a member of the Workers property of the input pool.


    [newPool,remainingPool] = partition(pool,___) also returns remainingPool, a pool object that contains the workers that are in pool but not in newPool.


    Examples


    Partition Parallel Pool by GPU

    Partition a parallel pool to allocate one worker per GPU, which is useful for GPU-intensive computations.

    In this example, the myCluster profile defines a cluster whose workers have access to a total of four GPUs. Start a parallel pool of 12 workers using the myCluster profile.

    pool = parpool("myCluster",12);
    Starting parallel pool (parpool) using the 'myCluster' profile ...
    Connected to parallel pool with 12 workers.

    Partition the pool to assign one worker per GPU.

    [gpuWorkers,cpuWorkers] = partition(pool,"MaxNumWorkersPerGPU",1);
    

    Display the number of workers in the new pools.

    fprintf("Number of GPU workers: %d\n" + ...
        "Number of CPU workers: %d\n",gpuWorkers.NumWorkers,cpuWorkers.NumWorkers);
    
    Number of GPU workers: 4
    Number of CPU workers: 8

    You can use the CPU pool to execute work at the same time as the GPU pool.
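    For example, you can submit independent parfeval computations to each partition. In this sketch, gpuIntensiveFcn and cpuIntensiveFcn are placeholder names for functions you define:

    % Run GPU-bound and CPU-bound work concurrently on the two partitions.
    % gpuIntensiveFcn and cpuIntensiveFcn are placeholder function names.
    gpuFuture = parfeval(gpuWorkers,@gpuIntensiveFcn,1);
    cpuFuture = parfeval(cpuWorkers,@cpuIntensiveFcn,1);

    % Retrieve the results when both computations finish.
    gpuResult = fetchOutputs(gpuFuture);
    cpuResult = fetchOutputs(cpuFuture);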

    Partition Pool with One Worker per Host

    Allocating one worker per host can be beneficial for tasks that require a significant amount of memory or CPU resources per worker.

    Start a parallel pool with 15 workers using the remote cluster profile myMJSCluster. In this example, the myMJSCluster profile creates a pool whose workers run on three hosts.

    myClusterPool = parpool("myMJSCluster",15);
    Starting parallel pool (parpool) using the 'myMJSCluster' profile ...
    Connected to parallel pool with 15 workers.
    

    Partition the myClusterPool pool to have one worker per host.

    [perHostPool,remainingPool] = partition(myClusterPool,"MaxNumWorkersPerHost",1);
    

    Display the number of workers in the new pools.

    fprintf("Number of workers in perHostPool: %d\n" + ...
        "Number of workers in remainingPool: %d\n",perHostPool.NumWorkers,remainingPool.NumWorkers)
    
    Number of workers in perHostPool: 3
    Number of workers in remainingPool: 12

    Isolate a Single Worker

    Isolating a single worker can be useful when you need to run tasks on a dedicated worker.

    Start a parallel pool, and use the Workers property of the pool object to obtain the parallel.Worker objects for the workers in the pool.

    pool = parpool("Processes");
    allWorkers = pool.Workers;

    Partition the pool to isolate the first worker in the allWorkers array.

    [singleWkrPool,remainingPool] = partition(pool,"Workers",allWorkers(1));
    

    Inspect the partitioned single worker pool.

    singleWkrPool
    singleWkrPool = 
    
     ProcessPool with properties: 
    
                Connected: true
               NumWorkers: 1
                     Busy: false
                  Cluster: Processes (Local Cluster)
            AttachedFiles: {}
        AutoAddClientPath: true
                FileStore: [1x1 parallel.FileStore]
               ValueStore: [1x1 parallel.ValueStore]
              IdleTimeout: 30 minutes (26 minutes remaining)
              SpmdEnabled: true

    Input Arguments


    pool

    Input pool object, specified as a parallel.Pool object.

    Example: pool = parpool;

    MaxNumWorkersPerGPU

    Maximum number of workers to assign per GPU, specified as a positive integer.

    The partition function attempts to assign up to MaxNumWorkersPerGPU workers to each unique GPU of the input pool. If a GPU has fewer than MaxNumWorkersPerGPU assigned workers, the partition function selects all of the workers assigned to that GPU.

    MaxNumWorkersPerHost

    Maximum number of workers to assign per host, specified as a positive integer.

    The partition function attempts to assign up to MaxNumWorkersPerHost workers from each unique host of the input pool. If a host has fewer than MaxNumWorkersPerHost available workers, the partition function selects all available workers from that host.

    Workers

    Workers to partition, specified as a parallel.Worker object or an array of parallel.Worker objects.

    To obtain parallel.Worker objects, you can use the Workers property of the input pool. You can also use the getCurrentWorker function on the pool workers you want to target to return their parallel.Worker objects.

    Example: allWorkers = pool.Workers;
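    For example, this sketch gathers the parallel.Worker object from every worker in the pool and then partitions off the workers that run on one host. The host name "node1" is a placeholder:

    % Gather the parallel.Worker object from each worker in the pool.
    f = parfevalOnAll(pool,@getCurrentWorker,1);
    workerObjs = fetchOutputs(f);

    % Keep only the workers running on the host named "node1" (placeholder).
    onNode1 = workerObjs(string({workerObjs.Host}) == "node1");
    node1Pool = partition(pool,"Workers",onNode1);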

    Output Arguments


    newPool

    Parallel pool with partitioned workers, returned as a parallel.Pool object.

    remainingPool

    Parallel pool with remaining workers, returned as a parallel.Pool object.

    Tips

    • The pool partition contains some of the workers from the input pool; therefore, both the input pool and its partitions can schedule work on the same workers. Consequently, using one pool can delay the execution of work on the other pools. Deleting any pool instance deletes the underlying collection of workers and any parallel job running on the pool, and all pools that use the same collection of resources become invalid. If you no longer need a pool partition, allow the instance to go out of scope rather than explicitly deleting it.

    • The NumWorkers property value of the pool partition reflects the number of workers the pool can use.

    • The new pool shares some properties with the input pool, and changes to these properties affect all pools that share them, as the sketch after this list shows. The shared properties are:

      • AttachedFiles

      • FileStore

      • ValueStore

      • IdleTimeout
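    For example, this minimal sketch shows that setting IdleTimeout on a partition also changes it on the input pool:

    pool = parpool("Processes",4);
    newPool = partition(pool,"MaxNumWorkersPerHost",2);

    % IdleTimeout is shared between the partition and the input pool.
    newPool.IdleTimeout = 60;
    disp(pool.IdleTimeout)   % also displays 60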

    Version History

    Introduced in R2025a