
Run Experiments in Parallel

By default, Experiment Manager runs one trial of your experiment at a time on a single CPU. If you have Parallel Computing Toolbox™, you can configure your experiment to run multiple trials at the same time or to run a single trial at a time on multiple GPUs, on a cluster, or in the cloud.

Tip

For more information on running deep learning experiments in parallel, see Run Experiments in Parallel (Deep Learning Toolbox).

Run Multiple Simultaneous Trials

To run multiple trials of your experiment at the same time using one parallel worker for each trial:

  1. Set up your parallel environment as described in Set Up Parallel Environment.

  2. On the Experiment Manager toolstrip, set Mode to Simultaneous.

    Alternatively, to offload the experiment as a batch job, set Mode to Batch Simultaneous and specify your cluster and pool size. For more information, see Offload Experiments as Batch Jobs to a Cluster.

  3. Click Run.

Experiment Manager runs as many simultaneous trials as there are workers in your parallel pool. All other trials in your experiment are queued for later evaluation.
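For example, this code shows one way to open a pool before clicking Run. The pool size of 4 is only an assumption; choose a value that matches your hardware and license.

% Open a parallel pool with four workers so Experiment Manager can run
% up to four trials at the same time. Adjust the pool size to your machine.
parpool(4);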

Tip

Load data for your experiment from a location that is accessible to all your parallel workers. For example, store your data outside the project and access the data by using an absolute path. Alternatively, create a datastore object that can access the data on another machine by setting up the AlternateFileSystemRoots property of the datastore. For more information, see Set Up Datastore for Processing on Different Machines or Clusters.
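For example, this sketch creates an image datastore that resolves the same files on a local Windows machine and on a Linux cluster. The folder paths are placeholders; replace them with your own locations.

% Equivalent roots for the same data on the local machine and on the cluster.
localRoot = "C:\Data\myExperiment";
clusterRoot = "/shared/data/myExperiment";

% Create a datastore that workers on either file system can read from.
imds = imageDatastore(localRoot, ...
    "IncludeSubfolders",true, ...
    "AlternateFileSystemRoots",[localRoot clusterRoot]);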

Run Single Trial on Multiple Workers

To run a single trial of your experiment at a time on multiple parallel workers:

  1. In your experiment function, set up your parallel environment as described in Set Up Parallel Environment. Then, use spmd, parfor, or parfeval to define the parallel algorithm for your experiment, as in the sketch after these steps. For more information, see Choose Between spmd, parfor, and parfeval (Parallel Computing Toolbox).

  2. On the Experiment Manager toolstrip, set Mode to Sequential.

    Alternatively, to offload the experiment as a batch job, set Mode to Batch Sequential and specify your cluster and pool size. For more information, see Offload Experiments as Batch Jobs to a Cluster.

  3. Click Run.
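For example, this sketch outlines an experiment function that distributes its own work across the pool with parfor. The function name, the params.learnRate parameter, and the evaluateFold helper are hypothetical placeholders, not part of the Experiment Manager API.

function result = myParallelExperimentFcn(params)
% Hypothetical experiment function that evaluates five folds in parallel,
% one fold per available worker, and returns the mean loss.
numFolds = 5;
losses = zeros(1,numFolds);
parfor k = 1:numFolds
    % evaluateFold is a placeholder for your own evaluation code.
    losses(k) = evaluateFold(k,params.learnRate);
end
result = mean(losses);
end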

Set Up Parallel Environment

Run on Multiple GPUs

If you have multiple GPUs, parallel execution typically increases the speed of your experiment. Using a GPU requires Parallel Computing Toolbox and a supported GPU device. For more information, see GPU Computing Requirements (Parallel Computing Toolbox). To determine whether a usable GPU is available, call the canUseGPU function.
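For example, this sketch selects an execution environment based on whether a usable GPU is available. The variable name is arbitrary.

% Fall back to the CPU when no supported GPU is available.
if canUseGPU
    executionEnvironment = "gpu";
else
    executionEnvironment = "cpu";
end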

For best results, before you run your experiment, create a parallel pool with as many workers as GPUs. Otherwise, multiple workers share the same GPU, so you do not get the desired computational speed-up and you increase the chance that the GPUs run out of memory. You can check the number of available GPUs by using the gpuDeviceCount (Parallel Computing Toolbox) function.

% Open a parallel pool with one worker per available GPU.
numGPUs = gpuDeviceCount("available");
parpool(numGPUs);

Run on Cluster or in Cloud

If your experiments take a long time to run on your local machine, you can improve performance by using a computer cluster on your onsite network or by renting high-performance GPUs in the cloud. After you complete the initial setup, you can run your experiments with minimal changes to your code. Working on a cluster or in the cloud requires MATLAB® Parallel Server™. For more information, see Scale Up from Desktop to Cluster (Parallel Computing Toolbox).
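For example, after you configure a cluster profile, this sketch opens a pool on that cluster. The profile name "MyCluster" and the pool size of 16 are placeholders for your own profile and worker count.

% Connect to the cluster profile created during setup and open a pool.
c = parcluster("MyCluster");
parpool(c,16);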
