(Not recommended) Add parallelization support to datastore
matlab.io.datastore.PartitionableByIndex is not recommended. For
more information, see Compatibility Considerations.
matlab.io.datastore.PartitionableByIndex is an abstract mixin class
that adds parallelization support to your custom datastore for use with Deep Learning Toolbox™. This class requires Parallel Computing Toolbox™.
To use this mixin class, you must inherit from the
matlab.io.datastore.PartitionableByIndex class in addition to
inheriting from the matlab.io.Datastore base class. Type the
following syntax as the first line of your class definition:

```matlab
classdef MyDatastore < matlab.io.Datastore & ...
        matlab.io.datastore.PartitionableByIndex
    ...
end
```
To add support for parallel processing to your custom datastore, you must:
- Inherit from the additional class matlab.io.datastore.PartitionableByIndex.
- Define the additional method partitionByIndex.
For more details and steps to create your custom datastore with parallel processing support, see Develop Custom Mini-Batch Datastore.
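As a minimal sketch of what such a class can look like, the example below stores observations in memory and implements partitionByIndex by returning a new datastore over the selected rows. The class name MyDatastore and the Data and CurrentIndex properties are illustrative assumptions, not part of the mixin's API.

```matlab
% Sketch of a custom datastore with parallel support.
% MyDatastore, Data, and CurrentIndex are hypothetical names.
classdef MyDatastore < matlab.io.Datastore & ...
        matlab.io.datastore.PartitionableByIndex
    properties
        Data          % in-memory observations, one per row
        CurrentIndex  % row index of the next observation to read
    end
    methods
        function ds = MyDatastore(data)
            ds.Data = data;
            ds.CurrentIndex = 1;
        end
        function tf = hasdata(ds)
            tf = ds.CurrentIndex <= size(ds.Data,1);
        end
        function [data,info] = read(ds)
            data = ds.Data(ds.CurrentIndex,:);
            info = struct('Index',ds.CurrentIndex);
            ds.CurrentIndex = ds.CurrentIndex + 1;
        end
        function reset(ds)
            ds.CurrentIndex = 1;
        end
        function subds = partitionByIndex(ds,indices)
            % Return a new datastore containing only the
            % observations selected by indices.
            subds = MyDatastore(ds.Data(indices,:));
        end
    end
end
```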
partitionByIndex | (Not recommended) Partition a datastore according to indices
For information on class attributes, see Class Attributes.
This class is a handle class. To learn how handle classes affect copy operations, see Copying Objects.
matlab.io.datastore.PartitionableByIndex is not recommended
Not recommended starting in R2019a
Before R2018a, to perform custom image preprocessing for training deep learning
networks, you had to specify a custom read function using the
ReadFcn property of imageDatastore.
However, reading files using a custom read function was slow because
imageDatastore did not prefetch files.
In R2018a, four classes including
matlab.io.datastore.PartitionableByIndex were introduced as a
solution to perform custom image preprocessing with support for prefetching,
shuffling, and parallel training. Implementing a custom mini-batch datastore using
matlab.io.datastore.MiniBatchable has several challenges and limitations:
- In addition to specifying the preprocessing operations, you must also define properties and methods to support reading data in batches, reading data by index, and partitioning and shuffling data.
- You must specify a value for the NumObservations property, but this value can be ill-defined or difficult to define in real-world applications.
- Custom mini-batch datastores are not flexible enough to support common deep learning workflows, such as deployed workflows using GPU Coder™.
Starting in R2019a, datastores natively support prefetch, shuffling, and parallel
training when reading batches of data. The
transform function is the preferred way to perform custom data
preprocessing, or transformations. The
combine function is the preferred way to concatenate read data from
multiple datastores, including transformed datastores. Concatenated data can serve
as the network inputs and expected responses for training deep learning networks.
The transform and combine functions have
several advantages over custom mini-batch datastores:
- The functions enable data preprocessing and concatenation for all
types of datastores, including imageDatastore.
- The transform function only requires you to define
the data processing pipeline.
- When used on a deterministic datastore, the functions support
tall data types and MapReduce.
- The functions support deployed workflows.
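The recommended pattern described above can be sketched as follows. The folder paths and the 224-by-224 target size are illustrative assumptions; each read of the combined datastore returns a preprocessed input paired with its expected response.

```matlab
% Sketch of the recommended transform/combine pattern.
imdsIn  = imageDatastore('pathToInputs');     % network inputs (assumed path)
imdsOut = imageDatastore('pathToResponses');  % expected responses (assumed path)

% transform defines the custom preprocessing pipeline.
tdsIn = transform(imdsIn,@(x) imresize(x,[224 224]));

% combine pairs the transformed inputs with the responses;
% each read returns one input-response pair.
cds = combine(tdsIn,imdsOut);
```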
The recommended solution to transform data with basic image preprocessing
operations, including resizing, rotation, and reflection, is
augmentedImageDatastore. For more information, see Preprocess Images for Deep Learning.
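For instance, a short sketch of this approach, assuming an imageDatastore of training images and a 224-by-224 network input size:

```matlab
% Sketch: resize images and apply random horizontal reflection.
% The folder path and output size are illustrative assumptions.
imds = imageDatastore('pathToImages');
augmenter = imageDataAugmenter('RandXReflection',true);
auds = augmentedImageDatastore([224 224],imds, ...
    'DataAugmentation',augmenter);
```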
There are no plans to remove
matlab.io.datastore.PartitionableByIndex at this time.