globalMaxPooling3dLayer


3-D global max pooling layer


A 3-D global max pooling layer performs downsampling by computing the maximum of the height, width, and depth dimensions of the input.
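The underlying computation can be sketched directly with the max function (a conceptual illustration only, not the layer itself; the array sizes are arbitrary):

X = rand(28,28,28,3);     % example input: height-by-width-by-depth-by-channels
Y = max(X,[],[1 2 3]);    % maximum over the height, width, and depth dimensions
size(Y)                   % each channel reduces to a single value: 1 1 1 3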



layer = globalMaxPooling3dLayer creates a 3-D global max pooling layer.


layer = globalMaxPooling3dLayer('Name',name) sets the optional Name property.



Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty, unique layer name. If you train a series network with the layer and Name is set to '', then the software automatically assigns a name to the layer at training time.

Data Types: char | string

Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

Input names of the layer. This layer accepts a single input only.

Data Types: cell

Number of outputs of the layer. This layer has a single output only.

Data Types: double

Output names of the layer. This layer has a single output only.

Data Types: cell

Examples

Create a 3-D global max pooling layer with name 'gmp1'.

layer = globalMaxPooling3dLayer('Name','gmp1')
layer = 
  GlobalMaxPooling3DLayer with properties:

    Name: 'gmp1'

Include a 3-D global max pooling layer in a Layer array.

layers = [ ...
    image3dInputLayer([28 28 28 3])
    convolution3dLayer(5,20)
    reluLayer
    globalMaxPooling3dLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
layers = 
  7x1 Layer array with layers:

     1   ''   3-D Image Input          28x28x28x3 images with 'zerocenter' normalization
     2   ''   Convolution              20 5x5x5 convolutions with stride [1  1  1] and padding [0  0  0; 0  0  0]
     3   ''   ReLU                     ReLU
     4   ''   3-D Global Max Pooling   3-D global max pooling
     5   ''   Fully Connected          10 fully connected layer
     6   ''   Softmax                  softmax
     7   ''   Classification Output    crossentropyex


  • In an image classification network, you can use a globalMaxPooling3dLayer before the final fully connected layer to reduce the size of the activations without sacrificing performance. The reduced size of the activations means that the downstream fully connected layers will have fewer weights, reducing the size of your network.

  • You can use a globalMaxPooling3dLayer towards the end of a classification network instead of a fullyConnectedLayer. Because global pooling layers have no learnable parameters, they can be less prone to overfitting and can reduce the size of the network. These networks can also be more robust to spatial translations of the input data. You can also replace a fully connected layer with a globalAveragePooling3dLayer instead. Whether a globalAveragePooling3dLayer or a globalMaxPooling3dLayer is more appropriate depends on your data set.

    To use a global max pooling layer instead of a fully connected layer, the size of the input to globalMaxPooling3dLayer must match the number of classes in the classification problem.
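As a sketch of this tip (layer sizes assumed for illustration), a 10-class network can end in global max pooling instead of a fully connected layer by making the final convolution produce 10 channels:

layers = [ ...
    image3dInputLayer([28 28 28 3])
    convolution3dLayer(5,10)       % 10 output channels = number of classes
    reluLayer
    globalMaxPooling3dLayer        % output size matches the number of classes
    softmaxLayer
    classificationLayer];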

Introduced in R2020a