A convolutional neural network (CNN or ConvNet) is a network architecture for deep learning that learns directly from data.
CNNs are particularly useful for finding patterns in images to recognize objects, classes, and categories. They can also be quite effective for classifying audio, time-series, and signal data.
A convolutional neural network can have tens or hundreds of layers that each learn to detect different features of an image. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as the input to the next layer. The filters can start as very simple features, such as brightness and edges, and increase in complexity to features that uniquely define the object.
Feature Learning, Layers, and Classification
A CNN is composed of an input layer, an output layer, and many hidden layers in between.
These layers perform operations that alter the data with the intent of learning features specific to the data. Three of the most common layers are convolution, activation or ReLU, and pooling.
- Convolution puts the input images through a set of convolutional filters, each of which activates certain features from the images.
- Rectified linear unit (ReLU) allows for faster and more effective training by mapping negative values to zero and maintaining positive values. This is sometimes referred to as activation, because only the activated features are carried forward into the next layer.
- Pooling simplifies the output by performing nonlinear downsampling, reducing the number of parameters that the network needs to learn.
These operations are repeated over tens or hundreds of layers, with each layer learning to identify different features.
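As a rough sketch of these three operations (in Python with NumPy, purely for illustration; deep learning frameworks provide optimized, multi-channel versions of these layers):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one filter.
    (Implemented as cross-correlation, as most deep learning frameworks do.)"""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Map negative values to zero and keep positive values."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Nonlinear downsampling: keep the maximum of each size-by-size block."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A simple edge filter applied to a tiny two-tone image:
image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # dark left half, bright right half
edge_filter = np.array([[-1.0, 1.0]])    # responds to a dark-to-bright transition
features = relu(conv2d(image, edge_filter))  # fires only along the edge
pooled = max_pool(features)              # 6x5 feature map downsampled to 3x2
```

Real networks learn the filter values during training; here the edge filter is hand-picked so the effect of each stage is visible.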
Shared Weights and Biases
Unlike a traditional neural network, a CNN has shared weights and bias values, which are the same for all hidden neurons in a given layer.
This means that all hidden neurons are detecting the same feature, such as an edge or a blob, in different regions of the image. This makes the network tolerant to translation of objects in an image. For example, a network trained to recognize cars will be able to do so wherever the car is in the image.
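A minimal NumPy sketch of why shared weights give this translation tolerance (illustrative, not framework code): one filter, slid across the whole image, fires wherever its pattern appears:

```python
import numpy as np

def conv2d(image, kernel):
    # One shared set of weights is applied at every position of the image.
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_filter = np.array([[-1.0, 1.0]])          # detects a dark-to-bright edge

# The same edge pattern placed at two different horizontal positions:
left = np.zeros((4, 8));  left[:, 2:] = 1.0    # edge between columns 1 and 2
right = np.zeros((4, 8)); right[:, 5:] = 1.0   # edge between columns 4 and 5

# The shared filter finds the edge wherever it sits -- no retraining needed.
left_hit = conv2d(left, edge_filter).argmax(axis=1)    # peaks at column 1
right_hit = conv2d(right, edge_filter).argmax(axis=1)  # peaks at column 4
```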
After learning features in many layers, the architecture of a CNN shifts to classification.
The next-to-last layer is a fully connected layer that outputs a vector of K dimensions, where K is the number of classes that the network is able to predict. This vector contains the probabilities for each class of any image being classified.
The final layer of the CNN architecture uses a classification layer to provide the final classification output.
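A NumPy sketch of this classification stage (illustrative shapes and random weights; in a real network, W and b are learned during training):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(scores):
    # Subtract the max before exponentiating, for numerical stability.
    e = np.exp(scores - scores.max())
    return e / e.sum()

K = 3                                 # number of classes the network can predict
features = rng.standard_normal(8)     # stand-in for features from earlier layers
W = rng.standard_normal((K, 8))       # fully connected layer: one row per class
b = np.zeros(K)

scores = W @ features + b             # K-dimensional vector of class scores
probs = softmax(scores)               # probabilities for each class, summing to 1
predicted = int(np.argmax(probs))     # the final classification output
```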
Medical Imaging: CNNs can examine thousands of pathology images to detect the presence or absence of cancer cells.
Audio Processing: Keyword detection can be used in any device with a microphone to detect when a certain word or phrase is spoken (“Hey Siri!”). CNNs can accurately learn and detect the keyword while ignoring all other phrases regardless of the environment.
Object Detection: Automated driving relies on CNNs to accurately detect the presence of a sign or other object and make decisions based on the output.
When Should You Use CNNs?
Consider using CNNs when you have a large amount of complex data (such as image data). You can also use CNNs with signal or time-series data when preprocessed to work with the network structure.
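For signal data, the same idea applies in one dimension. A toy NumPy example (hypothetical filter and signal, for illustration): a short 1-D filter slid along a time series responds where a step change occurs:

```python
import numpy as np

rng = np.random.default_rng(1)

# A noisy 1-D signal with a step change halfway through:
signal = np.concatenate([np.zeros(50), np.ones(50)])
signal += 0.05 * rng.standard_normal(100)

# A short 1-D "step detector" filter, slid along the signal the way a
# convolutional filter slides across an image:
kernel = np.array([-1.0, -1.0, 1.0, 1.0])
response = np.correlate(signal, kernel, mode="valid")

# The response peaks where the filter's window straddles the step.
window_start = int(np.argmax(response))
```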
Consider Using Pretrained Models
When working with CNNs, engineers and scientists often prefer to start with a pretrained model that can learn and identify features from a new data set.
Models like GoogLeNet, AlexNet, and Inception provide a starting point to explore deep learning, taking advantage of proven architectures built by experts.
New Deep Learning Models and Examples
See a list of all available models and explore new models by category.
Designing and Training Networks
Using Deep Network Designer, you can import pretrained models or build new models from scratch.
You can also train networks directly in the app and monitor training with plots of accuracy, loss, and validation metrics.
Using Pretrained Models for Transfer Learning
Fine-tuning a pretrained network with transfer learning is typically much faster and easier than training from scratch. It requires the least amount of data and computational resources. Transfer learning uses knowledge from one type of problem to solve similar problems. You start with a pretrained network and use it to learn a new task. One advantage of transfer learning is that the pretrained network has already learned a rich set of features. For example, you can take a network trained on millions of images and retrain it for new object classification using only hundreds of images.
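A toy NumPy sketch of this idea (the "pretrained" extractor here is just a frozen random projection with a ReLU, standing in for real learned convolutional features; the data and labels are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: these weights stay frozen.
W_frozen = rng.standard_normal((16, 64)) / 8.0

def extract_features(x):
    return np.maximum(W_frozen @ x, 0.0)   # frozen "conv + ReLU" stack, abstracted

# A small new data set -- hundreds of examples, not millions.
X = rng.standard_normal((200, 64))
y = (X[:, 0] > 0).astype(float)            # toy binary labels for the new task

# Only the new classification head (w, b) is trained.
feats = np.array([extract_features(x) for x in X])
w, b = np.zeros(16), 0.0
for _ in range(500):                       # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = float(((feats @ w + b > 0) == (y > 0.5)).mean())
```

Because only the small head is updated, training needs far less data and computation than learning the feature extractor from scratch.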
Hardware Acceleration with GPUs
A convolutional neural network is trained on hundreds, thousands, or even millions of images. When working with large amounts of data and complex network architectures, GPUs can significantly speed the processing time to train a model.