Analog-to-digital converters (ADCs) implemented as integrated circuits (ICs) are prone to errors caused by imperfect IC manufacturing. Mismatched analog components such as transistors, resistors, and capacitors can distort the signal, degrading, for example, total harmonic distortion (THD). One way to reduce ADC errors is to use larger analog components. This approach improves matching, and therefore distortion, but requires more area and power. A second approach is to add calibration circuitry, but that also requires additional silicon area and increases cost and power consumption; moreover, one usually needs to know the exact cause of an error to calibrate it out.
At NXP Eindhoven, my colleague and I post-correct ADC errors using a neural network designed and trained with MATLAB® and Deep Learning Toolbox™. When implemented on an ASIC, the network requires just 15% of the area of the ADC while consuming roughly 16 times less power under normal operating conditions.
Designing and Training the Neural Network
We generated training data in the lab by supplying a reference signal to 30 ADC samples (dies) and capturing the digital output. A further 10 samples were set aside for validating the network. Because ADC errors are affected by both temperature and voltage, we tested each sample at nine different voltage-temperature combinations, for a total of 360 measurements. We preprocessed the data with signal processing techniques and used the measured digital output values of the ADC as inputs to the neural network. The network coefficients were updated by comparing the corrected output signal with the original reference signal (Figure 1).
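As an illustration, the structure of this measurement campaign can be sketched in Python. The function names, voltage and temperature corner values, and data layout below are hypothetical placeholders, not the actual lab setup:

```python
# Illustrative sketch of the measurement campaign; the corner values
# and the measure() interface are assumptions, not the real lab setup.

TRAIN_DIES = 30          # dies used for training
VAL_DIES = 10            # dies held out for validation
VT_CORNERS = 9           # voltage-temperature combinations per die

def build_dataset(measure, num_dies, corners):
    """Collect (input, target) pairs: the measured ADC output plus the
    voltage/temperature corner form the network input, and the known
    reference signal is the training target."""
    dataset = []
    for die in range(num_dies):
        for v, t in corners:
            adc_out, reference = measure(die, v, t)
            dataset.append(((adc_out, v, t), reference))
    return dataset

# Hypothetical corners: 3 supply voltages x 3 temperatures = 9 combinations;
# 40 dies x 9 corners gives the 360 measurements mentioned in the text.
corners = [(v, t) for v in (0.9, 1.0, 1.1) for t in (-40, 25, 125)]
```

With a real `measure` routine plugged in, the same loop would produce the 270 training and 90 validation records.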
Because I had little prior experience with neural networks when the project began, I was uncertain how complex the network needed to be. I started by creating basic two- and three-layer networks in MATLAB and varying the number of neurons in each layer. The neurons in the first and second layers use a sigmoid activation function, and the output layer activation function is linear. The cost function is least mean squares (LMS).
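The layer structure just described can be sketched in plain Python. This is a minimal illustration of the forward pass and cost function, not the MATLAB and Deep Learning Toolbox implementation; layer sizes and weight values are left open:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(weights, biases, inputs, activation):
    """One fully connected layer: y_j = act(b_j + sum_i w_ji * x_i)."""
    return [activation(b + sum(w * x for w, x in zip(row, inputs)))
            for row, b in zip(weights, biases)]

def forward(net, x):
    """Two sigmoid hidden layers followed by a linear output layer,
    matching the architecture described in the text."""
    h1 = dense(net["W1"], net["b1"], x, sigmoid)
    h2 = dense(net["W2"], net["b2"], h1, sigmoid)
    return dense(net["W3"], net["b3"], h2, lambda v: v)  # linear output

def lms_cost(predictions, targets):
    """Least-mean-squares cost between corrected output and reference."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
```

During training, `lms_cost` would compare the network's corrected output against the reference signal, and the weights in `net` would be adjusted to reduce it.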
After training these early network configurations on our data set, I saw that I could improve their performance by incorporating voltage and temperature measurements as predictors. When I implemented this change, network performance improved significantly across a wide range of temperature and voltage conditions.
Evaluating IC Area and Power
Once I had a neural network that was effective at post-correcting ADC errors, I wanted to evaluate how much silicon area and power it would require. To do this, I generated a Simulink® model of the trained neural network from MATLAB. I then quantized all network coefficients using Fixed-Point Designer™ before generating VHDL® code from the network with HDL Coder™. My colleague verified the generated VHDL in Simulink via HDL Verifier™ cosimulation and then used Cadence® Genus to synthesize the design. He also used the Cadence environment to perform the physical implementation using 28 nm CMOS technology, generate power reports, and calculate the number of gates used and the area needed for these gates.
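Conceptually, the quantization step maps each trained coefficient onto a finite word length before HDL generation. The sketch below assumes a 16-bit word with 12 fractional bits purely for illustration; it is not the format actually selected with Fixed-Point Designer:

```python
def quantize(value, frac_bits=12, word_bits=16):
    """Round a coefficient to signed fixed point (word_bits total,
    frac_bits fractional) and saturate at the representable range.
    The word lengths here are illustrative assumptions."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    q = max(lo, min(hi, round(value * scale)))
    return q / scale

# Example: quantize a few hypothetical network coefficients.
weights = [0.73125, -1.5, 0.0001]
q_weights = [quantize(w) for w in weights]
```

Comparing the network's output before and after such a step shows how much precision the chosen word length costs, which is the trade-off the fixed-point conversion resolves.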
The results of this analysis showed that a neural network can correct ADC errors at relatively low cost in terms of area and power. A network that improved signal-to-noise ratio by about 17 dB required just over 4600 gates and a silicon area of 0.0084 mm² to implement. The ADC, which measures 0.06 mm², is more than seven times larger than the network. When active, the network consumed about 15 µW of power, whereas the ADC consumed 233 µW.
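As a quick sanity check, the quoted figures can be combined directly (all values are taken from the text above):

```python
# Area and power figures as reported in the text.
ADC_AREA_MM2, NET_AREA_MM2 = 0.06, 0.0084
ADC_POWER_UW, NET_POWER_UW = 233.0, 15.0

area_ratio = ADC_AREA_MM2 / NET_AREA_MM2     # ~7.1: ADC is >7x larger
power_ratio = ADC_POWER_UW / NET_POWER_UW    # ~15.5: roughly 16x less power
area_fraction = NET_AREA_MM2 / ADC_AREA_MM2  # ~0.14: about 15% of the ADC area
```

These ratios are consistent with the "15% of the area" and "roughly 16 times less power" figures stated at the start of the article.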
Both the area and the power consumption estimates are considered acceptable for error-correcting circuits, but I’m confident that, with optimization, we could improve these numbers. Despite my relative inexperience with machine learning, the workflow for implementing the network in VHDL was straightforward. As a result, designing and implementing the neural-network-based circuit took no longer than a traditional approach would have.
Increasing Reusability and Portability
In the near term, we plan to explore several avenues for validating the use of neural networks for ADC error correction. First, we want to better understand how the trained network performs the error correction so that we can minimize the risk of unexpected behavior in production. Second, we want to expand our data set. We need to know whether the results we achieved will hold if we use a million samples instead of just 40. Finally, we want to gauge how reusable a neural network can be. We expect that a single network will be able to compensate for different errors across a variety of ADCs more effectively than a traditional design could because the network can accommodate a wide range of transfer functions. However, we will need to conduct further testing to validate this assumption.