
Understanding Kalman Filters, Part 4: An Optimal State Estimator Algorithm

From the series: Understanding Kalman Filters

Part 4: Optimal State Estimator Algorithm

In this video, we’ll discuss the set of equations that you need to implement the Kalman filter algorithm. Let’s revisit the example that we introduced in the previous video. You join a competition to win the big prize. You’re asked to design a self-driving car that needs to drive 1 km on 100 different terrains. In each trial, the car must stop as close as possible to the finish line. At the end of the competition, the average final position is computed for each team, and the owner of the car with the smallest error variance and an average final position closest to 1 km gets the big prize.

In that example, we also showed the car dynamics and the car model for our single-state system, and we discussed the process and measurement noises along with their covariances. Finally, we said that you could win the competition by using a Kalman filter, which computes an optimal unbiased estimate of the car’s position with minimum variance. This optimal estimate is found by multiplying the prediction and measurement probability density functions together, scaling the result, and computing the mean of the resulting probability density function.
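To make that concrete, here is a minimal sketch (not from the video) of fusing two Gaussian densities for a single state; the function name and the numeric values are made up for illustration. The product of two Gaussians is, after rescaling, another Gaussian whose mean is the variance-weighted average of the two means.

```python
def fuse_gaussians(mu_pred, var_pred, mu_meas, var_meas):
    """Multiply two Gaussian PDFs and rescale.

    The result is another Gaussian: its mean is the variance-weighted
    average of the two means, and its variance is smaller than either input.
    """
    var_post = (var_pred * var_meas) / (var_pred + var_meas)
    mu_post = (mu_pred * var_meas + mu_meas * var_pred) / (var_pred + var_meas)
    return mu_post, var_post

# Hypothetical numbers: predicted position 0.95 km, measured 1.02 km
mu, var = fuse_gaussians(0.95, 0.04**2, 1.02, 0.03**2)
print(mu, var)  # the fused estimate lies between the two, closer to the less uncertain one
```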

Computationally, the multiplication of these two probability density functions relates to the discrete Kalman filter equation shown below. Does this ring a bell? Doesn’t it look similar to the state observer equation that we’ve discussed in previous videos? Actually, a Kalman filter is a type of state observer, but it is designed for stochastic systems. Here’s how the Kalman filter equation relates to what we’ve discussed with the probability density functions. The first part predicts the current state by using the state estimate from the previous time step and the current input. Note that these two state estimates are different from each other. We’ll denote the predicted state estimate as x̂k⁻. This is also called the a priori estimate, since it is calculated before the current measurement is taken. We can now rewrite the equation in two parts, as shown below. The second part of the equation uses the measurement and incorporates it into the prediction to update the a priori estimate. And we’ll call the result the a posteriori estimate.
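For reference, here is a reconstruction, in standard Kalman filter notation, of the equation the narration describes (A, B, and C are the familiar state-space matrices, and Kk is the Kalman gain):

```latex
% Kalman filter written as a state observer:
\hat{x}_k = A\hat{x}_{k-1} + Bu_k + K_k\big(y_k - C(A\hat{x}_{k-1} + Bu_k)\big)

% The same equation split into its two parts:
\hat{x}_k^- = A\hat{x}_{k-1} + Bu_k                          % a priori (predicted) estimate
\hat{x}_k   = \hat{x}_k^- + K_k\big(y_k - C\hat{x}_k^-\big)  % a posteriori (updated) estimate
```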

You want to win the big prize, right? Then these are the equations you need to run on your car’s ECU. Looks a little scary? What if we turn everything upside down? Doesn’t change much, does it? Ok, we’ll go over the algorithm equations step by step. The Kalman filter is a two-step process. Let’s first start with the prediction part.

Here, the system model is used to calculate the a priori state estimate and the error covariance P. For our single-state system, P is simply the variance of the a priori estimate. The error covariance P can be thought of as a measure of uncertainty in the estimated state. This variance comes from the process noise and from propagating the uncertain estimate x̂(k-1) through the model. At the very start of the algorithm, the k-1 values of x̂ and P come from their initial estimates.
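Written out, these are the standard prediction (time update) equations, with Q denoting the process noise covariance from the previous video:

```latex
\hat{x}_k^- = A\hat{x}_{k-1} + Bu_k     % a priori state estimate
P_k^-       = A P_{k-1} A^{T} + Q       % a priori error covariance
```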

The second step of the algorithm uses the a priori estimates calculated in the prediction step and updates them to find the a posteriori estimates of the state and error covariance. The Kalman gain is calculated such that it minimizes the a posteriori error covariance. Think of the calculation of x̂k as a weighted combination of the a priori estimate and the measurement. By weighting the correction term, the Kalman gain determines how heavily the measurement and the a priori estimate each contribute to the calculation of x̂k. If the measurement noise is small, the measurement is trusted more and contributes to the calculation of x̂k more than the a priori state estimate does. In the opposite case, where the error in the a priori estimate is small, the a priori estimate is trusted more and the computation of x̂k mostly comes from this estimate. We can also show this mathematically by looking at two extreme cases.
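The corresponding update (measurement update) equations are the standard ones, with R the measurement noise covariance:

```latex
K_k       = P_k^- C^{T}\big(C P_k^- C^{T} + R\big)^{-1}      % Kalman gain
\hat{x}_k = \hat{x}_k^- + K_k\big(y_k - C\hat{x}_k^-\big)    % a posteriori state estimate
P_k       = (I - K_k C)\,P_k^-                               % a posteriori error covariance
```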

Assume that in the first case the measurement covariance R is zero. To see what happens to the Kalman gain, we take its limit as R goes to zero. Plugging in 0 for R, the P⁻Cᵀ terms in the numerator and denominator cancel each other out, and as R goes to zero, the Kalman gain approaches the inverse of C, which is equal to 1 in our system. Plugging K = C⁻¹ into the a posteriori state estimate shows that x̂k is equal to yk, so the calculation comes from the measurement only, as expected. If we update our plot, the measurement becomes an impulse function, since its variance is zero as R goes to zero. We found that the a posteriori estimate is equal to the measurement, so it is represented by the same impulse function. On the other hand, if the a priori error covariance is close to zero, then the Kalman gain goes to zero. The correction term then contributes nothing to x̂k, and the computation of x̂k comes from the a priori state estimate alone. On the plot, we’d show the a priori state estimate with an impulse function, which has zero variance. And since the a posteriori estimate is equal to the a priori estimate, it is represented by the same impulse function.
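In the scalar case the gain formula reduces to a fraction, so both limits can be read off directly:

```latex
\lim_{R \to 0} K_k = \frac{P_k^- C}{C P_k^- C + 0} = \frac{1}{C}
  \quad\Rightarrow\quad \hat{x}_k = \hat{x}_k^- + C^{-1}\big(y_k - C\hat{x}_k^-\big) = C^{-1}y_k

\lim_{P_k^- \to 0} K_k = \frac{0 \cdot C}{C \cdot 0 \cdot C + R} = 0
  \quad\Rightarrow\quad \hat{x}_k = \hat{x}_k^-
```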

Once we’ve run the update equations, the a posteriori estimates are used in the next time step to predict the new a priori estimates, and the algorithm repeats itself. Notice that to estimate the current state, the algorithm doesn’t need all the past information. It only needs the estimated state and error covariance matrix from the previous time step and the current measurement. This is what makes the Kalman filter recursive.
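Putting the two steps together, here is a minimal scalar sketch of the full recursion. The model and noise values (A, B, C, Q, R) and the simulated measurements below are hypothetical, chosen only so the loop runs; notice that each step consumes only the previous x̂ and P plus the current u and y.

```python
import numpy as np

# Hypothetical scalar model: position advanced by velocity * dt (A = 1, B = dt)
A, B, C = 1.0, 0.1, 1.0   # state transition, input, and measurement "matrices" (scalars)
Q, R = 1e-4, 4e-2         # assumed process and measurement noise variances

def kalman_step(x_hat, P, u, y):
    # Prediction: propagate the estimate and its uncertainty through the model
    x_pred = A * x_hat + B * u              # a priori state estimate
    P_pred = A * P * A + Q                  # a priori error covariance

    # Update: blend the prediction with the measurement
    K = P_pred * C / (C * P_pred * C + R)   # Kalman gain
    x_hat = x_pred + K * (y - C * x_pred)   # a posteriori state estimate
    P = (1 - K * C) * P_pred                # a posteriori error covariance
    return x_hat, P

x_hat, P = 0.0, 1.0                         # initial state estimate and error covariance
rng = np.random.default_rng(0)
for k in range(100):
    u = 1.0                                 # commanded velocity (made up)
    y = 0.1 * (k + 1) + rng.normal(0.0, R**0.5)  # simulated noisy position measurement
    x_hat, P = kalman_step(x_hat, P, u, y)
print(x_hat, P)  # estimate converges toward the true position; P shrinks and settles
```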

Now you know the set of equations needed to implement the Kalman filter algorithm. What are you going to do with the big prize when you win the competition? If you can’t decide, here’s a suggestion. Note that the Kalman filter is also referred to as a sensor fusion algorithm. So, you could buy an additional sensor, such as an IMU, and experiment to see whether using multiple sensors improves your self-driving car’s estimated position. If you have two measurements, the dimensions of the y, C, and K matrices change as shown below. But basically, you would still follow the same logic to compute the optimal state estimate. On the plot, we’d have one more probability density function for the measurement from the IMU, and this time we’d be multiplying three probability density functions together to find the optimal estimate of the car’s position.
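As a sketch of how the dimensions change for our single state with two sensors (treating the second measurement, e.g. a position derived from the IMU, as an assumption for illustration):

```latex
y_k \in \mathbb{R}^{2\times 1},\quad
C   \in \mathbb{R}^{2\times 1},\quad
R   \in \mathbb{R}^{2\times 2},\quad
K_k \in \mathbb{R}^{1\times 2}

% The gain formula is unchanged, but now inverts a 2x2 matrix:
K_k = P_k^- C^{T}\big(C P_k^- C^{T} + R\big)^{-1}
```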

So far, we had a linear system. But what if you have a nonlinear system and want to use a Kalman filter? In the next video, we’ll discuss nonlinear state estimators.
