What do the Kalman Filter Equations Mean? (Part 2: Update)

Jessica Yung · Self-Driving Car ND

In my previous post, I explained the Kalman Filter prediction equations in a big-picture way. In this post I will explain the update equations.

Kalman Filter equations. Credits: Udacity.

The equations to focus on are the last two. (We use the results from the first three equations in the last two equations.)

Eqn 1: Updating the object state x:

x = x' + Ky

  • x’ is the predicted object state (obtained from prediction equations).
  • x is our updated belief as to what the object state is.
  • y is the difference between the actual measurement z and the predicted measurement Hx’.
    • The predicted measurement is Hx’. The measurement matrix H translates our prediction x’ into the same space as the measurement z.
    • Think of H as changing units of x’ so it matches z’s, e.g. from centimetres to metres.
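As a toy sketch (my own numbers, not from the post), here is the innovation y and the state update in NumPy, assuming a 2-D state of [position, velocity] and a sensor that measures position only. The gain K is just a placeholder value here; it would normally come from the gain equation discussed below.

```python
import numpy as np

# Hypothetical 2-D state: [position, velocity]; the sensor measures position only.
x_pred = np.array([[10.0],   # predicted position (x')
                   [2.0]])   # predicted velocity
H = np.array([[1.0, 0.0]])   # measurement matrix: maps state to measurement space
z = np.array([[10.5]])       # actual sensor measurement

y = z - H @ x_pred           # innovation: measurement minus predicted measurement
K = np.array([[0.5],         # placeholder gain (normally K = P'H^T S^{-1})
              [0.1]])
x = x_pred + K @ y           # updated state: prediction nudged toward z

print(x)                     # [[10.25], [2.05]]
```

Note that even though the sensor only measures position, the velocity estimate also moves, because the gain's second row couples the position innovation to the velocity component.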
Kalman Gain K

K is the Kalman Gain. It determines how much our measurement affects our beliefs: the smaller the magnitude of the Kalman gain, the less our belief is affected by a new measurement that differs from our prediction.

  • If we could change this, it’d be like the learning rate in deep learning, or like how un-Bayesian we want to be.
  • We don’t directly change this parameter though. It comes from the equation K = P'H^TS^{-1} = P'H^T(HP'H^T+R)^{-1}.
  • To understand what that means, let’s look at the one-dimensional case where we combine two normal distributions (our predicted object state and the measurement) to get a new normal distribution (our new object state).
      Multiply N_0 (pink) and N_1 (green) to get the blue distribution. Credits: BZARG

    • Algebraic manipulation yields k = \frac{\sigma_0^2}{\sigma_0^2+\sigma_1^2}.
  • So the Kalman Gain depends on the covariances of our object and the measurement. It is increasing in our object covariance and decreasing in the measurement covariance.
    • That is, the more uncertain we are about our prediction, the more we allow our measurement to change the prediction. And the more uncertain we are about our measurement, the less we allow the measurement to change our prediction. Makes sense.
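A quick numeric check of the 1-D formula above (toy variances of my own choosing) shows both behaviours at once:

```python
def kalman_gain_1d(var_pred, var_meas):
    """1-D Kalman gain: k = sigma_0^2 / (sigma_0^2 + sigma_1^2)."""
    return var_pred / (var_pred + var_meas)

# Uncertain prediction, precise measurement -> gain near 1 (trust the measurement).
print(kalman_gain_1d(4.0, 0.25))   # ~0.941

# Precise prediction, noisy measurement -> gain near 0 (trust the prediction).
print(kalman_gain_1d(0.25, 4.0))   # ~0.059
```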

Updating x in one sentence: We bring our predicted object state closer to our observed measurement,

  • where the amount of influence the measurement has on our beliefs depends on how uncertain we are about our prediction and our measurement.

Eqn 2: Updating the object covariance P:

P = (I-KH)P'

  • P’ is the predicted object covariance.
  • P is our updated belief as to what the object covariance matrix is.
  • K is the Kalman gain (see above).
  • H is the measurement matrix.

Key insight: Notice that the update step can only decrease (or leave unchanged) the object covariance — incorporating a measurement never makes us less certain.

Updating P in one sentence: We decrease the object covariance by an amount that depends on how certain we are about our measurement.
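Continuing the hypothetical position-only-sensor example from earlier (my own toy covariances), here is the full gain and covariance update. The measured component's variance shrinks, while the unmeasured velocity variance is untouched here because its cross-covariance with position is zero:

```python
import numpy as np

# Hypothetical predicted covariance P' for a [position, velocity] state.
P_pred = np.array([[4.0, 0.0],
                   [0.0, 1.0]])
H = np.array([[1.0, 0.0]])            # measurement matrix: position only
R = np.array([[1.0]])                 # measurement noise covariance

S = H @ P_pred @ H.T + R              # innovation covariance S = HP'H^T + R
K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain K = P'H^T S^{-1}
P = (np.eye(2) - K @ H) @ P_pred      # updated covariance P = (I - KH)P'

print(P_pred[0, 0], "->", P[0, 0])    # position variance: 4.0 -> 0.8
```

With a noisier sensor (larger R), S grows, K shrinks, and the position variance would drop by less — exactly the "depends on how certain we are about our measurement" behaviour described above.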

Feature image credits: Udacity. A picture with a KAR and a MAN with a FILTER from Photogramio! Get it? 🙂 

Further reading:
