In the second term of Udacity’s Self-Driving Car Engineer Nanodegree, you start out learning about Kalman Filters. You are given a bunch of equations. What do they mean?
In this post I explain the prediction equations (left) in a big-picture way. I explain the update equations in my next post.
Predicting the object state x:
- x is the object state.
- I.e. where we believe the object is and what we believe its velocity is. (Velocity is basically speed plus a direction.)
- x’ is the predicted object state.
- F is the state transition matrix.
- What is the state transition matrix? The question we want to answer is: given (1) the object’s previous state (position and velocity) and (2) an amount of time that’s passed between the previous measurement and the current measurement, what do we think the object’s current state (position and velocity) is?
- The state transition matrix’s answer: We assume the object is moving at constant velocity, so we predict that after time Δt, the velocity is the same and the position in each direction is (the previous position + Δt * velocity in that direction).
- u is external motion.
- We use u to incorporate information we may have about how external actions might be affecting our object. For example, if we know (e.g. via telepathy) that the driver of the car we’re tracking is braking, we can use that in our prediction.
- If we don’t know anything, u is just a zero vector.
Predicting x in one sentence: predict the new position assuming the velocity is constant (x’ = Fx), then add any motion from external agents (e.g. drivers) we know about (+ u).
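The state prediction above can be sketched in a few lines of numpy. This is a minimal illustration, not the Udacity project code; the 2D constant-velocity state layout `[px, py, vx, vy]` and the value of `dt` are example assumptions.

```python
import numpy as np

dt = 0.1  # example time step between measurements, in seconds

# State x: [px, py, vx, vy] -- 2D position and velocity.
x = np.array([0.0, 0.0, 2.0, 1.0])

# State transition matrix F for a constant-velocity model:
# new position = old position + dt * velocity; velocity unchanged.
F = np.array([
    [1, 0, dt, 0],
    [0, 1, 0, dt],
    [0, 0, 1,  0],
    [0, 0, 0,  1],
])

# External motion u: a zero vector when we know nothing about
# external actions (e.g. braking) affecting the object.
u = np.zeros(4)

# Prediction step: x' = F x + u
x_pred = F @ x + u
print(x_pred)  # [0.2 0.1 2.  1. ]
```

Each position component moved by `dt * velocity`, and the velocity components are unchanged, exactly as the constant-velocity assumption says.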
Predicting the object covariance P:
- P is the object covariance matrix.
- What is the object covariance? It is the uncertainty of the object’s state. Sensors often have measurement error, so we can’t be sure that the measurements we receive are accurate.
- Why do we care? It’s important to know approximately how precise our estimate is. If we are highly uncertain about the position of a car in front of us, we may want to stay further away from it to avoid crashing into it.
- P’ is the predicted object covariance.
- F is the state transition matrix (see above).
- Applying mappings to covariance matrices: See the F on each side (F and its transpose Fᵀ)? We do that because if you scale a variable (which is basically what a matrix does, since it’s a mapping), you multiply its variance by the scaling squared. E.g. if F were a scalar constant c, then Var(cx) = c² · Var(x); FPFᵀ is the matrix analog of that.
- This is in contrast to what we did when predicting x, where we applied F only once.
- Q is the process covariance matrix.
- What is process covariance? It’s the uncertainty about the true velocity of the object. We assume constant velocity, but the object might be accelerating. This is the variance analog to u in the previous equation.
- Q depends on Δt and how variable our random acceleration is. If our random acceleration is more variable, Q will have a larger magnitude.
- Why add Q to FPFᵀ (as opposed to multiplying)? The process uncertainty is separate from the object uncertainty, which is related more to measurement error.
Predicting P in one sentence: scale the previous uncertainty by the state transition matrix and its transpose (FPFᵀ), then add the uncertainty Q from possible acceleration by agents (e.g. drivers): P’ = FPFᵀ + Q.
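The covariance prediction can be sketched the same way. This is a hedged example under the same assumed 2D constant-velocity state `[px, py, vx, vy]`; the diagonal initial P and the acceleration-noise variance `noise_a` are made-up values, and Q is built with the common form G·Gᵀ·σ²ₐ, where G maps a random acceleration into position and velocity changes over Δt.

```python
import numpy as np

dt = 0.1  # example time step (seconds)

# State transition matrix F for a constant-velocity model.
F = np.array([
    [1, 0, dt, 0],
    [0, 1, 0, dt],
    [0, 0, 1,  0],
    [0, 0, 0,  1],
])

# Previous object covariance P: diagonal for simplicity --
# fairly sure about position, very unsure about velocity.
P = np.diag([1.0, 1.0, 100.0, 100.0])

# Process covariance Q from a random acceleration with variance
# noise_a (example value). A random acceleration a over time dt
# shifts position by (dt^2 / 2) * a and velocity by dt * a.
noise_a = 9.0
G = np.array([
    [dt**2 / 2, 0],
    [0, dt**2 / 2],
    [dt, 0],
    [0, dt],
])
Q = G @ G.T * noise_a

# Prediction step: P' = F P F^T + Q
P_pred = F @ P @ F.T + Q
print(P_pred[0, 0])  # position uncertainty grew: velocity uncertainty leaked in, plus Q
```

Note how the position variance `P_pred[0, 0]` grows even though the position variance in P was small: the large velocity uncertainty gets mapped into position uncertainty by F, and Q adds a little more on top.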
In my next post, I’ll explain what the Update equations mean.