Recap from the previous post: we’re trying to trace lane lines from a video. We just dealt with our pipeline spitting out zebra stripes, but now we’ve got a semicircle trace to deal with (see feature image above). What to do?
Step 1: Plot intermediate steps to locate the bug
Recall that our model is supposed to:
1. Un-distort the test image
2. Create a thresholded binary image
   - Essentially, we keep only pixels where there's a large change in colour, because that eliminates a lot of pixels we don't care about.
   - We care about pixels with a large change in colour because lane lines are drawn in colours (e.g. white, yellow) that are very different from the colour of the road (grey).
3. Transform the image from the car's perspective to a bird's-eye-view (looking from directly above) perspective
4. Identify the lane-line pixels and fit their positions with a polynomial (i.e. fit each lane line with a curve)
5. Draw the curve back onto the original image
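The fitting in step 4 boils down to a least-squares polynomial fit. Here's a minimal sketch using NumPy (not the project's actual code; the function name is mine). Note that we fit x as a function of y, which is the natural choice because lane lines are near-vertical in the bird's-eye view, so each y maps to roughly one x:

```python
import numpy as np

def fit_lane_line(lane_ys, lane_xs):
    """Fit x = a*y**2 + b*y + c through the detected lane-line pixels."""
    # Second-order is enough to capture gentle road curvature.
    return np.polyfit(lane_ys, lane_xs, deg=2)

# Pixels lying exactly on the curve x = 0.01*y**2 + 200
ys = np.arange(100)
xs = 0.01 * ys**2 + 200
a, b, c = fit_lane_line(ys, xs)  # recovers a ≈ 0.01, b ≈ 0, c ≈ 200
```
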
Let’s go back and plot (1) the lane line seen from a bird’s-eye view (the result of pipeline step 3) and (2) the pixels our model thinks belong to the lane lines (the histogram peaks), as we did last post.
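As a refresher, the histogram-peak idea can be sketched like this (a simplified version, assuming a binary image of 0s and 1s; the function name is mine, not the project's):

```python
import numpy as np

def lane_base_positions(binary):
    """Locate the base x-position of each lane line from a column histogram."""
    # Count lane pixels per column over the lower half of the image,
    # where the lines are closest to the camera and most vertical.
    histogram = binary[binary.shape[0] // 2:, :].sum(axis=0)
    midpoint = histogram.shape[0] // 2
    # The tallest peak on each side of centre marks a lane line's base.
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = midpoint + int(np.argmax(histogram[midpoint:]))
    return left_base, right_base

# Two vertical "lane lines" at x = 20 and x = 80
binary = np.zeros((10, 100))
binary[:, 20] = 1
binary[:, 80] = 1
print(lane_base_positions(binary))  # → (20, 80)
```

Stray bright specks add mass to this histogram, which is exactly how they can drag a peak (and the subsequent fit) off the true lane line.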
The green lines trace out the pixels the model thinks belong to the lane line. We can see that it’s the white specks at the bottom right of the image that are misleading our model. The resulting pair of fitted polynomials is:
Step 2: Fix the bug
So we want to reduce noise, i.e. get rid of those white specks.
We can do this by increasing the lower gradient threshold. That is, we require a larger change in colour relative to neighbouring pixels (a larger gradient) for a pixel to be retained.
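Here's a toy illustration of the effect (pure NumPy; the gradient values below are stand-ins for the scaled x-gradients the real pipeline computes from the image):

```python
import numpy as np

def apply_xgrad_threshold(scaled_gradients, xgrad_thresh):
    """Keep only pixels whose scaled x-gradient falls within the range."""
    binary = np.zeros_like(scaled_gradients)
    binary[(scaled_gradients >= xgrad_thresh[0]) &
           (scaled_gradients <= xgrad_thresh[1])] = 1
    return binary

# Small specks produce weak gradients; true lane edges produce strong ones.
gradients = np.array([0, 30, 30, 90, 90, 0])

print(apply_xgrad_threshold(gradients, (20, 100)))  # [0 1 1 1 1 0] - specks kept
print(apply_xgrad_threshold(gradients, (50, 100)))  # [0 0 0 1 1 0] - specks gone
```
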
xgrad_thresh contains the range of gradients we accept. We want to raise the lower end of that range. Increasing xgrad_thresh from (20,100) to (50,100) yields
That’s a lot of noise gone! Let’s plot the traced lane line again to see if the problem has been fixed.
Seems like we’ve still got a problem. Why is the right lane line still so strange? Let’s plot the lane line seen from a bird’s-eye view again:
This time the model is ignoring pixels that we want to include in our lane. We can see that the lane segment in the top right is a lot thinner than it was before – so much so that it’s not getting picked up by our model.
Because the model only gets the part of the right lane that’s at the bottom and the dot in the middle, it traces out this pair of polynomials:
So whereas previously we wanted to reduce noise, now we want to include more pixels. So we decrease the lower bound of the gradient threshold:
xgrad_thresh = (40,100) (previously set to (50,100)).
Let’s plot the bird’s-eye-view image and trace again.
Looks great! Here are the lane lines traced onto the original image:
So we had to remove some noise, but not too much, and we had to find the optimal parameter values by experimenting.
Step 3: Robustness checks
I then went on to check whether this change had messed up the line tracing in the other test images. Fortunately it hadn’t. You can imagine how tuning parameters is a balancing act, not only within one input (image), as you’ve seen here, but also across different inputs. A configuration that works really well for one test image may work disastrously for another. Our goal is to find a model that generalises well, and that involves trading off output quality across different inputs.
A next step would be to tune these parameters more rigorously and automate processes instead of doing it by hand. Doing it by hand to start is a good way to get an intuition for what each parameter setting does though.
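For instance, one could grid-search the lower gradient bound and score each candidate on every test image, keeping the setting with the best worst-case score. A sketch of the idea (the scoring function here is a toy stand-in; a real one would compare the fitted curves against hand-labelled lane lines):

```python
import numpy as np

def grid_search_threshold(images, score_fn, lower_bounds=range(20, 60, 10)):
    """Pick the lower gradient bound that maximises the worst-case score."""
    best_bound, best_worst = None, -np.inf
    for lo in lower_bounds:
        # Score every test image with this candidate threshold...
        scores = [score_fn(img, xgrad_thresh=(lo, 100)) for img in images]
        # ...and judge the candidate by its weakest result, so one
        # disastrous image can't hide behind several good ones.
        worst = min(scores)
        if worst > best_worst:
            best_bound, best_worst = lo, worst
    return best_bound

# Toy stand-in: pretend each image's score peaks at a different bound.
def toy_score(img, xgrad_thresh):
    return -abs(xgrad_thresh[0] - img["ideal_lower_bound"])

images = [{"ideal_lower_bound": 40}, {"ideal_lower_bound": 50}]
print(grid_search_threshold(images, toy_score))  # → 40
```
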
- Code on GitHub (Project 4: Advanced Lane Line Detection)
- Bugger! Detecting Lane Lines (A compilation of interesting bugs I found while working on the Introductory Lane Line Detection project and how I dealt with them)
- Zebra Stripes for lane lines: tackling another bug