Bugger! 4.1: Zebra Stripes for Lane Lines

Jessica Yung · Programming, Self-Driving Car ND

In Project 4: Advanced Lane Lines of Udacity’s Self-Driving Car Nanodegree, we use computer vision techniques to trace out lane lines from a video taken from a camera mounted at the front of a car, like so:

[Screenshot: the result I’m supposed to get]

Note that the lane lines are traced in white.

In this series, I share some bugs I came across and how I tackled them. The code can be found in my GitHub repo (files p4-advanced-lane-lines.ipynb and helperfunctions.py).

Bug Uno

Tracing out the lane lines for each image takes over a hundred lines of code, so once I got the trace to work on one image, I consolidated the code into self-contained functions and put those into one function image_pipeline that I could feed all my images through. On running a test image through the pipeline, I got this:

[Screenshot: the buggy output]

What is going on??

Tracking down the bug

To track down the bug (where the mistake had come from), I asked the function to plot intermediate results (i.e. to show its rough work).

List expected functionality

The pipeline was supposed to:

  1. Undistort the test image
  2. Create a thresholded binary image
    1. Essentially, we select only pixels where there’s a large change in colour, since that discards most of the pixels we don’t care about
  3. Transform the image from the car’s perspective to a bird’s-eye view (looking from directly above)
  4. Identify the lane line pixels and fit their positions with a polynomial (i.e. fit the lane with a curve)
  5. Draw the curve back onto the original image
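The thresholding in step 2 can be sketched in pure NumPy. (The real pipeline uses OpenCV’s Sobel operators and tuned thresholds; gradient_threshold and its threshold values here are my own illustrative stand-ins.)

```python
import numpy as np

def gradient_threshold(gray, thresh=(0.2, 1.0)):
    """Keep only pixels whose horizontal gradient magnitude falls in thresh."""
    gx = np.abs(np.diff(gray.astype(float), axis=1))  # change between neighbours
    gx = np.pad(gx, ((0, 0), (0, 1)))                 # pad back to original width
    if gx.max() > 0:
        gx = gx / gx.max()                            # scale to [0, 1]
    binary = np.zeros_like(gx, dtype=np.uint8)
    binary[(gx >= thresh[0]) & (gx <= thresh[1])] = 1
    return binary

# A dark road with one bright "lane line" stripe: only the stripe's edges,
# where the colour changes sharply, survive the thresholding.
img = np.zeros((4, 8))
img[:, 3:5] = 1.0
print(gradient_threshold(img))
```

Lane lines stand out against the road surface, so keeping only large-gradient pixels throws away most of the tarmac while preserving the line edges.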
Plot intermediate results

I decided first to plot the bird’s-eye-view rendering of the lane to check that the first three steps had gone all right. I did this by uncommenting plt.imshow(warped) in my image_pipeline and commenting out the final plot of the buggy output image (otherwise the final plot would overwrite the one I wanted).

[Screenshot: bird’s-eye view of the lane]

Looks good!

It looked fine, so I then plotted the polynomial’s predicted x-values (fit_poly) onto the image:

[Screenshot: predicted x-values plotted on the warped image]

It also looked fine, so I plotted the next result, i.e. the actual traced left lane line from draw_poly.

[Screenshot: the traced left lane line]

Result: polyfit_left

It was not fine. So what happened?

Backtrack through code once you’ve found an erroneous intermediate result

Going back to the code, I saw that the plot above, polyfit_left, came from

where left_fit was meant to be the coefficients [A,B,C] of the fitted curve x=Ay^2+By+C.

[Image: equations of curves. Image credits: Udacity]
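For reference, this is how fitting and evaluating such a curve works with NumPy: np.polyfit returns the coefficients highest power first, so for degree 2 that is [A, B, C], and np.polyval evaluates the polynomial at given points.

```python
import numpy as np

# Sample points on a known curve x = 2y^2 + 3y + 1.
y = np.linspace(0, 10, 50)
x = 2 * y**2 + 3 * y + 1

coeffs = np.polyfit(y, x, 2)    # highest power first: [A, B, C] ~ [2, 3, 1]
x_pred = np.polyval(coeffs, y)  # evaluates Ay^2 + By + C at each y
print(np.round(coeffs, 3))
```

Note we fit x as a function of y, not the other way round, because lane lines are close to vertical in the image and may have two x-values for one y.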

I then saw that

lefty and leftx are arrays of the y- and x-coordinates of the lane line pixels. So what does fit_second_order_poly return?

It returns the predicted x-coordinates, not the coefficients of the polynomial (the description of the curve)! No wonder I was getting weird values.
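The post doesn’t reproduce the function body, but judging from the description the bug amounts to something like this (the names lefty, leftx, fit and fitdep come from the post; the body is my reconstruction):

```python
import numpy as np

def fit_second_order_poly(indep, data):
    fit = np.polyfit(indep, data, 2)  # coefficients [A, B, C]
    fitdep = np.polyval(fit, indep)   # predicted x for each y
    return fitdep                     # bug: returns the fitted values, not [A, B, C]

lefty = np.array([0., 1., 2., 3.])        # y-coordinates of lane pixels
leftx = 2 * lefty**2 + 3 * lefty + 1      # x-coordinates on x = 2y^2 + 3y + 1
left_fit = fit_second_order_poly(lefty, leftx)
print(left_fit)  # four x-values, not three coefficients
```

Anything downstream that treats left_fit as [A, B, C] then evaluates a completely different curve, which is exactly the kind of garbage trace in the screenshot above.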

This was also evident from printing left_fit:

Fix bug

I corrected the function to return fit as well as fitdep when the return_coeffs parameter is set to True, and set that parameter in my image pipeline. I kept this flexibility because I thought I might sometimes only want the fitted values.
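Sketching the fix the same way (return_coeffs is the parameter named in the post; the argument names, return order and body are my guesses):

```python
import numpy as np

def fit_second_order_poly(indep, data, return_coeffs=False):
    fit = np.polyfit(indep, data, 2)  # coefficients [A, B, C]
    fitdep = np.polyval(fit, indep)   # fitted values
    if return_coeffs:
        return fitdep, fit            # caller can now get the coefficients too
    return fitdep

lefty = np.array([0., 1., 2., 3.])
leftx = 2 * lefty**2 + 3 * lefty + 1
fitdep, left_fit = fit_second_order_poly(lefty, leftx, return_coeffs=True)
print(np.round(left_fit, 3))  # now the coefficients: ~[2. 3. 1.]
```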

Running the image pipeline again yielded this curve plot:

[Screenshot: the corrected curve plot]

Beautiful! I then plotted the output image to check there were no further bugs:

[Screenshot: the final output image]

Fab. But then when I tried this on the other test images, I got this result for test1.jpg:

[Screenshot: test1.jpg with a semicircle-esque lane trace]

Why did I get a semicircle-esque lane trace? Stay tuned for the big reveal in my next post.

Takeaway: be careful how you name your variables and don’t get them mixed up!

Further reading:

  • Project code (files p4-advanced-lane-lines.ipynb and helperfunctions.py)
  • Bugger! Detecting Lane Lines – collection of bugs from Project 1 of Udacity’s Self-Driving Car Nanodegree (another lane line detecting project)
