Advanced Lane Finding - Udacity Self-Driving Car Engineer Nanodegree


This is my fourth project for Udacity’s Self-Driving Car Engineer Nanodegree.

The goal of this project is to detect and track lane lines from a video feed using Python. The first step is to create a thresholded binary image by converting each frame from RGB to HLS color space and applying thresholds to the saturation and hue channels. The next step is a perspective transform that shows the lane lines from a “birds-eye” view (essentially mapping a trapezoidal region of the road onto the corresponding rectangle). The final step identifies the lane-line pixels using the sliding-windows method: seeded by a histogram of the bottom of the image, the algorithm works its way up, re-centering a search window in the left and right halves of the image on the zones with the highest pixel counts. Finally, a polynomial curve is fit to the identified lane pixels.
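The sliding-windows search described above can be sketched in NumPy alone, taking an already-thresholded and warped binary image as input. This is a minimal illustration, not the project's actual code; the function name and the window parameters (nwindows, margin, minpix) are assumptions chosen for clarity.

```python
import numpy as np

def sliding_window_lane_search(binary_warped, nwindows=9, margin=50, minpix=20):
    """Locate left/right lane pixels in a birds-eye binary image and fit a
    second-order polynomial x = A*y^2 + B*y + C to each line. Illustrative
    sketch only; parameter defaults are assumptions, not the original project's."""
    h, w = binary_warped.shape
    # Histogram of the lower half of the image seeds the starting x positions.
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    leftx_current = int(np.argmax(histogram[:midpoint]))
    rightx_current = int(np.argmax(histogram[midpoint:])) + midpoint

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = h // nwindows
    left_inds, right_inds = [], []

    for window in range(nwindows):
        # Each window spans a horizontal strip, moving bottom-up.
        y_low = h - (window + 1) * window_height
        y_high = h - window * window_height
        for current, inds in ((leftx_current, left_inds),
                              (rightx_current, right_inds)):
            good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                    (nonzerox >= current - margin) &
                    (nonzerox < current + margin)).nonzero()[0]
            inds.append(good)
        # Re-center each window on the mean x of the pixels it captured.
        if len(left_inds[-1]) > minpix:
            leftx_current = int(nonzerox[left_inds[-1]].mean())
        if len(right_inds[-1]) > minpix:
            rightx_current = int(nonzerox[right_inds[-1]].mean())

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```

On a real frame, the binary input would come from the HLS thresholding and perspective warp described above (e.g. via OpenCV's cv2.warpPerspective); here any two roughly vertical bands of nonzero pixels will be found and fit.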

As you can see, the algorithm does a good job of identifying the lane lines. It detects the pixels that correspond to the left and right lane lines, and fits a second-order polynomial to each curve. The curves are then transformed from the birds-eye view back to the original frame of the video, and the area between them is filled in to highlight the full lane. The polynomial curve is tracked from frame to frame so that it does not deviate much when there are false positives such as strong shadows or missing lane markers. In addition to the curvature radius of the lane, I have calculated the position of the vehicle relative to the center of the lane. Both values are shown at the top of the video.
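The two displayed values can be computed directly from the fitted polynomials. For x = Ay² + By + C, the standard radius-of-curvature formula is R = (1 + (2Ay + B)²)^(3/2) / |2A|, and the vehicle offset is the distance between the image center and the midpoint of the two lane fits at the bottom of the frame. A sketch under assumed meters-per-pixel conversion factors (the actual calibration values are not given in the description):

```python
import numpy as np

# Assumed meters-per-pixel scales, typical for this kind of dashcam setup;
# the original project's calibration values may differ.
YM_PER_PIX = 30 / 720   # meters per pixel in the y direction
XM_PER_PIX = 3.7 / 700  # meters per pixel in the x direction

def curvature_radius(fit, y_eval):
    """Radius of curvature of x = A*y^2 + B*y + C at y_eval, in the same
    units as the fit (pass a fit done in meters for a radius in meters)."""
    A, B, _ = fit
    return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)

def vehicle_offset(left_fit, right_fit, img_width, img_height):
    """Signed distance (meters) of the camera center from the lane center,
    evaluated at the bottom of the image; fits are in pixel units."""
    y = img_height
    left_x = np.polyval(left_fit, y)
    right_x = np.polyval(right_fit, y)
    lane_center = (left_x + right_x) / 2
    return (img_width / 2 - lane_center) * XM_PER_PIX
```

A positive offset here means the camera (assumed to be mounted at the vehicle's centerline) sits to the right of the lane center; the sign convention is a design choice.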
