Road Lane Detection For Autonomous Driving

Why Road Lane Detection?

Road lane detection is a fundamental capability of autonomous driving vehicles. It plays a significant role in path planning, braking control, and steering. Even vehicles without autonomous functions use lane detection to warn the driver to correct the car's position if it starts to drift out of its lane, helping to prevent accidents.

How Did We Implement Lane Detection Using the Hough Line Method?

The Hough transform is a feature extraction technique used in image analysis and computer vision. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform.

Hough-line-based lane detection is a purely mathematical way to implement lane detection. For this, we need a clear picture or a video source of the road.

Screenshot 2024-05-30 at 16.17 1.png

Since full-color RGB images take a lot of computational power to process, we first convert the full-color image into a grayscale (black and white) image. Processing a single-channel grayscale image takes far less computation than processing all three color channels.
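
As a minimal sketch of this step, assuming OpenCV and a hypothetical input file named road.jpg (neither is specified in the original write-up), the conversion is a single call:

```python
import cv2

# Hypothetical input frame; OpenCV loads it in BGR channel order.
image = cv2.imread("road.jpg")

# Collapse the three colour channels into a single intensity channel.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```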

Screenshot 2024-05-30 at 16.18 1.png

Another notable problem in lane tracking is image noise: sharp, noisy pixels can be picked up as false edges and confuse edge detection algorithms. So we smooth the image by reducing its noise, which can be done with a Gaussian filter. A Gaussian filter slides a kernel over the image and replaces each pixel with a weighted average of its neighboring pixels, which makes the image smoother.
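
A minimal sketch of the smoothing step, assuming OpenCV; the 5x5 kernel size is an illustrative choice rather than a value from the original pipeline:

```python
import cv2

gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)

# 5x5 Gaussian kernel; a sigma of 0 lets OpenCV derive it from the kernel
# size. Each pixel becomes a weighted average of its neighbourhood, which
# suppresses noise that would otherwise show up as false edges later on.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
```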

Screenshot 2024-05-30 at 16.19 1.png

Road lines can be detected as edges inside the picture. To detect edges we need to measure the change of brightness across adjacent pixels, which we call the gradient. For example, in the image below, the white line is much brighter than its adjacent pixels.

Screenshot 2024-05-30 at 16.20 1.png

So the gradient between the dark road surface and the white line is high, and the boundary can be detected as an edge. We used the Canny edge detector for this step: it measures the change in intensity between a given pixel and its adjacent pixels and keeps only the pixels where that change is strong. This way we can detect the edges in our grayscale picture.
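
A sketch of this step using OpenCV's Canny detector; the 50/150 thresholds are assumptions that would need tuning for real footage:

```python
import cv2

gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Pixels whose intensity gradient is above the high threshold (150) become
# strong edges; those between 50 and 150 are kept only if they connect to a
# strong edge. Everything else is discarded.
edges = cv2.Canny(blurred, 50, 150)
```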

Screenshot 2024-05-30 at 16.20 2.png

Looking at the above picture, we can see that there are a lot of edges scattered throughout it. So we select the region of pixels we are interested in, mask out the rest of the image, and process only the part we need. We use three points to define a triangle that isolates the part of the picture we want to focus on.
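
A sketch of the triangular region-of-interest mask with OpenCV and NumPy; the three vertex positions are guesses that depend entirely on how the camera is mounted:

```python
import cv2
import numpy as np

gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
height, width = edges.shape

# Three vertices of a triangle covering the road ahead (illustrative values).
triangle = np.array([[
    (int(0.1 * width), height),        # bottom-left
    (int(0.9 * width), height),        # bottom-right
    (width // 2, int(0.55 * height)),  # apex near the horizon
]], dtype=np.int32)

# Fill the triangle with white on a black mask, then keep only the edge
# pixels that fall inside it.
mask = np.zeros_like(edges)
cv2.fillPoly(mask, triangle, 255)
roi_edges = cv2.bitwise_and(edges, mask)
```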

Frame 40409.png

Now we need to use Hough space to detect lines inside the selected area. Mathematically, every (non-vertical) line can be represented by the equation y = mx + b. In Cartesian space we describe a line by its x and y points; in Hough space we describe it by its slope m and intercept b. Each individual coordinate in Hough space therefore denotes an entire line in 2-dimensional Cartesian space. As an example, the graphs below depict the line y = 2x + 2 in Cartesian space and in Hough space.

Screenshot 2024-05-30 at 16.22 3.png

As shown in the above picture, the green line in Cartesian space appears as a single point in Hough space, while the red and orange points in Cartesian space appear as lines in Hough space.
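
To make that mapping concrete, here is a small NumPy sketch; the point (3, 8) is simply an arbitrary point chosen to lie on y = 2x + 2. A single Cartesian point corresponds to the whole family of (m, b) pairs that could have produced it, i.e. the Hough-space line b = y - m*x.

```python
import numpy as np

# An arbitrary Cartesian point lying on y = 2x + 2
x0, y0 = 3.0, 8.0

# Every (m, b) pair satisfying y0 = m*x0 + b could have produced this point,
# so in Hough space the single point becomes the line b = y0 - m*x0.
m = np.linspace(-5, 5, 11)
b = y0 - m * x0

for mi, bi in zip(m, b):
    print(f"m = {mi:5.1f}  ->  b = {bi:5.1f}")

# Every other point on y = 2x + 2 traces a different Hough-space line, but
# all of those lines pass through the same point (m, b) = (2, 2).
```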

If two of these lines intersect at one point in Hough space, that intersection point represents a Cartesian line passing through both of the corresponding image points. The edge image created earlier is a collection of white pixels, and each of those pixels produces its own line in Hough space, which will look something like this,

Screenshot 2024-05-30 at 16.23.46.png

In the above example, only five lines are shown, representing five pixel points in the image; the real Hough space will be much more complex. In this image we can see four intersection points, which means the corresponding pixels form lines in the picture. To figure out which candidates are strong, visible lines and which are not lines at all, we use a voting system. First, we split Hough space into a grid, something like this.

Screenshot 2024-05-30 at 16.24.14.png

If we look at the cell where the red point is located, it contains only one intersection point. The cell where the purple, pink, and gray dots are located contains three intersection points, so that cell will get more votes than the other cells since it has more intersections.

Screenshot 2024-05-30 at 16.24.59.png

For example, the Hough space cells above would receive votes like this.
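
The voting can be sketched with a coarse accumulator grid over (m, b). The points, bin ranges, and resolutions below are invented purely for illustration; a real implementation works on thousands of edge pixels.

```python
import numpy as np

# Edge pixels, mostly lying on y = 2x + 2, plus one outlier
points = [(0, 2), (1, 4), (2, 6), (3, 8), (5, 1)]

# Coarse grid over slope m and intercept b: each cell collects votes
m_bins = np.linspace(-5, 5, 21)
b_bins = np.linspace(-20, 20, 41)
accumulator = np.zeros((len(m_bins), len(b_bins)), dtype=int)

for x, y in points:
    for i, m in enumerate(m_bins):
        b = y - m * x                      # Hough-space line of this point
        j = np.argmin(np.abs(b_bins - b))  # nearest intercept cell
        accumulator[i, j] += 1

# The cell with the most votes corresponds to the dominant line
i, j = np.unravel_index(accumulator.argmax(), accumulator.shape)
print(f"best fit: y = {m_bins[i]:.1f}x + {b_bins[j]:.1f}")  # ~ y = 2.0x + 2.0
```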

After that, the algorithm picks out the cells with the most votes and decides that each of them corresponds to a line. This gives us the lines in our selected area. But there is a problem with a Hough space built on the m and b axes. If we take a vertical line, every point has the same x value, so the change in x is zero and the slope m works out to this,

Screenshot 2024-05-30 at 16.25.58.png

So representing vertical lines in a Hough space that uses m and b as axes is impossible. For this kind of scenario we use a polar parameterization instead, which describes a line with the equation ρ = x cos θ + y sin θ. In this equation, ρ is the perpendicular distance from the origin to the line, and θ is the angle between the x axis and that perpendicular; x cos θ and y sin θ are the contributions of the point's x and y coordinates to that distance.

Screenshot 2024-05-30 at 16.26.29.png

So we draw a chart using θ and ρ as axes.

Screenshot 2024-05-30 at 16.27.08.png
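
As a small illustration of the polar form (the point (3, 8) is again arbitrary), sweeping θ for a single edge pixel and computing ρ = x cos θ + y sin θ traces out the sinusoidal curve that this pixel contributes to the chart:

```python
import numpy as np

# A single edge pixel at Cartesian coordinates (x, y)
x, y = 3.0, 8.0

# Sweep theta and compute rho = x*cos(theta) + y*sin(theta): the point
# traces a sinusoid in (theta, rho) space. Vertical lines are no longer a
# problem, since they simply correspond to theta = 0.
theta = np.deg2rad(np.arange(0, 180, 30))
rho = x * np.cos(theta) + y * np.sin(theta)

for t, r in zip(np.rad2deg(theta), rho):
    print(f"theta = {t:5.1f} deg  ->  rho = {r:6.2f}")
```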

This graph still follows the same rules as the earlier Hough space: each image point now traces a sinusoidal curve, and wherever two or more curves intersect there is a candidate line. So we use the same voting system as before.

Screenshot 2024-05-30 at 16.27.36.png

In real-life scenarios there will be many more points and much finer grid cells. Taking the cells with the most votes and drawing the corresponding lines onto the image gives us the line locations, and blending those lines with the original image gives us a final result something like this.

Screenshot 2024-05-30 at 16.28.17.png
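
Putting the whole pipeline together, here is an end-to-end sketch using OpenCV's probabilistic Hough transform (cv2.HoughLinesP). The file name, resolution parameters, vote threshold, and line-length limits are all assumptions that would need tuning for real footage:

```python
import cv2
import numpy as np

# Hypothetical input frame; all thresholds below are illustrative guesses.
frame = cv2.imread("road.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Triangular region of interest in front of the camera
h, w = edges.shape
mask = np.zeros_like(edges)
cv2.fillPoly(mask, np.array([[(int(0.1 * w), h),
                              (int(0.9 * w), h),
                              (w // 2, int(0.55 * h))]], np.int32), 255)
roi = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform: rho resolution 2 px, theta resolution 1 deg,
# and a cell needs at least 100 votes to be reported as a line segment.
lines = cv2.HoughLinesP(roi, 2, np.pi / 180, 100,
                        minLineLength=40, maxLineGap=5)

# Draw the detected segments on a blank canvas and blend with the original
overlay = np.zeros_like(frame)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(overlay, (int(x1), int(y1)), (int(x2), int(y2)),
                 (0, 255, 0), thickness=8)
result = cv2.addWeighted(frame, 0.8, overlay, 1.0, 0.0)

cv2.imwrite("lanes.jpg", result)
```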

When Things Get Real

In practice, Hough-line-based lane tracking is rarely used in modern autonomous vehicles because of its computational cost and lack of speed. Hough-line algorithms also struggle to detect curved lane markings, which are very common on real roads. Instead, modern autonomous vehicles perform lane tracking with deep learning techniques such as instance segmentation and detection, using multiple embedded cameras on the vehicle rather than the Hough space approach. Another reason is that the accuracy of Hough-line tracking depends heavily on weather, speed, and lane marker condition, and if the camera gets obstructed by dirt or snow, the lane detection system has to be deactivated. On the other hand, the Hough line method is a purely algorithmic, mathematical computer vision approach that was used for lane tracking in the past and serves as a foundation for other tracking-based computer vision algorithms.
