Detecting lines using probabilistic Hough transform

We use the Canny algorithm to create a binary edge image and then use the HoughLinesP function to detect lines in that binary image.

First we take the bird's-eye view image of a straight lane for processing.

When HoughLinesP is applied we get the following resulting lines:

Then we take a curved lane image.

Applying HoughLinesP again gives the following resulting lines:

Here is the code:


#include <vector>

#include "cv.h"
#include "highgui.h"

using namespace std;
using namespace cv;

int hough(const char *input_path, const char *output_path)
{
    Mat src_o, src, dst;
    src_o = imread(input_path);

    // Convert to grayscale for edge detection.
    cvtColor(src_o, src, CV_BGR2GRAY);

    // Canny produces the binary edge image that HoughLinesP works on.
    Canny(src, dst, 200, 255, 3);

    // rho = 1 px, theta = 1 degree, threshold = 20 votes,
    // minimum line length = 10 px, maximum allowed gap = 20 px.
    vector<Vec4i> lines;
    HoughLinesP(dst, lines, 1, CV_PI/180, 20, 10, 20);

    // Draw every detected segment on the original image in red.
    for( size_t i = 0; i < lines.size(); i++ )
    {
        line( src_o, Point(lines[i][0], lines[i][1]),
              Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
    }

    imwrite(output_path, src_o);

    return 0;
}
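
For completeness, a minimal driver (my own addition, following the same pattern as the main function in the IPM post below; the output name hough.jpg is arbitrary) would be:

int main(int argc, char** argv)
{
    // argv[1] is the input image path; the result is written to hough.jpg.
    hough(argv[1], "hough.jpg");
    return 0;
}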


The next step is to take the lines and calculate the lane midpoint.
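
As a rough sketch of what that step could look like (this is my own illustration, not the project's final code; the function name lane_midpoint_x and the left/right split by image half are assumptions), one simple approach is to group the detected segments into left and right sets and average their x positions:

#include <vector>

#include "cv.h"

using namespace std;
using namespace cv;

// Estimate the lane midpoint x-coordinate from HoughLinesP segments.
float lane_midpoint_x(const vector<Vec4i> &lines, int image_width)
{
    float left_sum = 0, right_sum = 0;
    int left_count = 0, right_count = 0;

    for( size_t i = 0; i < lines.size(); i++ )
    {
        // Use the segment's average x as its horizontal position.
        float x = (lines[i][0] + lines[i][2]) / 2.0f;
        if (x < image_width / 2.0f) { left_sum += x;  left_count++;  }
        else                        { right_sum += x; right_count++; }
    }

    // If one side is missing, fall back to the image centre.
    if (left_count == 0 || right_count == 0)
        return image_width / 2.0f;

    float left_x  = left_sum / left_count;
    float right_x = right_sum / right_count;
    return (left_x + right_x) / 2.0f;
}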

The self-driving car platform (GTAR)

We are using a Traxxas Slash 4x4 and a Jetson TX1 (which provides 1 teraflop of peak performance) to build the self-driving car platform.
First we removed the body and the radio controller from the car and added 4 narrow bolts to hold the two plastic platforms.
We are going to use two pieces of plastic sheet like the ones below. Both have a few holes.

Here is how it looks after adding the lower platform.

Now we detach the bottom plastic base from the Jetson TX1 and remove its plastic legs.

Attach the Jetson TX1 plastic base to the lower platform, with the onboard camera facing the front.

Add the Jetson TX1 board and the power supply to the platform.

And add the upper platform and the sonar sensor at the front.

I have not received the camera yet. It will be placed on the upper platform, along with the GPS, gyroscope, and compass.

The lower platform needs some cuts to free the front and rear wheels. Other than that it is complete, and we are ready to add the remaining electronics and start coding.

Finally, the sonar sensor detecting an obstacle in front of it:
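
For reference, the obstacle check itself can be as simple as comparing the reported distance against a threshold. This is only a sketch: read_distance_cm() is a hypothetical placeholder standing in for whatever driver the sonar ends up using on the TX1, and the 30 cm threshold is an arbitrary number.

#include <iostream>

// Hypothetical placeholder for the real sonar driver. In the actual setup
// this would query the sensor; here it returns a fixed value so the
// snippet compiles on its own.
float read_distance_cm()
{
    return 25.0f;
}

// Flag an obstacle when something is closer than the safety threshold.
bool obstacle_ahead(float threshold_cm)
{
    float d = read_distance_cm();
    return d > 0 && d < threshold_cm;   // d <= 0 means no echo / invalid reading
}

int main()
{
    if (obstacle_ahead(30.0f))
        std::cout << "Obstacle ahead, stop!" << std::endl;
    return 0;
}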

Lane Detection: Step 1: Inverse/Warp perspective mapping

To detect lanes we first do some image transformations. Let's consider the following image. It has lane markings (green). The lanes are almost parallel in the real world, but as we see in the image they appear wider at the bottom and narrower at the top.


To make detection easier we want a bird's-eye view, so that the lane lines become parallel to each other. OpenCV has a built-in function to do the transformation, so we just use that. The process is called warp perspective mapping (WPM), and we need to provide 4 points on the original image (I have marked them red in the image above) and the corresponding real-world coordinates. I have just made up the numbers to make the lanes parallel.
Here is the result:
And here is the code (just a demo, without error handling):
#include "iostream"

#include "cv.h"
#include "highgui.h"

using namespace std;
using namespace cv;


void ipm(const char *input_path, const char *output_path)
{
    Mat src, dst;
    src=imread(input_path, 1);

    Point2f img_vertices[4];
    img_vertices[0] = Point(213, 104);
    img_vertices[1] = Point(410, 106);
    img_vertices[2] = Point(6, 372);
    img_vertices[3] = Point(540, 400);

    Point2f world_vertices[4];
    world_vertices[0] = Point(26, 104);
    world_vertices[1] = Point(540, 106);
    world_vertices[2] = Point(26, 400);
    world_vertices[3] = Point(540, 400);

    Mat warpMatrix = getPerspectiveTransform(img_vertices, world_vertices);
    
    warpPerspective(src, dst, warpMatrix, dst.size(), INTER_LINEAR);

    imwrite(output_path, dst);    
}


int main(int argc, char** argv)
{
    ipm(argv[1], "ipm.jpg");
    return 0;
}

At the end of all the steps, the goal is to detect all paths and objects
in the KITTI dataset (http://www.cvlibs.net/datasets/kitti/index.php).

Student registration reached 365 in less than a week

As I posted previously, I asked students to register for the Autonomous Vehicle course, and in less than a week we have 365 registrations...