I am now able to calculate the cross track error and steer the robot's front wheel to correct it. The steering driver is very crude at the moment: it has only three positions, Left, Middle and Right. I'll have to send more positions using a second parameter that carries the exact angle; the servo motor can position itself anywhere between 0 and 180 degrees with 1 degree resolution, where 90 degrees is straight ahead. I wanted to make it work first, then make it precise. Here is the video; please pay attention to the robot's front wheel and where the lane is:
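For the finer-grained command I have in mind, something like the sketch below should do. The serial protocol string, port, and deflection limit here are assumptions for illustration, not the current driver; only the 0-180 range with 90 as center is real:

```python
# A minimal sketch of sending an absolute servo angle instead of
# Left/Middle/Right. The "S<angle>" protocol and the 35-degree limit
# are hypothetical; only the 0-180/90-center servo range is real.
import serial  # pyserial

SERVO_CENTER = 90      # 90 degrees = straight ahead
MAX_DEFLECTION = 35    # assumed mechanical limit of the front wheel

def steer_command(angle_offset, port):
    """Send an absolute servo angle (0-180), clamped around center."""
    offset = max(-MAX_DEFLECTION, min(MAX_DEFLECTION, angle_offset))
    port.write(("S%d\n" % int(round(SERVO_CENTER + offset))).encode("ascii"))

# usage (hypothetical port):
# port = serial.Serial("/dev/ttyUSB0", 115200)
# steer_command(-12, port)   # steer 12 degrees left of center
```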
I first detect the individual lane marking lines and group them together (if one line is within 100 pixels, configurable, of another line, they form one group). Then I calculate the midpoint for each group, which gives me the list of lane markings. The lanes can be curved (I'll have to fit a 3rd degree Bezier spline instead of the straight lines I use right now), but all the lines should still be within 100 pixels of each other for one lane marking; it's like the snake algorithm for tracking the lanes, though not exact.
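Here is a rough sketch of that grouping step, assuming the detected lines come out of something like a Hough transform as (x1, y1, x2, y2) tuples; the function names are mine, for illustration:

```python
# Cluster detected line segments into lane markings: lines whose
# horizontal positions lie within GROUP_DIST pixels share a group.
GROUP_DIST = 100  # pixels; configurable

def line_x(line):
    """Average x of a line segment, used as its horizontal position."""
    x1, _, x2, _ = line
    return (x1 + x2) / 2.0

def group_lines(lines):
    """Single pass over lines sorted by x; chain nearby lines together."""
    groups = []
    for line in sorted(lines, key=line_x):
        if groups and line_x(line) - line_x(groups[-1][-1]) <= GROUP_DIST:
            groups[-1].append(line)
        else:
            groups.append([line])
    return groups

def marking_positions(groups):
    """One lane marking per group: the mean x of its member lines."""
    return [sum(line_x(l) for l in g) / len(g) for g in groups]
```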
After I get all the lane markings, I take the middle point of the lane from each neighboring pair of lane markings. Then from the image position I get the current car/camera position and calculate the cross track error (CTE) as the difference from the lane midpoint.
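In code, the CTE step looks roughly like this; the assumption that the car sits at the horizontal center of the bird's-eye image is mine for illustration (the real position comes from the image/camera setup):

```python
# Sketch of the CTE calculation; needs at least two lane markings.
def cross_track_error(markings, image_width):
    """CTE = nearest lane midpoint x minus car x (sign = steering side)."""
    car_x = image_width / 2.0  # assumed car/camera position in the image
    # midpoint of each lane, i.e. of each neighboring pair of markings
    lane_mids = [(a + b) / 2.0 for a, b in zip(markings, markings[1:])]
    # follow the lane whose midpoint is closest to the car
    mid = min(lane_mids, key=lambda m: abs(m - car_x))
    return mid - car_x  # positive means the lane center is to the right
```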
I use a Kalman filter to eliminate single-frame errors, and over multiple frames the value stabilizes (in fewer than 20 frames it becomes almost correct). The Jetson TX1 GPU array can process 20 frames per second without any optimization effort. Here is how it looks for a single frame (this is the bird's-eye view, produced by a warp/perspective transform of the original frame):
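A minimal 1-D Kalman filter of the kind used to smooth the CTE across frames looks like this; the noise constants below are illustrative defaults, not my tuned values:

```python
# Scalar Kalman filter: predict (inflate variance), then correct with
# the new CTE measurement, weighted by the Kalman gain.
class Kalman1D:
    def __init__(self, q=0.01, r=4.0):
        self.x = 0.0  # state estimate (filtered CTE)
        self.p = 1.0  # estimate variance
        self.q = q    # process noise (how fast the true CTE drifts)
        self.r = r    # measurement noise (single-frame detection error)

    def update(self, z):
        self.p += self.q                # predict: uncertainty grows
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct toward measurement z
        self.p *= (1.0 - k)
        return self.x

# cte_filter = Kalman1D()
# smoothed = cte_filter.update(raw_cte)  # call once per frame
```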
The image shows the detected lane markings as thicker straight lines, the lane midpoint (green) and the current car position (pink), and along the top it shows the frame #, the car position, the CTE value, and which way it should steer the car.
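The overlay can be drawn with a few OpenCV calls; this is an approximation of what the frame shows, with the exact colors and text layout assumed:

```python
# Draw lane markings (thick lines), lane midpoint (green), car
# position (pink), and the status text across the top of the frame.
import cv2

def draw_overlay(img, markings, lane_mid, car_x, frame_no, cte, steer):
    h = img.shape[0]
    for x in markings:
        cv2.line(img, (int(x), 0), (int(x), h), (255, 255, 255), 5)
    cv2.circle(img, (int(lane_mid), h - 30), 8, (0, 255, 0), -1)
    cv2.circle(img, (int(car_x), h - 30), 8, (255, 0, 255), -1)
    cv2.putText(img, "frame %d  car %d  CTE %.1f  steer %s"
                % (frame_no, car_x, cte, steer),
                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    return img
```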