Building a realistic car simulator
I am thinking of buying an ECU and other parts to build a more realistic simulator, where I can steer, brake, and accelerate the car from a hardware setup in front of the computer.
Self Driving Car Tutorial (Part N) : Developing convolutional neural network
I have been talking to a friend about building a self-driving car and told him that I can show him how to build one in a few short sessions. He seemed interested, so I decided to write up some tutorials to show how it's done. I am starting from the end, because it's the part I am currently working on. This and a lot of other interesting material is taught in the Udacity Self Driving Car Engineer Nanodegree program. Check it out at Udacity.com.
This is how it looks when I used the network (with some max-pooling and dropout layers added) for a fully autonomous drive on the Udacity simulator:
In this tutorial I am taking the convolutional neural network developed by NVIDIA research (this is a tutorial, so we should build on existing research work rather than creating our own). The paper can be found at NVIDIA Self Driving Car.
Below is how their neural network looks.
I'll go step by step through how to build the network. The image shows the network bottom-up: at the bottom we provide a 66x200 image with 3 color channels (RGB), which is then normalized. We'll start from the normalized layer, so our input size is 66x200 with a depth of 3.
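The paper does not prescribe exactly how to normalize; a common choice in Keras is to scale pixel values into [-1, 1] with a Lambda layer, which also runs on the GPU. This is a minimal sketch (the normalization range is my assumption, not the paper's):

import numpy as np
from keras.models import Sequential
from keras.layers import Lambda

# Scale 8-bit pixel values from [0, 255] into [-1, 1].
# The exact range is an assumption; the NVIDIA paper only
# says the input is normalized, not how.
model = Sequential()
model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)))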
First, let's take a camera image.
This image is 160x320. We resize it to 100x200 and crop out the top 34 pixels, leaving a 66x200 image. This can be done with OpenCV like below:
import cv2
import matplotlib.pyplot as plt

image = cv2.imread("./sample.jpg")   # 160x320 camera frame (BGR)
img = cv2.resize(image, (200, 100))  # cv2.resize takes (width, height)
crp = img[34:, :]                    # drop the top 34 rows -> 66x200
plt.imshow(cv2.cvtColor(crp, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; convert for display
plt.show()
Alternatively, the cropping can be done in the model so that it runs on the GPU:
model.add(Cropping2D(cropping=((34, 0), (0, 0))))  # crop 34 rows from the top of the 100x200 input
And we get an image like this with shape 3@66x200:
from keras.models import Sequential
from keras.layers import Convolution2D, Flatten, Dense

input_shape = (66, 200, 3)
net = Sequential()

layer1 = Convolution2D(24, 5, 5, input_shape=input_shape,
                       border_mode='valid', activation='relu')
net.add(layer1)  # output size = 24@62x196
net.add(Convolution2D(36, 5, 5, border_mode='valid', activation='relu'))  # output size = 36@58x192
net.add(Convolution2D(48, 5, 5, border_mode='valid', activation='relu'))  # output size = 48@54x188
net.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu'))  # output size = 64@52x186
net.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu'))  # output size = 64@50x184
net.add(Flatten())    # 50*184*64 = 588,800 values
net.add(Dense(1156))
net.add(Dense(100))
net.add(Dense(50))
net.add(Dense(10))
net.add(Dense(1))     # single output: the steering angle

net.compile(loss='mean_squared_error', optimizer='adam')
net.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
convolution2d_1 (Convolution2D)  (None, 62, 196, 24)   1824        convolution2d_input_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 58, 192, 36)   21636       convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 54, 188, 48)   43248       convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 52, 186, 64)   27712       convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 50, 184, 64)   36928       convolution2d_4[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 588800)        0           convolution2d_5[0][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 1156)          680653956   flatten_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 100)           115700      dense_1[0][0]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 50)            5050        dense_2[0][0]
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 10)            510         dense_3[0][0]
____________________________________________________________________________________________________
dense_5 (Dense)                  (None, 1)             11          dense_4[0][0]
====================================================================================================
Total params: 680,906,575
Trainable params: 680,906,575
Non-trainable params: 0
____________________________________________________________________________________________________
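One thing to note: the parameter count is huge (about 680 million) because this version omits the strides from the paper. In the NVIDIA network, the three 5x5 convolutions use a 2x2 stride, which shrinks the feature maps to 31x98, 14x47, and 5x22 and makes the flattened layer only 1,152 values, feeding a first fully connected layer of 1164 neurons. Here is a sketch of that strided variant, using the same Keras 1 API (subsample is Keras 1's name for stride); it is a sketch of the published architecture, not code from the paper:

from keras.models import Sequential
from keras.layers import Convolution2D, Flatten, Dense

# Strided variant closer to the NVIDIA paper:
# 2x2 strides on the three 5x5 convolutions.
paper_net = Sequential()
paper_net.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3),
                            border_mode='valid', activation='relu'))  # 24@31x98
paper_net.add(Convolution2D(36, 5, 5, subsample=(2, 2),
                            border_mode='valid', activation='relu'))  # 36@14x47
paper_net.add(Convolution2D(48, 5, 5, subsample=(2, 2),
                            border_mode='valid', activation='relu'))  # 48@5x22
paper_net.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu'))  # 64@3x20
paper_net.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu'))  # 64@1x18
paper_net.add(Flatten())    # 1*18*64 = 1,152 values
paper_net.add(Dense(1164))  # fully connected sizes from the paper
paper_net.add(Dense(100))
paper_net.add(Dense(50))
paper_net.add(Dense(10))
paper_net.add(Dense(1))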
Looks good. Now we can generate training data by driving our car with a camera attached and a way to record the steering angle, and then train the network on those image/angle pairs.
The output of the network is the steering angle. So, given a new image, the network tells us what the car's steering angle should be. With the right training, the network should be able to steer a car, given mechanical/electrical components to turn the wheel.
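Training then reduces to a regression problem with the recorded angles as labels. A minimal sketch using the Keras 1 API, where X_train (preprocessed 66x200x3 frames) and y_train (matching steering angles), along with the .npy file names, are hypothetical placeholders for whatever data you collect while driving:

import numpy as np

# X_train: (num_samples, 66, 200, 3) preprocessed camera frames
# y_train: (num_samples,) recorded steering angles
# Hypothetical file names; substitute your own recorded data.
X_train = np.load("images.npy")
y_train = np.load("angles.npy")

net.fit(X_train, y_train,
        batch_size=64,
        nb_epoch=5,            # Keras 1 spelling of epochs
        validation_split=0.2,  # hold out 20% to watch for overfitting
        shuffle=True)
net.save("model.h5")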
Now sit back and relax while the car is being driven by the network.
Driving the Udacity car simulator with a homemade steering wheel (!)
The middle pin of the variable resistor is attached to the A0 pin of the Arduino, and the other two pins are attached to ground and +5 V. This creates a voltage divider, and the analog reading ranges from 0 to 1023. A reading of about 512 (the wiper at roughly 2.5 V, the middle of the range) corresponds to the straight-ahead steering position.
The code
Here is the Arduino code that reads the analog input from A0 and sends the value to the serial port. The code is adapted from the Arduino IDE analog sketch demo with minor changes:

int sensorPin = A0;    // select the input pin for the potentiometer
int sensorValue = 0;   // last value reported to the serial port
int newSensorValue = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // read the value from the sensor:
  newSensorValue = analogRead(sensorPin);
  // only report changes bigger than the noise floor
  if (abs(newSensorValue - sensorValue) > 5) {
    sensorValue = newSensorValue;
    Serial.println(sensorValue);
  }
  delay(10);
}

And here is the Python code that reads the data from the Arduino serial port and updates the angle variable. This angle is what gets sent to the simulator:
import serial
from time import sleep
import threading

port = "/dev/ttyACM1"
ser = serial.Serial(port, 9600)
angle = 0.0

def read_steering_angle():
    print("Monitoring steering angle")
    global angle
    while True:
        data = ser.readline()
        if len(data) > 0:
            #print('Got serial data:', data)
            # center the 0-1023 reading on 512 and scale it down
            angle = float(int(data) - 512) / 50.0

t = threading.Thread(target=read_steering_angle)
t.daemon = True
t.start()

while True:
    print(angle)
    sleep(1)
Demo
Here is how I am driving with the steering wheel. And here is a screen capture of the drive.
Tracking lanes on real hardware
Localization is not working yet, and I still need to create a PID controller. I am excited to finish the remaining pieces: the Kalman filter to track lanes correctly, localization using a particle filter, and a PID controller to drive the car.
Tracking lanes
Apply a Gaussian filter to smooth out outliers
Use the Canny method to create a binary image (detect edges)
Run the probabilistic Hough transform to get the lines
Remove lines with the wrong angle or size
Use a perspective transform to get image coordinates
Draw the lines on the original frame (see the sketch below)
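Here is a minimal Python/OpenCV sketch of this pipeline. The thresholds and the angle/length filter are assumptions to be tuned for your camera, and the perspective transform step is omitted for brevity:

import cv2
import numpy as np

def track_lanes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 1. Gaussian filter to smooth out outliers
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # 2. Canny edge detection -> binary image
    edges = cv2.Canny(blur, 100, 200)
    # 3. Probabilistic Hough transform
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 20,
                            minLineLength=10, maxLineGap=20)
    if lines is None:
        return frame
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        # 4. Reject lines with the wrong angle or size
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle > 90:
            angle = 180 - angle  # fold into [0, 90]
        length = np.hypot(x2 - x1, y2 - y1)
        if angle < 20 or length < 15:  # assumed thresholds: near-horizontal or tiny
            continue
        # 5. Draw the surviving lines on the original frame
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 3)
    return frame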
Calculating Cross Track Error (CTE) and steering the robot
I am now able to calculate the cross track error and steer the robot's front wheel to correct it. The steering driver is very crude at the moment: it has three positions, Left, Middle, and Right. I'll have to send more positions using a second parameter that gives the exact angle. The servo motor can position itself between 0 and 180 degrees with 1 degree resolution, where 90 degrees is straight ahead. I wanted to make it work first, then make it precise. Here is the video; please pay attention to the robot's front wheel and where the lane is:
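As a sketch of where this is heading, a simple proportional controller can map the CTE directly onto the servo range described above. The gain, the sign convention, and the set_servo_angle helper are all hypothetical:

def steering_angle_from_cte(cte, gain=5.0):
    """Map cross track error to a servo angle in [0, 180].

    90 degrees is straight ahead. The gain is an assumed value,
    and we assume positive CTE means the robot is right of the
    lane center, so we steer left to correct.
    """
    angle = 90.0 - gain * cte
    return max(0.0, min(180.0, angle))

# Hypothetical usage: set_servo_angle() stands in for whatever
# command the servo driver actually accepts.
# set_servo_angle(steering_angle_from_cte(current_cte))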
Detecting lines using the probabilistic Hough transform
We use the Canny algorithm to create a binary image and then the HoughLinesP function to detect lines in the binary image.
First we take the bird's-eye view image of a straight lane for processing.
When HoughLinesP is applied, we get the following lines:
Then we take a curved lane image.
When HoughLinesP is applied, we get the following lines:
Here is the code.
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int hough(const char *input_path, const char *output_path) {
    Mat src_o, src, dst;
    src_o = imread(input_path);
    cvtColor(src_o, src, CV_BGR2GRAY);
    // Binary edge image
    Canny(src, dst, 200, 255, 3);
    // Probabilistic Hough transform: rho=1, theta=1 degree,
    // threshold=20, minLineLength=10, maxLineGap=20
    std::vector<Vec4i> lines;
    HoughLinesP(dst, lines, 1, CV_PI/180, 20, 10, 20);
    for (size_t i = 0; i < lines.size(); i++) {
        line(src_o, Point(lines[i][0], lines[i][1]),
             Point(lines[i][2], lines[i][3]),
             Scalar(0, 0, 255), 3, 8);
    }
    imwrite(output_path, src_o);
    return 0;
}
The next step is to take these lines and calculate the lane midpoint.
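One simple way to do this (a sketch of the plan, not tested code): split the detected segments into left and right groups and average their positions. Splitting at the frame's vertical center line is an assumption that only holds while the camera sits between the two lane lines:

import numpy as np

def lane_midpoint(lines, frame_width):
    # lines: iterable of (x1, y1, x2, y2) segments, e.g. the
    # HoughLinesP output reshaped with lines.reshape(-1, 4).
    centers = np.array([(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines])
    left = centers[centers < frame_width / 2.0]
    right = centers[centers >= frame_width / 2.0]
    if len(left) == 0 or len(right) == 0:
        return None  # one lane line missing; reuse the last estimate
    return (left.mean() + right.mean()) / 2.0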
The self driving car platform (GTAR)
Here is how it looks after adding the lower platform.
Now we detach the Jetson TX1 bottom plastic base and remove the plastic legs.
Attach the Jetson TX1 plastic base to the lower platform. The onboard camera is facing front.
Add the Jetson TX1 board and the power supply to the platform.
And add the upper platform and the sonar sensor in the front.
I have not received the camera yet. The camera, GPS, gyroscope, and compass will all be placed on the upper platform.
The lower platform needs some cuts to free the front and rear wheels. Other than that it's complete, and we are ready to add the remaining electronics and start coding.
Finally, the sonar sensor detecting an obstacle in front of it:
Student registration reached 365 in less than a week
As I posted previously, I asked students to register for the Autonomous Vehicle course, and in less than a week we had 365 registrations...