Creating simple robots
I have been talking to a friend about building a self-driving car and told him that I can show him how to build one in a few short sessions. He seemed interested, so I decided to write up some tutorials showing how it's done. I am starting from the end, because that is the part I am currently working on. This and a lot of other interesting material is taught in the Udacity Self Driving Car Engineer Nanodegree program. Check that out at Udacity.com.
This is how it looks when I used the network (with some max-pooling and dropout layers) for a fully autonomous drive on the Udacity simulator:
In this tutorial I am using the convolutional neural network developed by NVIDIA research (this is a tutorial, so we will take existing research work rather than creating our own). The paper can be found at NVIDIA Self Driving Car.
Below is how their neural network looks.
I'll go step by step through how to build the network. The image shows the network bottom-up: at the bottom we provide a 66x200 image with 3 color channels (RGB), which is then normalized. We'll start from the normalized layer, so our input size is 66x200 with a depth of 3.
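The normalization itself is just arithmetic on the pixel values. Here is a minimal sketch in NumPy, assuming a simple scale-and-center scheme to [-0.5, 0.5] (the exact normalization used in the NVIDIA paper may differ):

```python
import numpy as np

# Stand-in for a 66x200 RGB camera frame with 8-bit pixel values
image = np.random.randint(0, 256, size=(66, 200, 3)).astype(np.float32)

# Scale to [0, 1], then center at zero
normalized = image / 255.0 - 0.5

print(normalized.shape)  # (66, 200, 3)
```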
First, let's take a camera image.
This image is of size 160x320. We resize it to 100x200 and crop out the top 34 pixels, leaving the 66x200 input. This can be done using OpenCV like below:
import cv2
import matplotlib.pyplot as plt

image = cv2.imread("./sample.jpg")   # 160x320 camera image
img = cv2.resize(image, (200, 100))  # cv2.resize takes (width, height)
crp = img[34:, :]                    # drop the top 34 rows -> 66x200
plt.imshow(crp)
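The slicing arithmetic can be sanity-checked without OpenCV; here is a quick sketch with a synthetic NumPy array standing in for the resized 100x200 image:

```python
import numpy as np

# Synthetic stand-in for the 100x200 RGB image produced by cv2.resize
img = np.zeros((100, 200, 3), dtype=np.uint8)

# Drop the top 34 rows, keeping rows 34..99
crp = img[34:, :]

print(crp.shape)  # → (66, 200, 3)
```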
The cropping can also be done inside the model, so that it runs on the GPU:
model.add(Cropping2D(cropping=((34, 0), (0, 0))))  # crops 34 rows from the top of the 100x200 resized image, leaving 66x200
And we get an image like this with shape 3@66x200:
from keras.models import Sequential
from keras.layers import Convolution2D, Flatten, Dense

input_shape = (66, 200, 3)
net = Sequential()
layer1 = Convolution2D(24, 5, 5, input_shape=input_shape, border_mode='valid', activation='relu')
net.add(layer1)  # output size = 24@62x196
# (the NVIDIA paper uses a 2x2 stride, subsample=(2,2), in the 5x5 layers;
#  with the default stride of 1 the feature maps stay larger)
net.add(Convolution2D(36, 5, 5, border_mode='valid', activation='relu'))  # output size = 36@58x192
net.add(Convolution2D(48, 5, 5, border_mode='valid', activation='relu'))  # output size = 48@54x188
net.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu'))  # output size = 64@52x186
net.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu'))  # output size = 64@50x184
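The output sizes follow from the 'valid' convolution formula, floor((input - kernel) / stride) + 1, applied to height and width. A quick sketch (the helper name conv_out is my own):

```python
def conv_out(size, kernel, stride=1):
    # 'valid' convolution: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

h, w = 66, 200
for kernel in (5, 5, 5, 3, 3):  # the five conv layers, stride 1
    h, w = conv_out(h, kernel), conv_out(w, kernel)
    print(h, w)  # 62 196, 58 192, 54 188, 52 186, 50 184
```

Note that with the NVIDIA paper's 2x2 stride in the 5x5 layers, conv_out(66, 5, 2) gives 31 instead of 62, which is where the paper's 31x98 feature map comes from.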
net.add(Flatten())
net.add(Dense(1156))
net.add(Dense(100))
net.add(Dense(50))
net.add(Dense(10))
net.add(Dense(1))
net.compile(loss='mean_squared_error', optimizer='adam')
net.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
convolution2d_1 (Convolution2D)  (None, 62, 196, 24)   1824        convolution2d_input_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 58, 192, 36)   21636       convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 54, 188, 48)   43248       convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 52, 186, 64)   27712       convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 50, 184, 64)   36928       convolution2d_4[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 588800)        0           convolution2d_5[0][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 1156)          680653956   flatten_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 100)           115700      dense_1[0][0]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 50)            5050        dense_2[0][0]
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 10)            510         dense_3[0][0]
____________________________________________________________________________________________________
dense_5 (Dense)                  (None, 1)             11          dense_4[0][0]
====================================================================================================
Total params: 680,906,575
Trainable params: 680,906,575
Non-trainable params: 0
____________________________________________________________________________________________________
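The parameter counts in the summary can be verified by hand: a conv layer has filters * (kh * kw * in_channels + 1) parameters and a dense layer has in * out + out. A quick check:

```python
conv_params = [
    24 * (5 * 5 * 3 + 1),   # convolution2d_1 -> 1824
    36 * (5 * 5 * 24 + 1),  # convolution2d_2 -> 21636
    48 * (5 * 5 * 36 + 1),  # convolution2d_3 -> 43248
    64 * (3 * 3 * 48 + 1),  # convolution2d_4 -> 27712
    64 * (3 * 3 * 64 + 1),  # convolution2d_5 -> 36928
]
dense_params = [
    588800 * 1156 + 1156,   # dense_1 (50 * 184 * 64 = 588800 flattened inputs)
    1156 * 100 + 100,       # dense_2
    100 * 50 + 50,          # dense_3
    50 * 10 + 10,           # dense_4
    10 * 1 + 1,             # dense_5
]
total = sum(conv_params) + sum(dense_params)
print(total)  # → 680906575, matching Total params in the summary
```

Almost all of the parameters sit in dense_1, which is why the NVIDIA paper's strided convolutions (which shrink the feature maps before flattening) make such a big difference to model size.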
Looks good. Now we can generate training data by driving our car with a camera attached and a way to measure the steering angle, and then train the network.
The output of the network is the steering angle. So, given a new image, the network will tell us what the car's steering angle should be. With the right training, the network should be able to steer the car, provided there are mechanical/electrical components to turn the wheel.
Now sit back and relax while the car is being driven by the network.
To measure the steering angle, a potentiometer can be read by an Arduino. The sketch below prints the reading to the serial port whenever it changes:

int sensorPin = A0;     // select the input pin for the potentiometer
int sensorValue = 0;    // variable to store the value coming from the sensor
int newSensorValue = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // read the value from the sensor:
  newSensorValue = analogRead(sensorPin);
  if (abs(newSensorValue - sensorValue) > 5) {
    sensorValue = newSensorValue;
    Serial.println(sensorValue);
  }
  delay(10);
}

Here is Python code that reads the data from the Arduino serial port and updates the angle variable. This is the angle that is sent to the simulator:
import serial
from time import sleep
import threading

port = "/dev/ttyACM1"
ser = serial.Serial(port, 9600)

angle = 0.0

def read_steering_angle():
    print("Monitoring steering angle")
    global angle
    while True:
        data = ser.readline()
        if len(data) > 0:
            # print('Got serial data:', data)
            angle = float(int(data) - 512) / 50.0

t = threading.Thread(target=read_steering_angle)
t.daemon = True
t.start()

while True:
    print(angle)
    sleep(1)
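The conversion in read_steering_angle maps the Arduino's 10-bit ADC reading (0..1023, with the potentiometer centered at 512) onto a steering angle of roughly -10.24 to +10.22. A small sketch of just that mapping (the helper name to_angle is my own):

```python
def to_angle(raw):
    # Center the 10-bit reading at 512 and scale, as in read_steering_angle
    return float(raw - 512) / 50.0

print(to_angle(512))   # → 0.0    (wheel centered)
print(to_angle(0))     # → -10.24 (full lock one way)
print(to_angle(1023))  # → 10.22  (full lock the other way)
```

The divisor 50.0 sets how many angle units one ADC step is worth; it would need to be calibrated against how the simulator (or the real steering linkage) interprets the angle.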