End-to-End Steering Prediction and Object Detection for Self-Driving Car using Machine Learning
DOI: https://doi.org/10.3126/kjse.v9i1.78363

Keywords: Self-driving, Raspberry Pi, Machine learning, Lane detection, Object detection, Traffic light detection, Stop sign detection, Google Colab, YOLOv8, Near real-time inferencing

Abstract
In general, a self-driving car uses a combination of sensors, cameras, radar, LIDAR, high-performance processors, and artificial intelligence (AI) to travel between destinations without the need for a human operator. This research uses a Raspberry Pi 4 (2 GB) as the processing element, with an attached camera module providing the input for the car. Machine learning models perform lane detection, steering angle prediction, and object detection, focusing on traffic light and stop sign detection, and their outputs drive the car's decisions. The Raspberry Pi is the core of the system: it processes the camera video by extracting individual image frames, detects the road lanes and the objects ahead, and makes decisions so that the car follows the lane lines without going off-track, maintains appropriate speed and turning angles, and obeys traffic rules by recognizing the status of traffic lights and detecting stop signs. For the lane detection and steering angle prediction model, lane images were collected by manually driving the car along the track. Several thousand of these images were transferred to a computer, pre-processed, and used to train the model on Google Colab. Similarly, for the object detection model, images were collected for the required classes and used to fine-tune a pre-trained YOLOv8 model so that it detects only those classes. Both trained models were then deployed on the Raspberry Pi. During inference, each new camera frame was converted into the format required by each model: YUV for the steering prediction model and RGB for the object detection model. The models' outputs were interpreted into a movement decision, which was sent to the motor driver module controlling the rotation and speed of the battery-operated (BO) motors. The car prototype successfully followed the predefined lane tracks and detected objects within its view, achieving near real-time inference at low speeds. In conclusion, this paper presents the integration of two machine learning models into a miniature self-driving car prototype that, within a resource-constrained environment, autonomously navigates road lanes and makes precise driving decisions.
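The abstract does not name the steering network's architecture, but end-to-end steering prediction from YUV camera frames is commonly implemented with a small NVIDIA-PilotNet-style convolutional network. The following is a minimal Keras sketch of that approach; the input size, layer widths, and training settings are assumptions, not details taken from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# PilotNet-style CNN (assumed architecture; the paper does not name the
# exact network). Input: one 66x200 YUV frame, output: one steering angle.
model = models.Sequential([
    layers.Input(shape=(66, 200, 3)),
    layers.Conv2D(24, (5, 5), strides=(2, 2), activation="elu"),
    layers.Conv2D(36, (5, 5), strides=(2, 2), activation="elu"),
    layers.Conv2D(48, (5, 5), strides=(2, 2), activation="elu"),
    layers.Conv2D(64, (3, 3), activation="elu"),
    layers.Conv2D(64, (3, 3), activation="elu"),
    layers.Flatten(),
    layers.Dense(100, activation="elu"),
    layers.Dense(50, activation="elu"),
    layers.Dense(10, activation="elu"),
    layers.Dense(1),  # predicted steering angle (regression output)
])

# Mean-squared error is the usual loss for angle regression.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
# model.fit(train_images, train_angles, epochs=..., validation_split=0.2)
model.save("steering_model.h5")  # later copied to the Raspberry Pi
```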
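Fine-tuning a pre-trained YOLOv8 model on a small set of custom classes, as the abstract describes, is typically done through the Ultralytics Python API. A minimal sketch follows; the checkpoint variant, the dataset config file name, and the training settings are assumptions rather than details from the paper:

```python
from ultralytics import YOLO

# Start from a pre-trained YOLOv8 checkpoint (nano variant assumed here;
# the paper only states that a pre-trained YOLOv8 model was used).
model = YOLO("yolov8n.pt")

# Fine-tune on a custom dataset containing only the required classes
# (e.g. traffic lights and stop signs). "dataset.yaml" is a hypothetical
# Ultralytics data config listing image paths and class names.
model.train(data="dataset.yaml", epochs=50, imgsz=640)

# Export the trained weights for deployment on the Raspberry Pi.
model.export(format="onnx")  # or keep the default .pt weights
```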
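The per-frame inference step described above (one camera frame, two color-space conversions, two models) could look roughly like the following OpenCV sketch, where steering_model and detector stand in for the two trained models and the resizing and scaling values are assumed:

```python
import cv2

# Minimal sketch of the per-frame inference loop on the Raspberry Pi.
# Index 0 selects the attached camera module.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()  # OpenCV delivers frames in BGR order
    if not ok:
        break

    # The steering model expects YUV input (as stated in the abstract).
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    yuv = cv2.resize(yuv, (200, 66)) / 255.0  # assumed input size and scale
    angle = steering_model.predict(yuv[None, ...])[0][0]

    # The object detector expects RGB input.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = detector(rgb)  # Ultralytics YOLO models are callable

    # A drive(angle, results) step would translate these outputs into
    # motor commands (omitted here).

cap.release()
```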
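The abstract does not identify the motor driver module; a common choice for BO motors on a Raspberry Pi is a dual H-bridge such as the L298N, driven through GPIO pins with PWM for speed control. A hypothetical sketch with made-up pin assignments:

```python
import RPi.GPIO as GPIO

# Hypothetical BCM pin numbers for one channel of a dual H-bridge driver
# (the paper only says "motor driver module").
IN1, IN2, ENA = 17, 27, 22

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA], GPIO.OUT)
pwm = GPIO.PWM(ENA, 1000)  # 1 kHz PWM on the enable pin
pwm.start(0)

def drive_forward(speed_percent):
    """Spin the BO motor forward at the given duty cycle (0-100)."""
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)
    pwm.ChangeDutyCycle(speed_percent)
```

Reversing direction swaps the IN1/IN2 levels, and the duty cycle sets the speed, which is how a movement decision maps onto the motors' rotation and speed.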