
Sensor Fusion for Autonomous Vehicle

Autonomous vehicles are one of the trending research topics and will revolutionize the future of ground transportation. They are set to replace ordinary vehicles because they can make decisions and perform the driving task on their own. Every year about 1.3 million people die in road accidents, i.e., roughly 3,700 people per day. Many of these accidents can be prevented by providing the necessary safety features in vehicles. In a few years, the demand for autonomous vehicles will increase tremendously. The safety of self-driving cars is the prime focus on which most automobile companies and research organizations are working tirelessly. Although research on autonomous vehicles is happening across the world, the solutions developed cannot be directly applied to the Indian scenario because of unorganized traffic, extreme weather conditions and a large population. Developing an autonomous vehicle requires algorithms for environment perception based on multiple sensors, covering object detection, tracking and classification, lane keeping, and speed detection and tracking of the ego-vehicle and nearby dynamic objects. A self-driving car also needs the equivalent of a driver's eye to make safe driving decisions. High-Definition (HD) Maps are therefore required to give detailed information for the self-driving task; they contain a huge amount of driving-assistance information. Existing navigation maps have metre-level accuracy, whereas HD Maps provide centimetre-level accuracy for localization and path planning. The primary sensors used to build an autonomous vehicle are LiDAR (Light Detection and Ranging), radar and camera; a GPS receiver and an IMU are also required for navigation.

To enable self-driving cars, highly accurate algorithms for environment perception and navigation are required, which can avoid obstacles and navigate safely from the source to the destination using multi-sensor perception. Vision-based sensors such as cameras are limited in darkness and bad weather. The efficiency of LiDAR is limited in extreme weather conditions like heavy rain and dense fog. Radar works in all weather, but its data is very sparse for detecting and tracking obstacles. So, to overcome the limitations of individual sensors, sensor-fusion-based algorithms that work in all weather conditions are in demand.

There are six levels of driving automation, Level 0 to Level 5, described by the Society of Automotive Engineers (SAE), as shown in Figure 1. At Level 0 the vehicle is fully controlled by the driver, whereas at Level 5 the vehicle performs all driving tasks. Most market studies also predict that the demand for autonomous vehicles will increase roughly tenfold in the coming five years, since they can improve productivity while travelling and reduce accidents.

Figure 1: The six levels of driving automation described by SAE

The four basic building blocks of an autonomous vehicle are perception, localization and mapping, path planning, and control.

Perception: Perception is the process of perceiving the environment using multiple sensors. One of the major concerns for autonomous vehicles is performance in extreme weather, such as heavy fog on a winter morning or the rainy season, when sensors like the camera and LiDAR cannot perceive the scene due to low visibility, which may lead to improper functioning of the vehicle. In that case, fusion of LiDAR, radar and camera is required to visualize the environment around the vehicle. Perception tasks include object detection, tracking, speed detection, lane keeping, and so on. Obstacle detection determines the objects present in the path of the vehicle. The different kinds of objects are detected and classified using state-of-the-art deep learning algorithms; they can be classified into cars, pedestrians, bikes, etc. Object tracking is used to track the dynamic objects in the scene, and to add or remove objects across frames. Object tracking techniques for autonomous vehicles must offer both high speed and high accuracy for real-time applications. Speed detection classifies whether objects are dynamic or static and determines the speed of the detected objects, as illustrated in the sketch below. The autonomous system must also be able to predict speed accurately in order to avoid collisions.
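
As a minimal illustration of the speed-detection step, the sketch below estimates an object's speed from the displacement of its tracked centroid between consecutive frames. The track data, frame rate and the static/dynamic threshold are assumed values for illustration, not the output of any specific detector.

```python
import numpy as np

def estimate_speed(centroids_m, frame_dt_s):
    """Estimate average speed (m/s) of a tracked object from its centroids.

    centroids_m : (N, 2) array of (x, y) positions in metres, one per frame
    frame_dt_s  : time between frames in seconds
    """
    centroids_m = np.asarray(centroids_m, dtype=float)
    steps = np.diff(centroids_m, axis=0)               # frame-to-frame displacement
    speeds = np.linalg.norm(steps, axis=1) / frame_dt_s
    return speeds.mean()

# Hypothetical track of a vehicle moving at roughly 10 m/s, observed at 10 Hz
track = [(0.0, 0.0), (1.0, 0.1), (2.1, 0.1), (3.0, 0.2)]
avg_speed = estimate_speed(track, frame_dt_s=0.1)
is_dynamic = avg_speed > 0.5   # simple threshold separating static and dynamic objects
print(f"average speed: {avg_speed:.1f} m/s, dynamic: {is_dynamic}")
```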

Localization and Mapping: Localization is the process of determining the location of the vehicle relative to objects in its environment. Mapping is the process of building maps from data acquired by one or more sensors. Together, localization and mapping are known as Simultaneous Localization and Mapping (SLAM), as shown in Figure 2. SLAM is a technique for building a map of an unknown (or known) environment while at the same time keeping track of the current location of the vehicle. It matches the newly measured point clouds to the previous reference and updates the map with nodes or landmarks from the new point clouds. This plays an important role in GPS-denied conditions, where localization is done using the LiDAR or camera sensor. It helps to track the position of the vehicle through estimation with respect to a reference position. Apart from the navigation map and the 3D point-cloud map, HD maps can be built, which provide rich details about the surroundings and act as an eye for the self-driving vehicle.

Figure 2: Simultaneous Localization and Mapping using LiDAR
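
The core SLAM loop described above can be sketched as follows: each incoming LiDAR scan is matched against the previous one to estimate the relative motion, the vehicle pose is updated, and the scan is transformed into the global frame to extend the map. The scan_match() routine is a hypothetical placeholder for an actual registration method such as ICP; this is a minimal 2D sketch, not a full SLAM implementation.

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """2D rigid transform (vehicle pose) as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def scan_match(prev_scan, new_scan):
    """Hypothetical placeholder: estimate relative motion (dx, dy, dtheta)
    between two scans, e.g. with ICP. Returns a fixed guess here."""
    return 0.5, 0.0, 0.01

def slam_step(pose, global_map, prev_scan, new_scan):
    """One SLAM iteration: update the pose and append the new scan to the map."""
    dx, dy, dtheta = scan_match(prev_scan, new_scan)
    pose = pose @ pose_to_matrix(dx, dy, dtheta)        # accumulate motion
    pts_h = np.c_[new_scan, np.ones(len(new_scan))]     # homogeneous points
    global_map.append((pose @ pts_h.T).T[:, :2])        # scan in the global frame
    return pose, global_map

# Usage with hypothetical scans: start at the origin with an empty map
pose, gmap = np.eye(3), []
scan_prev = np.random.rand(100, 2)            # previous LiDAR scan (x, y) points
scan_new = scan_prev + [0.5, 0.0]             # new scan after the vehicle moved
pose, gmap = slam_step(pose, gmap, scan_prev, scan_new)
```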

Planning: Path planning is used to find possible paths from the source to the destination, and also to find a path that evades traffic and navigates through it. Based on the dynamic objects, path planning is done to avoid obstacles; after detecting an obstacle, the vehicle computes a new path to reach the goal. Planning is the process of generating trajectories with knowledge of the environment and the vehicle's position, as in the sketch below. Improving the accuracy and computational efficiency of the perception and navigation algorithms is very important for an autonomous vehicle; inefficient algorithms can be replaced by deep learning algorithms, which accelerate the performance and improve the accuracy of the system.
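
One common way to realise obstacle-avoiding path planning on a grid is A* search. The sketch below is a minimal grid-based A* planner; it assumes an occupancy grid where 1 marks an obstacle, and it ignores vehicle kinematics and dynamic objects, which a real planner would also handle.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    visited = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(open_set, (cost + 1 + h((r, c)), cost + 1,
                                          (r, c), path + [(r, c)]))
    return None  # no path found

# Toy occupancy grid: the planner detours around the obstacle row
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```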

Control: Once the above three functions are implemented, the actuators come into action: control algorithms are developed to actuate the vehicle (steering, throttle and braking) in different driving situations.
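
A simple and widely used building block for such control algorithms is a PID controller. The sketch below shows a longitudinal (speed) controller; the gains and speeds are illustrative assumptions and would need tuning for a real vehicle.

```python
class PID:
    """Basic PID controller: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative speed controller: hold 15 m/s; gains are assumed values
speed_pid = PID(kp=0.8, ki=0.1, kd=0.05)
current_speed, target_speed, dt = 12.0, 15.0, 0.05
throttle = speed_pid.step(target_speed - current_speed, dt)
print(f"throttle command: {throttle:.2f}")
```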

Sensors:

Perception sensors such as cameras, LiDAR and radar have their own advantages and drawbacks in different environmental scenarios. The camera gives rich colour and visual information but lacks depth information. LiDAR gives depth information about objects, but its data is sparse and carries no colour information. Radar is not affected by illumination and has a long range, but because it uses radio-frequency waves its resolution is lower than that of LiDAR. Table I compares the different sensors and their suitability for different environments. From Table I we can conclude that LiDAR provides the high accuracy needed for autonomous driving at medium range, and the camera gives a lot of visual information, but both are affected by bad weather. Radar has a longer range and also works in bad weather, since it emits radio waves that can penetrate rain and fog, while LiDAR has the highest accuracy among the three at the price of high cost. So, for all-weather operation, a vehicle needs LiDAR, radar and camera together.

Sensor | Affected by illumination | Affected by weather | Resolution | Depth | Range | Accuracy | Cost
Camera | Yes | Yes | High | No | <150 m | Low | Low
Radar | No | No | Low | Yes | 50–300 m | Medium | Medium
LiDAR | No | Partially | Medium | Yes | 30–200 m | High | High
Table I: Comparison of different sensors

Approaches to sensor fusion:

Classical Approach: This approach uses statistical and probabilistic models. It has high computational complexity and requires prior knowledge of the system model and the data. It can achieve a low to medium level of fusion.
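
As an illustration of this classical, probabilistic style of fusion, the sketch below uses a linear Kalman filter update to fuse noisy 1-D position measurements from two sensors (say, radar and LiDAR, with assumed noise levels). The article does not prescribe a specific filter, so this is only an indicative example with made-up numbers.

```python
def kalman_fuse(z, r, x, p):
    """One Kalman measurement update for a 1-D state.
    z: measurement, r: measurement noise variance,
    x: state estimate, p: state variance."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected estimate
    p = (1 - k) * p          # corrected variance
    return x, p

# Assumed values: prior position estimate and two sensor readings
x, p = 10.0, 4.0                      # prior: 10 m with variance 4
x, p = kalman_fuse(12.0, 1.0, x, p)   # radar reads 12 m (variance 1)
x, p = kalman_fuse(11.5, 0.25, x, p)  # LiDAR reads 11.5 m (variance 0.25)
print(f"fused position: {x:.2f} m (variance {p:.2f})")
```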

Deep Learning Approach: This is the current state-of-the-art technology used for autonomous vehicles. Convolutional Neural Network (CNN) based models are widely used for object detection and classification. The different layers used for object detection are shown in Figure 3. First, the input image data is fed to the input layer. Then, the important features are extracted from the image using convolution layers. Next, pooling is done to reduce the computational complexity by keeping the most important information and discarding redundant information. In the fully connected layers, all neurons are connected with weights and the classification is carried out, so the final output is produced as classification probabilities. In the last few years CNNs have advanced from two-stage detectors to single-stage detectors. R-CNN is one of the popular two-stage detectors, in which the image is first proposed into different regions and then a CNN is applied to each region for detection. The R-CNN model gives high accuracy, but it is slow and therefore not suited to the real-time requirements of an autonomous vehicle. Various improvements over R-CNN followed: Fast R-CNN was proposed to increase the speed; in it, the input image is processed directly by a CNN to produce a convolutional feature map. Nowadays, single-stage detectors such as YOLO and SSD are used, which give faster results; the loss of accuracy is compensated using sensor fusion. These detectors are used widely in autonomous vehicles.

Figure 3: Different layers of CNN
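
A minimal PyTorch sketch of the layer sequence described above (convolution, pooling, fully connected layer, and a final classification probability) is given below. The layer sizes and the three example classes (car, pedestrian, bike) are illustrative assumptions, not the architecture of any particular published detector.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Toy classifier following the layer order described in the text:
    convolution -> pooling -> fully connected -> class probabilities."""

    def __init__(self, num_classes=3):          # e.g. car, pedestrian, bike
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # extract features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # reduce spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return torch.softmax(self.classifier(x), dim=1)  # class probabilities

model = SimpleCNN()
probs = model(torch.randn(1, 3, 224, 224))   # one 224x224 RGB image
print(probs)                                  # probabilities over the 3 classes
```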

Methodology:

Sensor fusion is one of the most vital technologies for enhancing the accuracy and speed of the detection process in all weather conditions. When one of the sensors is less effective, fusing multiple sensors allows us to enhance the accuracy of the perception algorithm. Sensor fusion can be done in two ways, described below:

Early Fusion: In early fusion, raw data such as the LiDAR point cloud and the camera image are fused before processing; it is also known as low-level fusion. It follows a three-step process, as shown in Figure 4. First, the 3D LiDAR point cloud is projected into the 2D camera image frame by converting it to homogeneous coordinates, applying the rotation and translation, and then converting back to Euclidean coordinates. Second, object detection is performed using a deep learning model such as R-CNN, SSD or YOLO. Finally, the regions of interest are matched. For this, the intrinsic and extrinsic calibration of the camera and LiDAR must be performed so that the 3D LiDAR point cloud can be projected onto the image frame.

Figure 4: Early sensor fusion
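
The first step of early fusion, projecting the 3D LiDAR point cloud onto the 2D image plane, can be sketched as below. The rotation R, translation t and intrinsic matrix K come from the extrinsic and intrinsic calibration mentioned above; the values shown here are placeholders, not a real calibration.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame
    R, t         : extrinsic rotation (3x3) and translation (3,), LiDAR -> camera
    K            : camera intrinsic matrix (3x3)
    """
    pts_cam = points_lidar @ R.T + t            # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]        # keep points in front of the camera
    pts_img = pts_cam @ K.T                     # apply intrinsics
    return pts_img[:, :2] / pts_img[:, 2:3]     # back to Euclidean (divide by depth)

# Placeholder calibration: axis remap from LiDAR (x fwd, y left, z up)
# to camera (x right, y down, z fwd), small translation, simple pinhole K
R = np.array([[0, -1, 0],
              [0,  0, -1],
              [1,  0, 0]], dtype=float)
t = np.array([0.0, -0.1, 0.2])
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
points = np.array([[10.0, 1.0, 0.5], [15.0, -2.0, 0.3]])  # assumed LiDAR points
print(project_lidar_to_image(points, R, t, K))            # pixel coordinates
```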

Late Fusion: In late fusion, results are obtained independently from the LiDAR and camera sensors. It follows a four-step process, as shown in Figure 5. First, 3D object detection is done using LiDAR. Then the camera detects objects in 2D, and these detections are projected into 3D space. Finally, Intersection over Union (IoU) matching is done to obtain the final result.

Figure 5: Late sensor fusion
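
The final IoU matching step of late fusion can be sketched as below: LiDAR detections (projected into the image plane) and camera detections are paired when their boxes overlap strongly enough. The example boxes and the 0.5 threshold are assumed values for illustration.

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(lidar_boxes, camera_boxes, threshold=0.5):
    """Pair each LiDAR detection with the best-overlapping camera detection."""
    matches = []
    for i, lb in enumerate(lidar_boxes):
        best = max(range(len(camera_boxes)), key=lambda j: iou(lb, camera_boxes[j]))
        if iou(lb, camera_boxes[best]) >= threshold:
            matches.append((i, best))
    return matches

# Assumed example boxes in pixel coordinates
lidar_boxes = [(100, 100, 200, 220)]
camera_boxes = [(110, 105, 205, 215), (400, 300, 450, 380)]
print(match_detections(lidar_boxes, camera_boxes))  # -> [(0, 0)]
```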

One approach to object recognition and detection is 3D computer vision, because volumetric data contains more information and gives better quality with less noise than 2D images. To make this a reliable technology, the focus must be on deep learning techniques that improve the reliability, accuracy, effectiveness and robustness of the sensor fusion network.

Individual sensors have limitations, and fusion delivers the desired output, as shown in Table II.

AV application | Fused sensors | Limitation without fusion | Advantages using fusion
Object detection | LiDAR & camera | Illumination, night-vision difficulty, low resolution of LiDAR | Depth, range and accuracy
Localization & mapping | GPS and LiDAR | Poor functioning in GPS-denied areas | Continuous navigation, correction of localization
Positioning & navigation | LiDAR map, camera and GPS | GPS-denied areas and road markings | Road-marking detection, HD Map
Perception in bad weather | LiDAR, camera and radar | Poor functioning in bad weather like fog and rain | All-weather solution for the autonomous vehicle
Table II: Advantages of sensor fusion for different autonomous vehicle applications

Conclusion:

To summarize, semi-autonomous vehicles are well developed in many countries; to make them fully autonomous, we need to rely on multiple sensors for decision-making, so that if one sensor fails another can take over. Sensor fusion is one of the key enabling technologies for self-driving cars.

Author:

Abhishek Thakur

Prime Minister’s Research Fellow

IIT Hyderabad

Abhishek Thakur is currently a Prime Minister's Research Fellow at the Indian Institute of Technology, Hyderabad. His research work includes sensor fusion for autonomous vehicles in the Indian environment. He works with sensors such as LiDAR, camera, GPS and IMU.

Published in Telematics Wire
