MIT Engineers Develop System That Helps Autonomous Vehicles See Around Corners
MIT engineers have developed a system that, by sensing tiny changes in shadows on the ground, can detect approaching objects that may cause a collision. According to the researchers, the technology could find applications in autonomous cars and robots.
The system, called “ShadowCam,” uses computer-vision techniques to detect and classify changes in shadows on the ground. It relies on sequences of video frames from a camera targeting a specific area, detecting changes in light intensity over time, from image to image, that may indicate something moving away or coming closer. ShadowCam processes that information, classifies each image as containing a stationary object or a dynamic, moving one, and reacts accordingly.
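The core idea of flagging motion from frame-to-frame intensity changes can be sketched as follows. This is a minimal illustration, not the ShadowCam implementation: the function name, the threshold value, and the simple mean-difference statistic are all assumptions for demonstration.

```python
import numpy as np

def classify_shadow_motion(frames, threshold=2.0):
    """Classify a frame sequence as 'dynamic' or 'static' from the
    mean absolute intensity change between consecutive frames.

    frames: sequence of 2D grayscale images of the same shape,
            e.g. crops of a region of interest on the floor.
    threshold: hypothetical mean per-pixel change above which
               motion is declared.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Per-pixel absolute difference between consecutive frames
    diffs = np.abs(np.diff(frames, axis=0))
    # Average the change over all pixels and frame pairs
    mean_change = diffs.mean()
    return "dynamic" if mean_change > threshold else "static"

# A static scene: identical frames, so no intensity change
static_seq = [np.full((4, 4), 100.0)] * 5
# A creeping shadow: intensity drops steadily frame to frame
dynamic_seq = [np.full((4, 4), 100.0 - 5 * t) for t in range(5)]

print(classify_shadow_motion(static_seq))   # -> static
print(classify_shadow_motion(dynamic_seq))  # -> dynamic
```

A real system would operate on a stabilized, amplified view of the floor region rather than raw frames, but the classification step reduces to a decision like the one above.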
The researchers conducted successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways, concluding that the system detects moving objects more than half a second faster than traditional LiDAR. For fast-moving autonomous vehicles, these fractions of a second matter a great deal, according to the researchers.
The researchers employ “Direct Sparse Odometry” (DSO), a visual-odometry technique, to compute feature points in the environment. Essentially, DSO plots features of an environment as a 3D point cloud, and a computer-vision pipeline then selects only the features located in a region of interest, such as the floor near a corner.
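The region-of-interest filtering step can be sketched as a simple geometric test on the point cloud. This is a hedged illustration under assumed conventions (z measured up from the floor plane, meters as units, and made-up bounds), not the pipeline used in the paper.

```python
import numpy as np

def select_floor_roi(points, max_height=0.05,
                     x_range=(0.0, 2.0), y_range=(-1.0, 1.0)):
    """Keep only 3D feature points in a region of interest:
    near the floor (small z) and within a rectangular patch
    ahead of the camera, e.g. the floor near a corner.
    All bounds here are hypothetical.

    points: (N, 3) array of [x, y, z] coordinates in meters.
    """
    points = np.asarray(points, dtype=np.float64)
    mask = (
        (points[:, 2] <= max_height)
        & (points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1])
        & (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1])
    )
    return points[mask]

cloud = np.array([
    [1.0, 0.0, 0.01],   # on the floor, inside the patch -> kept
    [1.5, 0.5, 1.20],   # on a wall, too high -> dropped
    [5.0, 0.0, 0.00],   # on the floor but outside the patch -> dropped
])
roi = select_floor_roi(cloud)
print(roi)  # only the first point remains
```

Restricting subsequent shadow analysis to this filtered patch is what lets the system ignore irrelevant parts of the scene, such as walls and ceilings.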
The researchers are further developing the system to work in different indoor and outdoor lighting conditions. The Toyota Research Institute has funded the research.