Researchers from Cornell University have discovered a simpler method to detect 3D objects in the paths of autonomous cars using two inexpensive cameras on either side of the windshield.
The researchers claim the system can detect objects with nearly LiDAR’s accuracy at a fraction of the cost. They found that analyzing the captured images from a bird’s-eye view rather than the more traditional frontal view more than tripled their accuracy, making stereo cameras a viable, low-cost alternative to LiDAR.
The laser sensors currently used to detect 3D objects in the paths of autonomous cars are bulky, ugly, expensive, and energy-inefficient, yet highly accurate. The LiDAR sensors are affixed to cars’ roofs, where they increase wind drag and add around $10,000 to a car’s cost.
LiDAR sensors use lasers to create 3D point maps of their surroundings, measuring objects’ distance via the speed of light. Stereo cameras, by contrast, rely on two perspectives to establish depth, as human eyes do. For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks – a kind of machine learning that identifies images by applying filters that recognize patterns associated with them.
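The stereo principle mentioned above can be sketched in a few lines: once the pixel disparity between the left and right images is known, standard pinhole stereo geometry gives depth as focal length times baseline divided by disparity. This is a minimal illustration; the function name and the calibration values (focal length in pixels, baseline in meters) are assumptions, not figures from the Cornell work.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (in pixels) to metric depth.

    Standard pinhole stereo relation: depth = f * B / d.
    focal_px and baseline_m are assumed calibration values for
    illustration only.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)  # zero disparity = infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a 100-pixel disparity with a 700 px focal length
# and 0.5 m baseline corresponds to a point 3.5 m away.
d = disparity_to_depth(np.array([[100.0]]), focal_px=700.0, baseline_m=0.5)
```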
These convolutional neural networks have been shown to be very good at identifying objects in standard color photographs, but they can distort the 3D information if it’s represented from the front. When the researchers switched the representation from a frontal perspective to a point cloud observed from a bird’s-eye view, accuracy more than tripled.
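The change of representation can be sketched as follows: each pixel of an estimated depth map is back-projected into a 3D point using the camera intrinsics, and the resulting point cloud is then viewed from above by keeping only the ground-plane coordinates. This is a hedged illustration of the general idea, not the paper's exact pipeline; the function name and intrinsics (`fx`, `fy`, `cx`, `cy`) are assumptions.

```python
import numpy as np

def depth_to_bev_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud, then keep the
    bird's-eye-view coordinates (x: lateral, z: forward).

    fx, fy, cx, cy are assumed pinhole intrinsics; a sketch of the
    frontal-view-to-bird's-eye-view conversion, for illustration.
    """
    depth = np.asarray(depth, dtype=np.float64)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                      # forward distance from the camera
    x = (u - cx) * z / fx          # lateral offset
    y = (v - cy) * z / fy          # height relative to the optical axis
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Bird's-eye view: discard height, keep (x, z) ground-plane coordinates
    return points[:, [0, 2]]
```

A detector can then operate on this top-down point layout, where object shapes and distances are preserved rather than distorted by perspective, which is the representational shift the researchers credit for the accuracy gain.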
The research was partly supported by grants from the National Science Foundation, the Office of Naval Research and the Bill and Melinda Gates Foundation.
Source: Cornell Chronicle