How do autonomous vehicles understand their world?

Today’s autonomous vehicles, whether boats, cars, tractors, or drones, rely on two primary sensor-based perception systems to navigate and position themselves in their environment. The two predominant systems are LiDAR (Light Detection and Ranging) and visible-light, camera-based optical systems.

LiDAR systems send out pulses of light, which bounce off surrounding objects and return to a receiver that processes them into 2D and 3D images. Effectively, LiDAR builds a model of the world with height, depth, and distance based on the points of data it collects. A typical LiDAR system consists of a pulsed light source (a laser), a rotating mirror to precisely steer the light, and a detector to collect the reflections.
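
To make that point-by-point model concrete, here is a minimal sketch, assuming idealized returns, of how a round-trip pulse time and the beam’s angles become an x, y, z point. The sample scan data and function names are illustrative, not taken from any particular LiDAR system.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back, so halve the path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def polar_to_cartesian(distance_m: float, azimuth_rad: float, elevation_rad: float) -> np.ndarray:
    """Convert one return (range plus beam angles) into an x, y, z point."""
    x = distance_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = distance_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = distance_m * np.sin(elevation_rad)
    return np.array([x, y, z])

# Illustrative returns: (round-trip time in seconds, azimuth, elevation in radians)
returns = [
    (2.0e-7, 0.00, 0.02),   # about 30 m, straight ahead, slightly above the horizon
    (6.7e-8, 0.52, 0.00),   # about 10 m, roughly 30 degrees to the left
]

point_cloud = np.vstack([
    polar_to_cartesian(range_from_time_of_flight(t), az, el)
    for t, az, el in returns
])
print(point_cloud)  # each row is one x, y, z point in metres
```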

Visible-light camera-based systems essentially take pictures of the world, seeing what the vehicle sees in real time. A computer then interprets these images and sends navigation commands to the vehicle. These systems consist of multiple cameras that together capture a 360-degree view, plus a processing engine that identifies what is being seen and determines the speed, direction, and distance of nearby vehicles.
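
As a rough sketch of such a pipeline, the code below grabs a frame from each camera and hands it to a detection step. The four-camera rig and the detect_objects stub are assumptions standing in for whatever hardware and perception model a real vehicle would use.

```python
import cv2  # OpenCV handles frame capture; the detection step is a placeholder

def detect_objects(frame):
    """Placeholder for a trained perception model.

    A real model would return, for each detected object, a label plus its
    estimated distance, direction, and speed; this stub returns an empty list.
    """
    return []

# Assumed rig: four cameras covering a combined 360-degree field of view.
cameras = [cv2.VideoCapture(index) for index in (0, 1, 2, 3)]

for cam in cameras:
    ok, frame = cam.read()          # grab one frame from each camera
    if not ok:
        continue                    # skip cameras that are not available
    for detection in detect_objects(frame):
        # A real system would fuse detections across cameras and over time,
        # then translate them into navigation commands for the vehicle.
        print(detection)

for cam in cameras:
    cam.release()
```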

LiDAR, while excelling at close range, has frustrated engineers with its limitations at distance. RADAR does a reasonable job of solving the distance problem, which makes it a candidate for a one-sensor solution to both near- and far-range detection. Today’s RADAR systems are also smaller, more power-efficient, and less expensive than LiDAR. But all of this comes at a cost: the resolution of the images produced by RADAR is inferior to that of images produced by cameras and even LiDAR.
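
One way to see that resolution gap is that a radar’s range resolution is limited by its sweep bandwidth, roughly the speed of light divided by twice the bandwidth. The sketch below works through the arithmetic with bandwidth figures chosen purely for illustration, not tied to any particular radar product.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def radar_range_resolution(bandwidth_hz: float) -> float:
    """Smallest range difference a radar can separate: delta_R = c / (2 * B)."""
    return SPEED_OF_LIGHT / (2.0 * bandwidth_hz)

# Illustrative sweep bandwidths (assumed values for the sake of the example).
for bandwidth in (1e9, 4e9):  # 1 GHz and 4 GHz
    print(f"{bandwidth / 1e9:.0f} GHz bandwidth -> "
          f"{radar_range_resolution(bandwidth) * 100:.1f} cm range resolution")
```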

Want to know more?