The race to develop truly self-driving cars is accelerating, and the vehicles we can buy today are also getting smarter and safer thanks to technologies like Advanced Driver Assistance Systems (ADAS). At the heart of these changes, manufacturers generally face a choice between three main sensor types – cameras, radar and lidar. Coupled with a powerful computer system, these sensors allow the vehicle to attempt to map, understand and navigate the environment around it. But what are the core differences between these three sensor types? And which represents the best foundation for the Level 5 autonomous vehicles we expect in the near future?
Wind the clock back a decade or so and every car on the road was entirely dependent on the eyes, ears and attention span of its human driver. Aside from a few niche and luxury examples, mainstream vehicles had no systems to warn drivers of potential hazards, and no way to help them avoid an accident.
Over the last decade, however, vehicles have gradually integrated advanced technology to help people drive safely and avoid hazards. Advanced Driver Assistance Systems now offer adaptive cruise control (ACC), autonomous emergency braking (AEB) and lane-keeping assist systems (LKAS). This has become possible thanks to improvements in sensor technology and an exponential increase in available processing power – all of which allows the vehicle to better understand, and react to, its environment.
While improvements to sensor technology are fundamental to the development of fully autonomous, self-driving cars, there is little consensus on the best approach among high-profile companies like Tesla and Waymo. While Tesla favours a mix of cameras and radar, Waymo is betting instead on lidar. So, what’s the difference? And which approach is best suited to the future we envision?
Behind the Camera
Cameras are a widely understood, mature technology. They are also reliable and relatively cheap to produce. Vehicle applications that commonly rely on cameras today include advanced driver assistance systems (ADAS), surround view systems (SVS) and driver monitoring systems (DMS). Coupled with infra-red lighting, they can perform to some extent at night. But they have some significant limitations too.
In fact, cameras face many of the same limitations as the human eye. They need a clear lens to see properly, limiting where car makers can position them, and they don’t always give a crisp or reliable picture in bad weather – particularly in heavy rain or snow. At night, they’re often only as good as the vehicle’s headlights, which significantly limits their accuracy after dark.
Perhaps the one fundamental limitation of camera technology for fully autonomous driving is that, with no human brain to interpret what is being seen, cameras must rely on complex predictions from neural networks. This requires significant amounts of training data and processing power – two resources that are inherently limited in an in-vehicle computer system.
Radar on the Road
Radio detection and ranging, known simply as ‘radar’ to most of us, was first developed before the Second World War. It has been employed for decades to accurately calculate the position, speed and direction of planes, boats and other moving objects. Radar works by firing radio waves at a target area and monitoring for reflections from any objects. Analyzing the frequency of these reflections also reveals their relative speed: reflections from approaching objects are shifted up, and those from receding objects are shifted down – the same Doppler effect that makes a police siren change pitch as it passes by.
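That frequency shift is easy to put into numbers. The sketch below uses the standard Doppler relation for radar – the echo from a target closing at radial speed v on a carrier of frequency f0 is shifted by roughly 2·v·f0/c – with illustrative values, not any particular sensor’s specification:

```python
# Doppler shift seen by a radar: the echo from a moving target is shifted
# in frequency by f_d = 2 * v * f0 / c, where v is the target's radial
# speed, f0 the carrier frequency and c the speed of light.

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(relative_speed_mps: float, carrier_hz: float) -> float:
    """Frequency shift of the echo; positive for an approaching target."""
    return 2.0 * relative_speed_mps * carrier_hz / C

# A car closing at 30 m/s (~108 km/h), seen by a 77 GHz sensor:
shift = doppler_shift_hz(30.0, 77e9)
print(f"{shift / 1e3:.1f} kHz")  # about 15.4 kHz
```

A shift of a few kilohertz against a 77GHz carrier is comfortably measurable with modern signal processing, which is why radar can report relative speed so directly.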
Today, radar is already employed to help keep our roads safe, with many modern cars using radar sensors for hazard detection and range-finding in features like adaptive cruise control (ACC) and autonomous emergency braking (AEB). These technologies depend on information from multiple radar sensors, interpreted by in-vehicle computers to identify the distance, direction and relative speed of vehicles or hazards.
Unlike cameras, radar is virtually impervious to adverse weather, working reliably in dark, wet or even foggy conditions. But current radar technology has its limitations. One is that today’s 24GHz sensors offer only limited resolution – they let the car ‘see’ the world, but the picture they paint is somewhat blurry, leading to problems identifying and reacting to multiple specific hazards.
The development of more accurate 77GHz ‘mmWave’ radar sensors should help reduce the blur: the wider bandwidth available in this band lets them measure distance and detect speed changes with considerably greater precision than 24GHz sensors. 77GHz sensors are also smaller, so they’re easier for vehicle makers to integrate discreetly into their designs. There seems little doubt that the continued development and production of 77GHz mmWave radar sensors will be fundamental to the ever-improving autonomy of vehicles.
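The 24GHz-versus-77GHz gap can be illustrated with the standard FMCW range-resolution formula, ΔR = c / (2B), where B is the sweep bandwidth. The bandwidths below are assumed, typical regulatory figures for each band rather than a specific product’s numbers:

```python
# Range resolution of an FMCW radar depends on its sweep bandwidth B:
#     delta_R = c / (2 * B)
# Assumed, typical bandwidths: ~200 MHz near 24 GHz, up to ~4 GHz at 77-81 GHz.

C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Smallest separation at which two targets can be distinguished."""
    return C / (2.0 * bandwidth_hz)

print(f"24 GHz (200 MHz sweep): {range_resolution_m(200e6):.2f} m")
print(f"77 GHz (4 GHz sweep):   {range_resolution_m(4e9) * 100:.1f} cm")
```

Going from roughly 0.75m to under 4cm of range resolution is the difference between seeing one blurry blob and separating a pedestrian from the parked car beside them.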
What Actually Is Lidar?
Lidar stands for light detection and ranging, with systems typically using invisible laser light to measure the distance to objects in a similar way to radar. Originally developed as a surveying technology, lidar measures many thousands of points to build up an incredibly detailed 3D view of the environment around the sensor.
In a vehicle, lidar can provide the most detailed understanding possible of the road, road users and potential hazards surrounding the vehicle. Impressively, lidar can spot objects up to 100 meters or so away and measure distances to within around 2cm. Because it supplies its own illumination, lidar also works regardless of ambient light, and it copes far better than cameras with conditions such as wind, rain and snow – surveying lidar has even been used to map otherwise inaccessible areas under heavy snow.
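The ranging itself is simple time-of-flight arithmetic: the sensor times how long a laser pulse takes to return and halves the round trip. A short sketch with illustrative numbers:

```python
# Lidar ranging is time-of-flight: emit a laser pulse, time its echo,
# and halve the round trip, since the light travels out and back:
#     distance = c * t / 2

C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to the target given the measured round-trip time."""
    return C * round_trip_s / 2.0

def round_trip_s(dist_m: float) -> float:
    """Round-trip time for a target at the given distance."""
    return 2.0 * dist_m / C

print(f"echo from 100 m: {round_trip_s(100.0) * 1e9:.0f} ns")  # about 667 ns
print(f"time for 2 cm:   {round_trip_s(0.02) * 1e12:.0f} ps")  # about 133 ps
```

The second figure hints at why the hardware is demanding: resolving distance to a couple of centimetres means timing light to roughly a hundred picoseconds, repeated many thousands of times per second.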
Lidar technology is not without its own set of specific limitations however. For a start, it takes a huge amount of processing power to interpret up to a million measurements every second, and then translate them into actionable data. Lidar sensors are also complex, with many relying on moving parts which can make them more vulnerable to damage.
Lidar’s forensic view of the world is overkill for today’s ADAS needs, but it remains an important technology for future driver-less cars, with General Motors and Waymo being notable proponents. At present, it’s prohibitively expensive and largely unproven in the automotive industry, but when the technology is ready for the mainstream, its performance may prove to be vital.
So What’s the Best Option for Today?
With each technology having its own advantages and disadvantages, driver-less cars are unlikely to rely on just one system to view and navigate the world. Many companies, including VIA, believe in adapting the most appropriate solution to each customer’s specific needs. We also envision that sensor fusion – using a combination of multiple sensor technologies to eliminate the weaknesses of any one sensor type – will prove to be the best way forward.
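As a toy illustration of what fusion buys, consider combining two independent range estimates by inverse-variance weighting. This is a deliberately simplified, hypothetical model – production systems typically use Kalman filters or learned fusion across many sensors – but it shows the statistical core of the idea:

```python
# Minimal sensor-fusion sketch (illustrative model, not a production design):
# fuse two independent Gaussian range estimates by inverse-variance
# weighting, so the less noisy sensor dominates the result.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return (fused_estimate, fused_variance) for two Gaussian estimates."""
    w_a = 1.0 / var_a  # weight is the inverse of the sensor's variance
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical readings: radar says 42.0 m (variance 0.5 m^2),
# lidar says 41.6 m (variance 0.05 m^2, i.e. far more precise).
est, var = fuse(42.0, 0.5, 41.6, 0.05)
print(f"fused: {est:.2f} m, variance {var:.3f} m^2")  # sits near the lidar value
```

The fused variance comes out lower than either input’s, which is the quantitative version of the claim above: combining sensors does not just average them, it genuinely reduces uncertainty.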
At VIA, our mission is to enable companies to accelerate the development and deployment of innovative new transportation solutions and services. By redefining safety, convenience and comfort, these solutions give our customers a competitive edge as we enter a new age of seamless mobility and autonomous driving.
To learn more about VIA Mobile360 safety solutions, click here.