Self-driving cars need sensors such as cameras and radar in order to ‘see’ the world around them. But sensor data alone isn’t enough. Autonomous vehicles also need the computing power and advanced machine intelligence to analyse multiple, sometimes conflicting data streams and create a single, accurate view of their environment. This process, known as ‘sensor fusion’, is an essential prerequisite for self-driving cars, but achieving it is a major technical challenge. Here we examine the different sensor technologies, why sensor fusion is necessary, and the edge AI technology that underpins it all.
Before cars drive themselves, they must know where they are, where they’re going, and what might get in the way. Already, today’s level one and level two autonomous vehicles bristle with sensors that help locate hazards like cars, cyclists or pedestrians. The latest level three vehicles – able to drive themselves in limited circumstances – also integrate high-resolution map data for a more complete understanding of the roads.
To take the next step, however, vehicles need to be able to see and understand their environment in much more detail. Critically, they also need to react to dynamic factors, such as a pedestrian stepping into the road, at least as well as humans do. Perceiving the world accurately enough to make safe driving decisions relies on multiple, overlapping sensors using a mix of three main technologies.
Self-driving Technologies Compared
Cameras are the best understood sensor for self-driving cars. They’re cheap and reliable, and can be used to provide a 360-degree view around the vehicle. However, like our own eyes, cameras can struggle with visibility at night or in poor weather conditions. Crucially, they also don’t record depth information. While computers can work it out – for example, by recognising a specific model of car, knowing how big it is, then inferring how far away it must be – the conclusion may not be quick or reliable enough for self-driving.
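As a rough illustration of how depth can be inferred from a single camera, the sketch below applies the pinhole-camera relationship: if the real-world height of a recognised object and the camera’s focal length are known, the object’s distance can be estimated from its apparent height in pixels. The focal length and car height used here are illustrative assumptions, not figures from this article.

```python
# Minimal sketch: estimating distance to a recognised object from one camera
# image using the pinhole-camera model. All numbers are illustrative.

def estimate_distance_m(real_height_m, pixel_height, focal_length_px):
    """Distance (m) ~= focal length (px) * real height (m) / apparent height (px)."""
    return focal_length_px * real_height_m / pixel_height

# Assumed values: a hatchback roughly 1.5 m tall that appears 60 px tall
# through a camera with a 1200 px focal length.
distance = estimate_distance_m(real_height_m=1.5, pixel_height=60, focal_length_px=1200)
print(f"Estimated distance: {distance:.1f} m")   # ~30 m
```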
To get around this, vehicles can use radar. Radar – radio detection and ranging – works by sending out radio waves and measuring what is reflected back by the environment. It is a highly accurate way to measure the distance and speed of solid objects such as cars, and its sensors are cheap, reliable and already crucial to systems such as adaptive cruise control and autonomous emergency braking. The latest 77GHz mmWave radar sensors are also small enough to be integrated into any kind of vehicle, and offer enough resolution to distinguish between individual objects positioned close together, with an accuracy of around 4cm.
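The two quantities radar measures directly – range and relative speed – follow from the echo’s time of flight and its Doppler shift. The sketch below shows that arithmetic for a 77GHz sensor; the delay and frequency-shift values are made-up examples.

```python
# Minimal sketch: range from echo time of flight, radial speed from Doppler
# shift, for a 77 GHz automotive radar. Input values are illustrative.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # carrier frequency, Hz

def radar_range_m(echo_delay_s):
    """Round-trip time of flight: range = c * t / 2."""
    return C * echo_delay_s / 2

def radial_speed_ms(doppler_shift_hz):
    """Doppler relation for a reflected wave: v = f_d * c / (2 * f_carrier)."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

print(radar_range_m(0.4e-6))    # 0.4 microsecond delay -> 60 m
print(radial_speed_ms(5133))    # ~5.1 kHz shift        -> ~10 m/s closing speed
```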
Radar works well at night and is reliable in most weather conditions, but the sensors have a limited field of view, and their resolution is far lower than a camera’s. Both limitations are addressed by lidar, which works similarly to radar but uses invisible laser light rather than radio waves. Lidar can produce a highly detailed and extremely accurate map of a car’s environment, but the technology is affected by weather. In addition, current sensors are expensive and comparatively fragile, although next-generation solid-state lidar promises to fix that.
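Each lidar return is essentially a distance measured along a known beam direction, and converting those polar measurements into Cartesian points is how the detailed 3D map is built up. The sketch below shows that conversion for a single return; the range and angles are illustrative.

```python
# Minimal sketch: turning one lidar return (range plus beam angles) into a
# 3D point in the vehicle frame. Values are illustrative.
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left
    z = range_m * math.sin(el)                  # up
    return x, y, z

# A return 25 m away, 10 degrees to the left, 2 degrees below the sensor axis.
print(lidar_return_to_point(25.0, 10.0, -2.0))
```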

Sense from Sensors
The three main sensor technologies each have their own strengths and weaknesses, and no single one provides the complete and reliable data needed for self-driving. The common approach for self-driving cars is to use computer vision to create a view of the world based on what the cameras see, much like humans do with their eyes. However, the limitations of cameras – particularly when it comes to measuring distance – make it necessary to back this up with depth information from elsewhere.
While Tesla uses only radar for this purpose, most other groups feel that lidar provides vital extra data that will help guard against imperfect computer vision, so they combine cameras with radar and lidar. In all cases, the vehicle’s challenge is to build an accurate and up-to-date model of its environment using the best data at its disposal.
To achieve this, self-driving cars leverage multiple sensors, interpreting the data in real time with powerful edge AI systems. Sensors are arranged so that their fields of view overlap, producing ‘images’ of the same area with different techniques. This provides redundancy: for example, there may still be useful radar data even when it’s too dark for a camera to work well. It also means that the views from multiple sensors can be compared to rule out false readings and arrive at a single trustworthy measurement.
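One simple way to turn two overlapping measurements of the same quantity into a single, more trustworthy value is inverse-variance weighting, where the less noisy sensor gets more say. The sketch below illustrates only that idea; the camera and radar noise figures are invented.

```python
# Minimal sketch: fusing two noisy range estimates of the same object by
# weighting each with the inverse of its variance. Numbers are illustrative.

def fuse(value_a, var_a, value_b, var_b):
    """Inverse-variance weighted average, plus the variance of the result."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * value_a + w_b * value_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Camera-based estimate: 31 m, quite uncertain (variance 4.0 m^2).
# Radar estimate: 29.5 m, much more certain (variance 0.25 m^2).
print(fuse(31.0, 4.0, 29.5, 0.25))   # fused result sits close to the radar value
```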
Because all sensors have a degree of error, this process involves a constant cycle of measurement and prediction, integrating the data from multiple sensing technologies. The goal is to filter out the noise from each and arrive at the most accurate possible estimate of what’s going on.
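A measurement-and-prediction cycle of this kind is commonly implemented with a Kalman filter. The one-dimensional sketch below tracks the distance to a single object: each step predicts where the object should now be, then corrects that prediction with a new, noisy reading. All the noise parameters and measurements are invented for illustration.

```python
# Minimal sketch: a 1-D Kalman-style predict/update cycle tracking the
# distance to one object. All parameters and measurements are illustrative.

def kalman_step(est, est_var, predicted_change, process_var, meas, meas_var):
    # Predict: move the estimate forward and let its uncertainty grow.
    est = est + predicted_change
    est_var = est_var + process_var
    # Update: blend in the measurement, weighted by relative uncertainty.
    gain = est_var / (est_var + meas_var)
    est = est + gain * (meas - est)
    est_var = (1.0 - gain) * est_var
    return est, est_var

est, est_var = 50.0, 9.0                 # initial guess: 50 m, fairly uncertain
for meas in [48.2, 46.9, 45.4, 44.1]:    # noisy range readings, object approaching
    est, est_var = kalman_step(est, est_var,
                               predicted_change=-1.5,   # expected approach per step
                               process_var=0.5, meas=meas, meas_var=1.0)
    print(f"estimate {est:.1f} m (variance {est_var:.2f})")
```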
Machine Learning, from Mistakes
This is the essence of sensor fusion. By combining it with navigation and other information, autonomous cars can model the world around them with great accuracy, but this is only half of the job. The car’s systems need to interpret the environment – identifying routes, obstacles and potential hazards – and make sensible driving decisions.
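To give a flavour of what a driving decision built on that environment model can look like, the sketch below applies a deliberately naive rule: estimate the time to collision with an obstacle ahead and brake if it falls below a safety threshold. Real planning systems are far more sophisticated; the threshold and inputs here are purely illustrative.

```python
# Minimal sketch: a naive decision rule on top of the fused environment model.
# Brake if time-to-collision with an obstacle ahead drops below a threshold.

TTC_THRESHOLD_S = 2.5   # illustrative safety margin, in seconds

def decide(distance_m, closing_speed_ms):
    if closing_speed_ms <= 0:          # obstacle is not getting closer
        return "maintain speed"
    ttc = distance_m / closing_speed_ms
    return "brake" if ttc < TTC_THRESHOLD_S else "maintain speed"

print(decide(distance_m=30.0, closing_speed_ms=5.0))    # TTC 6 s    -> maintain speed
print(decide(distance_m=10.0, closing_speed_ms=8.0))    # TTC 1.25 s -> brake
```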
Writing software for a vehicle to recognise and interact with its environment is immensely challenging. Manufacturers are under pressure to get solutions to market, yet they bear the added responsibility that their errors could lead to people being hurt or even killed.

As with many complex computing challenges, machine learning is key to the solution, which is why self-driving cars are being developed on public roads. By collecting data from real driving situations and training the car to react correctly, developers can create the most effective driving algorithms and continue to test and refine them in the real world.
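As a toy illustration of learning driving behaviour from recorded data, the sketch below fits a linear model that maps two perception features to a steering command using logged examples. Real systems use far richer inputs and deep networks; the features, labels and values here are all invented.

```python
# Minimal sketch: fitting a toy "steering policy" to logged driving data with
# ordinary least squares. Features, labels and values are all illustrative.
import numpy as np

# Each logged sample: [lateral offset from lane centre (m), road curvature (1/m)]
features = np.array([[ 0.3,  0.00],
                     [-0.2,  0.01],
                     [ 0.5, -0.02],
                     [ 0.0,  0.03],
                     [-0.4,  0.00]])
steering = np.array([-0.06, 0.07, -0.16, 0.09, 0.08])   # recorded steering angles (rad)

# Fit steering ~= features @ weights, then reuse the weights on a new situation.
weights, *_ = np.linalg.lstsq(features, steering, rcond=None)
new_situation = np.array([0.25, 0.01])
print("predicted steering:", new_situation @ weights)
```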
As systems mature, they will become more reliable, coping increasingly well with a wide range of driving conditions until they rival the best human drivers. It may sound fanciful, but many authorities expect fully self-driving cars to operate on the roads within the next 10-15 years, and sensor fusion is the key to unlocking this exciting future.