
Combining sensor data and AI gives a real picture of the world

Over the last few weeks, we took a detailed look at internal vehicle sensors as well as external vehicle sensors. Today, let’s look at how we’re extending beyond individual data points to better see the world and make it safer.

The old phrase “you can’t see the forest for the trees” is a poetic way of reminding someone that they’re focusing so much on the details that they’ve forgotten to consider things as a whole.

The technologies being developed today are fascinating. Sensors keep growing more powerful while their size and cost keep shrinking. Even so, it’s easy to overlook the bigger picture created when machine learning interprets what those sensors detect. In reality, that picture is what we’ve been talking about all along.

To begin to understand the value of applying AI to sensor data, consider the example CTO Brian Lent used in his piece about The Reality Index.

A front-facing camera on your car might ‘see’ a pothole, but your car doesn’t innately know what that is or how to react. To accurately tell a pothole from a manhole cover from a dark patch of road, all the sensor data has to be collected and analyzed. At that point, machine learning takes in the data from your vehicle (and other vehicles), determines what the object in the road is, and triggers the appropriate action.

That said, teaching a machine to understand what an object is by sight is just a first step. The example of the pothole is a complex extension of the same machine learning process that enables a computer to identify a stop sign.
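To make that pipeline a little more concrete, here is a minimal sketch in Python of how a camera detection might be routed through a classifier and mapped to a driving response. Every name and threshold here (RoadObject, classify_detection, plan_response, the feature values) is hypothetical and purely illustrative, not a description of any production system.

```python
from dataclasses import dataclass
from enum import Enum

class RoadObject(Enum):
    POTHOLE = "pothole"
    MANHOLE_COVER = "manhole_cover"
    DARK_PATCH = "dark_patch"
    UNKNOWN = "unknown"

@dataclass
class CameraDetection:
    """One object the front-facing camera flagged (hypothetical schema)."""
    confidence: float       # model confidence, 0.0 - 1.0
    width_m: float          # estimated width of the object in metres
    depth_contrast: float   # how much darker the patch is than the road surface

def classify_detection(det: CameraDetection) -> RoadObject:
    """Toy stand-in for a trained model: map simple features to an object class."""
    if det.confidence < 0.5:
        return RoadObject.UNKNOWN
    if det.depth_contrast > 0.7 and det.width_m < 1.0:
        return RoadObject.POTHOLE
    if det.width_m <= 0.8 and det.depth_contrast < 0.3:
        return RoadObject.MANHOLE_COVER
    return RoadObject.DARK_PATCH

def plan_response(obj: RoadObject) -> str:
    """Map the classified object to an illustrative driving response."""
    return {
        RoadObject.POTHOLE: "slow down and steer around it if the lane allows",
        RoadObject.MANHOLE_COVER: "no action needed",
        RoadObject.DARK_PATCH: "no action needed",
        RoadObject.UNKNOWN: "collect more frames before deciding",
    }[obj]

detection = CameraDetection(confidence=0.82, width_m=0.6, depth_contrast=0.9)
obj = classify_detection(detection)
print(obj.value, "->", plan_response(obj))
```

In a real system the hand-written rules would be replaced by a model trained on many labeled examples, which is exactly the stop-sign-style learning the paragraph above describes.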

Using data collectively

So, to return to the idea of this forest of data: even in the example above, we’re still looking at a single data point, the camera. To test that one data point, let’s add some others. At the same moment the camera detects what the AI decides is a pothole, what other conditions are in play?

Did the brakes suddenly engage? Did the steering turn sharply in an otherwise straight lane? Did a wheel drop unexpectedly, followed by a hard bump detected in the suspension?

If none of those things happened, could you be sure the AI correctly identified a pothole? That’s the sort of conditional cross-checking AI is made for, and it produces a far more reliable report of road conditions.
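A rough sketch of that kind of conditional cross-check is below. The signal names and thresholds are invented for illustration; a real vehicle bus exposes far richer data, and a production system would weigh the evidence statistically rather than with a simple rule.

```python
from dataclasses import dataclass

@dataclass
class VehicleSnapshot:
    """Other signals recorded in the moment the camera flags a pothole (hypothetical)."""
    brakes_engaged: bool
    steering_delta_deg: float   # abrupt steering change in an otherwise straight lane
    suspension_shock_g: float   # vertical acceleration spike measured at the wheel

def corroborates_pothole(snap: VehicleSnapshot) -> bool:
    """Return True if at least one independent signal backs up the camera's verdict."""
    evidence = [
        snap.brakes_engaged,
        abs(snap.steering_delta_deg) > 5.0,   # driver swerved
        snap.suspension_shock_g > 1.5,        # wheel dropped and hit the far edge
    ]
    return any(evidence)

# The camera says "pothole"; check whether the rest of the vehicle agrees.
snapshot = VehicleSnapshot(brakes_engaged=False, steering_delta_deg=8.2, suspension_shock_g=0.4)
print("camera verdict corroborated:", corroborates_pothole(snapshot))
```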

As a last example of collective data, let’s use multiple sensors to report something that no single sensor on the car can see, and consider it from the data’s point of view.

Visualize a set of conditions as reported by a car’s sensors

At a given moment, the vehicle reports that the traction control system engages and then disengages a few moments later. Following that, the vehicle’s speed drops by 10 km/h. The external temperature gauge reads two degrees Celsius. The windshield wipers are on. The vehicle is on an overpass, and its cameras don’t indicate anything unusual.

Now add that, within a 15-minute period, four different vehicles provide a similar set of data points at the same location.

Have you put together that the vehicles found a patch of ice on the bridge? Yet there isn’t a single sensor on the car made to detect ice on the road. Now we’re starting to see the forest.
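Here is a minimal sketch of how reports like these might be pooled across vehicles. Everything in it, the report fields, the thresholds, the minimum vehicle count, is illustrative only, not a description of any particular platform.

```python
from dataclasses import dataclass

@dataclass
class VehicleReport:
    """One vehicle's snapshot at a given road segment (illustrative fields only)."""
    traction_control_engaged: bool
    speed_drop_kmh: float
    outside_temp_c: float
    wipers_on: bool
    on_overpass: bool

def suggests_ice(r: VehicleReport) -> bool:
    """One report consistent with ice: traction event, slowdown, near-freezing, wet, on a bridge."""
    return (r.traction_control_engaged
            and r.speed_drop_kmh >= 5
            and r.outside_temp_c <= 3
            and r.wipers_on
            and r.on_overpass)

def likely_icy(reports: list[VehicleReport], min_vehicles: int = 3) -> bool:
    """Flag the segment if enough vehicles in the time window tell the same story."""
    return sum(suggests_ice(r) for r in reports) >= min_vehicles

# Four vehicles cross the same overpass within 15 minutes.
window = [
    VehicleReport(True, 10, 2.0, True, True),
    VehicleReport(True, 8, 1.5, True, True),
    VehicleReport(True, 12, 2.5, True, True),
    VehicleReport(False, 0, 2.0, True, True),   # one car crossed without incident
]
print("likely ice on the overpass:", likely_icy(window))
```

No single report proves anything; it’s the agreement across vehicles in the same place and time window that lets the system infer a hazard none of them could sense directly.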

This approach to interpreting data is what’s behind many of the services we’re creating. It moves beyond looking at a collection of individual sensors and instead uses the whole vehicle as a sensor to determine the nature of the world around it.

Creating that picture of the world accurately makes for safer driver decisions and paves the way for autonomous features.
 

Bradley Walker

