Making autonomous vehicles more perceptive

Training neural networks to read human behaviour for safe self-driving.

Perceptive Automata, a startup that began at Harvard University, is using deep learning to give autonomous vehicles human-like intuition. Its technology observes the body language of pedestrians and reacts accordingly, enabling safer self-driving.

For instance, if a person is rushing toward the street while talking on the phone, their mind is likely focused elsewhere, not on their surroundings, so the autonomous vehicle should proceed with caution. Conversely, if a pedestrian is standing at a junction and looking both ways, it can be assumed that the person is aware of the surroundings and anticipating oncoming traffic.
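To make that intuition concrete, here is a toy sketch, not Perceptive Automata's actual logic, that maps a few invented body-language cues to a coarse caution level for the vehicle:

```python
# Toy illustration of the reasoning above (cue names and rules are invented):
# map a few observed pedestrian cues to a coarse caution level for the planner.
from dataclasses import dataclass

@dataclass
class PedestrianCues:
    moving_toward_road: bool
    on_phone: bool
    scanning_traffic: bool  # e.g. looking both ways at a junction

def caution_level(cues: PedestrianCues) -> str:
    """Return a coarse caution level from observed body-language cues."""
    if cues.moving_toward_road and cues.on_phone:
        return "high"    # distracted and approaching the road
    if cues.scanning_traffic:
        return "low"     # aware of surroundings, likely anticipating traffic
    return "medium"      # no strong evidence either way

print(caution_level(PedestrianCues(True, True, False)))   # -> high
print(caution_level(PedestrianCues(False, False, True)))  # -> low
```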

“Driving is more than solving a physics problem. In addition to identifying objects and people around you, you’re constantly making judgments about what’s in the mind of those people,” said Sam Anthony, Co-founder and Chief Technology Officer of Perceptive Automata.

Perceptive Automata’s software adds to the self-driving stack a layer of deep learning algorithms trained on real-world human behaviour data. By running these algorithms alongside the AI powering the vehicle, the car gains a more sophisticated view of its surroundings, further enhancing safety.
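A minimal sketch of that layered design, assuming a hypothetical intent model that scores pedestrian crops handed over by the main perception stack; the architecture, shapes, and output semantics below are illustrative assumptions, not the company's actual software:

```python
# Hypothetical secondary network that runs alongside the main perception AI,
# scoring each detected pedestrian for awareness and crossing intent.
import torch
import torch.nn as nn

class IntentHead(nn.Module):
    """Scores a batch of pedestrian crops for awareness and crossing intent."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Two sigmoid outputs per crop: P(aware of vehicle), P(intends to cross)
        self.head = nn.Linear(16, 2)

    def forward(self, crops: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.backbone(crops)))

# The main perception stack would supply pedestrian crops each frame;
# random tensors stand in here for three detected pedestrians.
intent_model = IntentHead().eval()
crops = torch.rand(3, 3, 64, 64)  # (num_pedestrians, channels, H, W)
with torch.no_grad():
    scores = intent_model(crops)  # shape (3, 2): one row per pedestrian
print(scores)
```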

To bolster a vehicle’s understanding of the outside world, the startup takes a unique approach to training deep learning algorithms. Traditional training uses a variety of images of the same object to teach a neural network to recognise that object: engineers show the algorithms millions of photos of, say, emergency vehicles, until the software can detect emergency vehicles on its own.
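For reference, that conventional single-concept training looks roughly like the following PyTorch loop; the tiny stand-in dataset and model are placeholders for the millions of real labeled images such training actually uses:

```python
# Conventional supervised training on one concept: show the network many
# labeled images and optimize a classification loss.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 64 fake images, binary label "emergency vehicle or not".
images = torch.rand(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # logits for the two classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(2):  # a real run would use far more data and many epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```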

Rather than using images for just one concept, Perceptive Automata relies on data that can communicate a range of information to the networks in a single frame. By combining facial expressions with other markers, such as whether a person is holding a coffee cup or a cellphone, the software can infer where the pedestrian is focusing their attention.
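One plausible way to fuse such cues, sketched below under assumed inputs: face-expression features from an upstream model are concatenated with binary object cues (phone, coffee cup) and regressed to a single attention score. All names and dimensions here are assumptions, not the actual system:

```python
# Hypothetical multi-cue fusion: face features + object cues -> attention score.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, face_dim: int = 128, num_cues: int = 2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(face_dim + num_cues, 64), nn.ReLU(),
            nn.Linear(64, 1),  # one attention score per pedestrian
        )

    def forward(self, face_feats: torch.Tensor, object_cues: torch.Tensor) -> torch.Tensor:
        x = torch.cat([face_feats, object_cues], dim=-1)
        return torch.sigmoid(self.fuse(x))

model = AttentionFusion()
face_feats = torch.rand(1, 128)           # assumed upstream face-expression features
object_cues = torch.tensor([[1.0, 0.0]])  # [holding_phone, holding_cup]
print(model(face_feats, object_cues))     # estimated attention in [0, 1]
```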

Perceptive Automata depends on NVIDIA DRIVE for powerful yet energy-efficient performance. The in-vehicle deep learning platform allows the software to analyse a wide range of body-language markers and predict the paths of pedestrians. The software can make these calculations for one person in the car’s field of view or for an entire crowd, creating a safer environment for everyone on the road.
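Because the batch dimension is simply the number of people in view, scoring one pedestrian or an entire crowd is the same forward pass; a sketch reusing the hypothetical IntentHead from the earlier example, running on a GPU when one is available:

```python
# Same model, same call: the crowd case just means a larger batch.
# Assumes the IntentHead class defined in the earlier sketch.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = IntentHead().to(device).eval()

for num_people in (1, 50):  # lone pedestrian vs. crowd
    crops = torch.rand(num_people, 3, 64, 64, device=device)
    with torch.no_grad():
        scores = model(crops)  # one row of scores per person in view
    print(num_people, scores.shape)
```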

Adding this layer of nuance to autonomous vehicle perception ultimately creates a smoother ride. The more information available to a self-driving car, the better it can adapt to the ebb and flow of traffic, seamlessly integrating into an ecosystem where humans and AI share the road.

“As the industry’s been maturing and as there’s been more testing in urban environments, it’s become much more apparent that nuanced perception that comes naturally to humans may not be so for autonomous vehicles,” said Anthony.
