Perceptive Automata Teaches Autonomous Vehicles to Make Human-Level Deductions
You can learn a lot about a person in just one glance. You can tell if they’re tired, distracted or in a rush. You know if they’re headed home from work or hitting the gym.
And you don’t have to be Sherlock Holmes or Columbo to make these types of instant deductions — your brain does it constantly. It’s so good at this type of perception, in fact, you hardly realize it’s happening.
Using deep learning, Perceptive Automata, a startup spun out from Harvard University, is working to infuse this same kind of human intuition into autonomous vehicles.
Visual cues like body language or what a person is holding can provide important information when making driving decisions. If someone is rushing toward the street while talking on the phone, you can conclude their mind is likely focused elsewhere, not on their surroundings, and proceed with caution. Likewise if a pedestrian is standing at a crosswalk and looks both ways — you know they’re aware and anticipating oncoming traffic.
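The kind of reasoning described above can be sketched in code. The following is a minimal, hypothetical illustration of fusing per-frame visual cues into pedestrian-state estimates; the cue names, weights, and fusion rule are illustrative assumptions, not Perceptive Automata's actual model.

```python
# Hypothetical sketch: combining detected visual cues (each scored 0..1)
# into "awareness" and "intent to cross" estimates for one pedestrian.
# Cue names and weights are assumptions chosen for illustration only.

def estimate_pedestrian_state(cues):
    """Fuse cue probabilities into awareness/intent scores in [0, 1]."""
    # Cues suggesting the pedestrian is attending to traffic.
    awareness = (0.6 * cues.get("looked_both_ways", 0.0)
                 + 0.4 * cues.get("facing_traffic", 0.0))
    # Distraction cues (e.g., talking on the phone) reduce the estimate.
    distraction = 0.7 * cues.get("on_phone", 0.0)
    awareness = max(0.0, awareness - distraction)
    # Intent to cross, from position and motion cues.
    intent = (0.5 * cues.get("at_crosswalk", 0.0)
              + 0.5 * cues.get("moving_toward_street", 0.0))
    return {"awareness": round(awareness, 2),
            "intent_to_cross": round(intent, 2)}

# The distracted phone-talker from the scenario above: likely to enter
# the street, but probably not attending to traffic.
state = estimate_pedestrian_state({"on_phone": 1.0,
                                   "moving_toward_street": 1.0})
```

In a real system these scores would come from learned networks rather than hand-set weights, but the structure, many weak cues fused into a judgment about a person's state of mind, is the same.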
“Driving is more than solving a physics problem,” said Sam Anthony, co-founder and chief technology officer at Perceptive Automata. “In addition to identifying objects and people around you, you’re constantly making judgments about what’s in the mind of those people.”
For its self-driving car development, Perceptive Automata’s software adds a layer of deep learning algorithms trained on real-world human behavior data. By running these algorithms simultaneously with the AI powering the vehicle, the car can garner a more sophisticated view of its surroundings, further enhancing safety.
A Perceptive Perspective
To bolster a vehicle’s understanding of the outside world, Perceptive Automata takes a unique approach to training deep learning algorithms. Traditional training uses a variety of images of the same object to teach a neural network to recognize that object. For example, engineers show a deep learning algorithm millions of photos of emergency vehicles until the software can detect emergency vehicles on its own.
Rather than using images for just one concept, Perceptive Automata relies on data that can communicate a range of information to the network in a single frame. By combining facial expressions with other cues, such as whether a person is holding a coffee cup or a cellphone, the software can draw conclusions about where a pedestrian is focusing their attention.
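This multi-attribute idea can be sketched as a network with several output "heads" reading one shared per-frame feature vector, rather than a single object label. The sketch below uses NumPy with random weights; the feature size, head names, and weights are illustrative assumptions, not Perceptive Automata's actual architecture.

```python
import numpy as np

# Minimal multi-head sketch: one shared embedding per frame feeds several
# small attribute heads. All names and sizes are assumptions for
# illustration, not Perceptive Automata's real network.

rng = np.random.default_rng(0)
FEATURE_DIM = 128  # assumed size of the shared per-frame embedding

# One linear head per attribute, each producing a 2-way (yes/no) score.
heads = {
    "is_aware":         rng.standard_normal((FEATURE_DIM, 2)),
    "intends_to_cross": rng.standard_normal((FEATURE_DIM, 2)),
    "holding_object":   rng.standard_normal((FEATURE_DIM, 2)),
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_attributes(frame_features):
    """Return one probability distribution per attribute for a frame."""
    return {name: softmax(frame_features @ w) for name, w in heads.items()}

# Stand-in for a CNN embedding of one camera frame.
features = rng.standard_normal(FEATURE_DIM)
preds = predict_attributes(features)
```

The point of the structure is that a single forward pass yields every attribute at once, rather than running a separate detector per concept.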
Perceptive Automata depends on NVIDIA DRIVE for powerful yet energy-efficient performance. The in-vehicle deep learning platform allows the software to analyze a wide range of body language markers and predict the pathway of pedestrians. The software can make these calculations for one person in the car’s field of view or an entire crowd, creating a safer environment for everyone on the road.
Humanizing the Car of the Future
Adding this layer of nuance to autonomous vehicle perception ultimately creates a smoother ride, Anthony said. The more information available to a self-driving car, the better it can adapt to the ebb and flow of traffic, seamlessly integrating into an ecosystem where humans and AI share the road.
“As the industry’s been maturing and as there’s been more testing in urban environments, it’s become much more apparent that nuanced perception that comes naturally to humans may not for autonomous vehicles,” Anthony said.
Perceptive Automata’s software incorporates sophisticated deep neural networks into the driving stack, offering a safe and robust solution to this perception challenge. With this higher level of vehicle understanding, self-driving cars can drive smarter and safer.