From Unmanned Systems Magazine: Nvidia brings artificial intelligence to automobiles
Imagine cars so fully autonomous that the interiors have no steering wheels, pedals, or rear-view mirrors. Imagine them being able to carry passengers safely along any route in any weather, in any traffic conditions — skillfully spotting, assessing and avoiding hazards even more reliably than human drivers.
It’s a vision that would require immense artificial intelligence computing power to realize. And that is what deep-learning innovator Nvidia says its Drive PX Pegasus provides.
In announcing Pegasus in October, Nvidia billed it as the “world’s first AI computer to make (fully autonomous) robotaxis a reality.” Pegasus is scheduled for distribution to auto manufacturers and other customers in the second half of 2018.
The new computer is only about the size of a license plate. Early designs from Nvidia partners place it in a false wall between the backseat and trunk of a car, among other unobtrusive possibilities. Still, officials say Pegasus packs more than 10,000 of Nvidia’s Cuda cores of parallel processing power and uses only a few hundred watts of electricity. They say it can handle all by itself what in the recent past has required racks of computers that fill the trunks of prototype vehicles and consume up to 10,000 watts.
Nvidia tallies the performance of Pegasus at more than 320 trillion deep-learning operations per second, or 320 TOPS. Compare that with its two predecessors in the line, the 2015 Drive PX and the 2016 Drive PX 2, which delivered 8 and 24 TOPS, respectively. Then consider: that advance has come in roughly the time it takes a teenager to finish high school, even as the computer industry braces for the end of Moore’s Law.
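The generational jump is easy to sanity-check from the figures cited above. The short sketch below is mine, not Nvidia's; the dictionary keys and function name are illustrative:

```python
# Peak deep-learning throughput cited in the article for each generation,
# in TOPS (trillions of operations per second).
TOPS = {
    "Drive PX (2015)": 8,
    "Drive PX 2 (2016)": 24,
    "Drive PX Pegasus (2017)": 320,
}

def speedup(newer: str, older: str) -> float:
    """Ratio of one generation's cited peak throughput to an earlier one's."""
    return TOPS[newer] / TOPS[older]

print(speedup("Drive PX 2 (2016)", "Drive PX (2015)"))         # 3.0  (tripled)
print(speedup("Drive PX Pegasus (2017)", "Drive PX 2 (2016)"))  # ~13.3
print(speedup("Drive PX Pegasus (2017)", "Drive PX (2015)"))    # 40.0
```

That 13-fold jump over Drive PX 2 is the basis for Nvidia's claim, discussed below, that Pegasus can replace multiple earlier computers in one vehicle.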
Barriers certainly remain on the road to full autonomy. While sensor and simulation technologies enable impressively extensive testing of the capabilities that Nvidia and its partners want to provide, many urban planners are still climbing the learning curve, and the regulatory infrastructure for certifying safety is young.
Still, proponents emphasize that the potential of autonomous vehicles for social good is great. And with Pegasus, Nvidia maintains, the power for progress is here.
“There’s nothing else like it,” says Tim Wong, who handles technical marketing for Nvidia’s Autonomous Vehicles group. “There’s nothing else even close.”
Step by step
Wong recalled the “ground-floor” excitement he felt in joining the Autonomous Vehicles group when it was new, just four years ago. Design for Drive PX was still under way. But Nvidia had spent decades researching visual-computing technologies.
“The whole idea of autonomous driving still felt like something, you know, from cartoons like ‘The Jetsons’ or maybe ‘Knight Rider’ — KITT from ‘Knight Rider.’ We all used to fantasize about these things, right?”
In promoting Drive PX, Nvidia created a virtual garage to demonstrate that a car equipped with the computer might have the smarts to follow the command: “Go park yourself.”
Like Pegasus, the original Drive PX packed a lot of compute for its day into a relatively small container. But Wong says it didn’t take long for customers to fill up the computer’s capacity, and to start using multiple computers. The same thing happened after Nvidia released Drive PX 2, even though it had tripled the TOPS of its precursor.
“It feels like a gas law, where gas expands to fill the space available,” Wong says. “I think when we release compute platforms, people realize then, ‘Oh, I’ve got a lot more compute! So all of the stuff I’ve been holding back on, I want to put this in, I want to put this in and this in.’ And suddenly they’re out of space.”
With more than 13 times the TOPS of Drive PX 2, Pegasus was designed to eliminate the need for multiples. But with a growing array of sensor and software options, Nvidia’s partners might find a way to fill its capacity. Wong notes that one partner has talked about developing olfactory sensors.
“Yeah, think about it,” he says. “Would you get into a cab that someone just got out of and spilled their stinky food?
“You know, [robotaxis would have] to manage that. Right? Human drivers can manage whether a passenger is drunk or gets sick or brings stinky food on board and spills it. Because they’re there. But if it’s fully autonomous, you know, you want to make sure the condition of the vehicle for the next passenger is as pristine as possible. So we find that we have to set aside computing power for that.”
Hundreds of customers are interested in Pegasus, Wong says, and so far, none has spoken of using more than one per vehicle. “But I would expect it.”
One thing keeping Nvidia busy regarding Pegasus is preparing to show the public what it can do. Wong coordinates demonstrations for trade-show crowds, urban planners and other audiences, and he says the visuals often are what make the difference between confusion and understanding, or between fear and enthusiasm.
“One of the things I show when I do speaking engagements is a lane detector,” Wong says. “And so what it’s doing is it’s looking down the road, figuring out where the lanes are, so it knows to stay in the center. Now, detecting lines on a road is not hard, but think about where you go from there.”
For example, neural networks for autonomous driving can be trained to take note of differences in lane markings — dashed lines, raised markings, flags and cones signaling roadwork. They can even guide a vehicle in white-out conditions, where lane lines disappear.
“What you and I would do is either we’ll line up behind cars and kind of create our own lane, or we’ll look for tire markings to follow,” Wong says. “Well, you can teach a neural network to do all of those things … and that’s kind of an a-ha moment for people, when they see that capabilities go beyond just flash-card detection work to actually thinking more like a human.”
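The fallback behavior Wong describes — prefer painted lane lines, then line up behind traffic, then follow tire tracks — can be sketched as a simple priority rule. The toy function below is my illustration, not Nvidia code; every name and the one-dimensional lateral coordinate are assumptions:

```python
from typing import Optional

def steering_reference(
    lane_left_x: Optional[float],
    lane_right_x: Optional[float],
    lead_vehicle_x: Optional[float],
    tire_track_x: Optional[float],
) -> Optional[float]:
    """Pick a lateral target the way Wong describes drivers coping with
    vanishing lane markings: painted lanes first, then the car ahead,
    then tire tracks in the snow. Inputs are detected x-positions, or
    None when that cue is not visible."""
    if lane_left_x is not None and lane_right_x is not None:
        return (lane_left_x + lane_right_x) / 2.0  # center of the lane
    if lead_vehicle_x is not None:
        return lead_vehicle_x                      # line up behind traffic
    if tire_track_x is not None:
        return tire_track_x                        # follow tracks to stay on course
    return None                                    # no cue at all

# Clear day: both lane lines are visible, so center between them.
print(steering_reference(2.0, 6.0, None, None))   # 4.0
# White-out: only the car ahead is visible, so follow it.
print(steering_reference(None, None, 3.5, None))  # 3.5
```

In the systems Wong describes, each of those cues would come from a trained neural network rather than being handed in directly, but the "a-ha" for audiences is the same: the vehicle degrades gracefully through human-like strategies instead of failing when the lines disappear.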
Proceeding with caution
A-ha moments have been abundant in recent years as the idea of autonomous driving has gained traction. But public confidence took a hit in March when an Uber SUV that had a human at the wheel but was in self-driving mode fatally struck a 49-year-old woman crossing a dark street on foot in Tempe, Arizona.
At the time this edition of Unmanned Systems magazine went to press, that incident was still under investigation. But early assessments by local law enforcement included a statement from Tempe police Chief Sylvia Moir to the San Francisco Chronicle: “It’s very clear it would have been difficult to avoid this collision in any kind of mode based on how she came from the shadows right into the roadway.”
Wong didn’t discuss the Arizona fatality. But he emphasizes that safety has been a top priority for Nvidia as it has developed and tested the Drive PX series of computers. Pegasus is designed around a diverse and redundant system, and the platform is built to support a classification called ASIL-D, the highest level of automotive functional safety.
It is only through vigilance about safety that Nvidia can fulfill the sense of mission it has carried throughout its push to enable fully autonomous vehicles: a belief that they will move transportation toward the greater good.
That could start with benefits for people who can’t drive themselves.
“I mean, my mom is 84,” Wong says. “She only drives during the day and when it’s not raining. Autonomous vehicles would open her world up.”
Wong also speaks of a head-on collision in 2017 that broke his collarbone and reminded him of the fallibility of human drivers.
“The other driver was making a left turn and didn’t see me, and so she hit me at 40 mph,” he says. “Where you know if her car had been an [intelligent] car, it wouldn’t have let her enter the intersection because it wasn’t clear. It was a great example of an unnecessary accident, and if we had more intelligence in cars, it could have been prevented.”
Below: An artistic representation of Nvidia’s Drive PX computing system. Image: Nvidia