Elon Musk revealed yesterday that Tesla is developing its own hardware in its bid to bring self-driving cars to the public. Up to now Tesla has used Nvidia’s Drive Platform, but this will be replaced by ‘Hardware 3,’ which will be, according to Tesla at least, the ‘world’s most advanced computer for autonomous driving.’
The Hardware 3 chip has been in the works for a few years now, with Jim Keller joining Tesla from chip manufacturer AMD back in 2016, and Musk confirming the project in December 2017.
Keller has since left Tesla, and the Autopilot project – Tesla’s self-driving car initiative – is now being led by Pete Bannon. “The chips are up and working, and we have drop-in replacements for S, X and 3, all have been driven in the field,” Bannon said. “They support the current networks running today in the car at full frame rates with a lot of idle cycles to spare.”
Why has Tesla developed its own AI hardware?
By developing its own AI hardware, Tesla can build solutions tailored to its needs. It means it isn’t relying on others – like Nvidia, say – to build what it needs.
Bannon explained that “nobody was doing a bottoms-up design from scratch.” By bringing hardware development in-house, Tesla will not only be able to design chips to its own requirements, it will also find it easier to plan and move at its own pace.
Essentially, it allows Tesla to take control of its own destiny. In the context of safety concerns around self-driving cars, taking on responsibility for developing the hardware on which your machine intelligence will sit makes a lot of sense. It means you can assume responsibility for solving your own problems.
How does Tesla’s Hardware 3 compare with other chips?
The Hardware 3 chips are, according to Musk, 10x better than the current Nvidia GPUs. The GPUs in Tesla’s current Autopilot system can analyze 200 frames per second; the new hardware can process 2,000 frames per second.
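Taken at face value, those throughput figures translate into per-frame latency budgets. A quick back-of-the-envelope sketch (the frame rates come from Tesla’s stated numbers; the latency-budget framing is just arithmetic):

```python
# Back-of-the-envelope check of the quoted throughput figures:
# 200 fps for the current Nvidia-based system, 2,000 fps claimed
# for Hardware 3.

def frame_budget_ms(frames_per_second: float) -> float:
    """Time available to process a single frame, in milliseconds."""
    return 1000.0 / frames_per_second

current_fps = 200    # current Nvidia-based Autopilot hardware
hw3_fps = 2000       # claimed Hardware 3 throughput

print(frame_budget_ms(current_fps))  # 5.0 ms per frame
print(frame_budget_ms(hw3_fps))      # 0.5 ms per frame
print(hw3_fps / current_fps)         # 10.0x speed-up
```

In other words, the new chip would leave ten times as much headroom per frame – which is where Bannon’s “a lot of idle cycles to spare” comes from.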
This significant performance boost should, in theory, bring significant gains in terms of safety.
What’s particularly remarkable is that the new chip isn’t actually costing Tesla any more than its current solution.
Musk explained how the team was able to find such significant performance gains.
“The key is to be able to run the neural network at a fundamental, bare metal level. You have to do these calculations in the circuit itself, not in some sort of emulation mode, which is how a GPU or CPU would operate. You want to do a massive amount of [calculations] with the memory right there.”
The hardware is expected to roll out in 2019 and will be offered as an upgrade to all owners of Autopilot 2.0 cars and up.