Bosch and NVIDIA Team Up for Xavier-Based Self-Driving Systems for Mass Market Cars
by Anton Shilov on March 18, 2017 2:00 PM EST
Bosch and NVIDIA on Thursday announced plans to co-develop self-driving systems for mass-market vehicles. The systems will use NVIDIA’s next-generation SoC, codenamed Xavier, as well as the company’s AI-related IP, while Bosch will contribute its expertise in automotive electronics and navigation.
Typically, automakers mention self-driving cars in the context of premium and commercial vehicles, but given the opportunity, self-driving will clearly become part of the vast majority of cars over the next decade and beyond. Bosch and NVIDIA are working on an autopilot platform for mass-market vehicles that will not cost as much as people might expect and can therefore be deployed widely. To build the systems, the two companies will use NVIDIA’s upcoming Drive PX platform based on the Xavier system-on-chip, a next-generation Tegra processor set to enter mass production sometime in 2018 or 2019.
Bosch and NVIDIA did not disclose many details about their upcoming self-driving systems, but they indicated that they are targeting Level 4 autonomous capabilities, in which a car can drive on its own without any human intervention. To enable Level 4 autonomy, NVIDIA will offer its Xavier SoC featuring eight general-purpose custom ARMv8-A cores designed in-house, a GPU based on the Volta architecture with 512 stream processors, hardware encoders/decoders for video streams at resolutions up to 7680×4320, and various I/O capabilities.
From a performance point of view, Xavier is now expected to hit 30 Deep Learning Tera-Ops (DL TOPS), a metric based on 8-bit integer operations, which is 50% higher than NVIDIA’s Drive PX 2, the platform currently used by various automakers to build their autopilot systems (e.g., Tesla Motors uses the Drive PX 2 in various vehicles). NVIDIA's goal is to deliver this at 30 W, for an efficiency ratio of 1 DL TOPS per watt. This is a rather low level of power consumption given that the chip is expected to be produced using TSMC’s 16 nm FinFET+ process technology, the same process used to make the Tegra (Parker) SoC in the Drive PX 2.
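For context, the headline efficiency figure is simply throughput divided by power draw. Below is a minimal sketch of that arithmetic, using only the figures quoted above; the helper function name is ours, purely for illustration:

```python
# Back-of-the-envelope check of the efficiency figure quoted above:
# 30 DL TOPS at 30 W works out to 1 DL TOPS per watt.

def tops_per_watt(dl_tops: float, watts: float) -> float:
    """Deep-learning throughput per watt (DL TOPS / W)."""
    return dl_tops / watts

xavier_efficiency = tops_per_watt(dl_tops=30.0, watts=30.0)
print(f"Xavier target efficiency: {xavier_efficiency:.1f} DL TOPS per watt")  # 1.0
```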
The developers say that the next-gen Xavier-based Drive PX will be able to fuse data from multiple sensors (cameras, lidar, radar, ultrasonic, etc.), and that its compute performance will be enough to run deep neural networks that sense the surroundings, understand the environment, predict the behavior and position of other objects, and help ensure the safety of the driver in real time. Given that the upcoming Drive PX will be more powerful than the Drive PX 2, it should better satisfy the demands of automakers. In fact, since we are talking about a completely autonomous self-driving system, the more compute efficiency NVIDIA can get out of Xavier, the better.
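To make the “sensor fusion plus neural networks” description a little more concrete, here is a rough, purely illustrative sketch of what such a perception loop might look like in software. None of the class or function names come from NVIDIA or Bosch; they are placeholders for the stages the companies describe:

```python
# Illustrative sketch only: a simplified perception loop of the kind the
# article describes (multi-sensor fusion feeding deep neural networks).
# All names here are hypothetical, not NVIDIA's DriveWorks API.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str             # e.g. "vehicle", "pedestrian"
    position: tuple        # (x, y) in metres, vehicle frame
    predicted_path: list   # short-horizon trajectory forecast

class PerceptionStack:
    def __init__(self, camera, lidar, radar, fusion_net, prediction_net):
        self.sensors = (camera, lidar, radar)
        self.fusion_net = fusion_net          # DNN that fuses raw sensor data
        self.prediction_net = prediction_net  # DNN that forecasts object motion

    def step(self) -> List[Detection]:
        # 1. Grab time-aligned frames from every sensor.
        frames = [sensor.read() for sensor in self.sensors]
        # 2. Fuse them into one scene representation and detect objects.
        scene = self.fusion_net.infer(frames)
        # 3. Predict where each detected object is headed next.
        return [Detection(obj.label, obj.position, self.prediction_net.infer(obj))
                for obj in scene.objects]
```

The point is simply that every stage of such a loop, from fusion to detection to behavior prediction, is a neural-network inference, which is why the DL TOPS figure above is the number that matters for this workload.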
Speaking of the SoC, it is highly likely that the combination of its performance, power and level of integration is what attracted Bosch to the platform. A single chip with moderate power consumption means that Bosch engineers will be able to design relatively compact and reasonably priced self-driving systems and then help automakers integrate them into their vehicles.
Unfortunately, we do not know which car brands will use the autopilot systems co-developed by Bosch and NVIDIA. Bosch supplies automotive electronics to many carmakers, including PSA, which owns the Peugeot, Citroën and Opel brands.
Neither Bosch nor NVIDIA gave any indication of when they expect actual cars featuring their autopilot systems to hit the road. But since NVIDIA plans to start sampling Xavier in late 2017 and then mass-produce it in 2018 or 2019, it is logical to expect the first commercial applications based on the SoC to become available sometime in the 2020s, after the (extensive) validation and certification period required for an automotive system.
Source: NVIDIA
43 Comments
Meteor2 - Saturday, March 18, 2017 - link
Tesla reckons their current hardware will support 'full autonomy': https://www.tesla.com/en_GB/blog/all-tesla-cars-be...
And that's using the Pascal-based Drive PX2. Heck, didn't Nvidia themselves demonstrate their 'BB8' car last year, saying it only needed four days to learn to drive?
So which will it be? Tesla Level 4 in a year, or Bosch/Nvidia Level 4 (or Audi/Nvidia) in the 2020s? I wouldn't bet against Tesla...
PS Level 4 basically means the human driver is monitoring the car and responsible for it, exactly the same as a modern fighter jet or airliner. It won't need any certification. Level 5 is the level where the steering wheel goes, and the car (or rather its manufacturer) is responsible for it. Some car makers have said they want to skip straight to Level 5 but I doubt any will be able to sit by while competitors offer Level 4.
ragenalien - Saturday, March 18, 2017 - link
The Drive PX 2 board supports up to two MXM cards for additional compute power. The SoC in it isn't powerful enough on its own for Level 4 autonomy.
Meteor2 - Sunday, March 19, 2017 - link
Yes, you're right; more recently Musk has said the cars might need a hardware upgrade for Level 4.
geekman1024 - Sunday, March 19, 2017 - link
do you mean 'full-auto-money'?
name99 - Sunday, March 19, 2017 - link
Oh my god! You mean companies plan to CHARGE US for this technology? Is the US government aware of this scandal?
If you're going to bring in political pseudo-outrage, can you at least make it interesting?
Yojimbo - Saturday, March 18, 2017 - link
Getting 5 times the efficiency through architectural enhancements to the GPU doesn't seem likely. I think the reason it can get 1 DL TOPS per watt is because it contains an ASIC geared towards inferencing convolutional neural networks. That is probably what the "CVA" is in the SoC block diagram. Power consumption is kept low by reducing data movement through the reuse of data and by storing the data in "scratchpads" located very close to the processing elements. That's how they can go from ~0.2 DL TOPS per watt with the SoC-only version of the Drive PX 2 (Parker) to 1 DL TOPS per watt with Xavier on a similar manufacturing process. I wonder, though, if it allows for flexible precision or if it only allows for the inferencing of 8-bit integer based networks.
chrissmartin - Saturday, March 18, 2017 - link
OMG! ...30 watts. If that's for the whole system, meaning CPU + GPU, and the GPU contains 512 cores, then Volta would be power efficient as hell. A GPU like the 940M with 384 cores has a TDP of 36 watts, and the laptop 1050 with 640 cores is 75 watts. Volta could be awesome!
Yojimbo - Saturday, March 18, 2017 - link
I think Volta is supposed to be about 60% more energy efficient than Pascal, at least in terms of GEMM (matrix multiplication). But comparing the Xavier SoC to a 940M or even the 1050 brings with it a lot of issues. For example, the SoC includes a CPU, yes, but the 940M and 1050 power numbers include power-hungry GDDR5 VRAM.
CrazyElf - Saturday, March 18, 2017 - link
Depends on how they are clocked. There are too many variables here for us to know for sure:
1. Process enhancements from TSMC
2. Architectural enhancements
3. The car version may be underclocked
We don't know what real world gaming performance is going to be like.
ddriver - Sunday, March 19, 2017 - link
Those cores are designed from the ground up for machine learning. This means maximized throughput at low-precision integers and minimized transistor cost, which would translate into increasingly abysmal performance as the data type precision increases.
You cannot make a direct comparison in terms of power efficiency with a general-purpose GPU core, which is optimized for 32-bit float computation.