NVIDIA GTC 2015 Keynote Live Blog
by Ryan Smith on March 17, 2015 12:10 PM EST - Posted in
- GPUs
- Trade Shows
- NVIDIA
- GTC 2015
02:12PM EDT - And we're done here
02:11PM EDT - Wrap-up
02:10PM EDT - Elon and Jen-Hsun are wrapping up
02:08PM EDT - Multiple levels of security. Infotainment and actual driving controls are separated
02:07PM EDT - Now discussing car hacking
02:00PM EDT - Fast and slow is easy. Mid-speed is hard
02:00PM EDT - Discussing how various speeds are easier or harder
01:55PM EDT - Even if self-driving cars become a viable product, it will take 20 years to replace the car fleet
01:54PM EDT - Jen-Hsun and Elon are discussing the potential and benefits of self-driving cars
01:53PM EDT - Musk responds that he's more worried about "strong" AI, and not specialty AI like self-driving cars
01:52PM EDT - Jen-Hsun is grilling Musk on some of his previous comments on the danger of artificial intelligence
01:51PM EDT - Tesla meets Tesla
01:50PM EDT - Now on stage: Elon Musk
01:49PM EDT - Drive PX dev kit available in May, $10,000
01:46PM EDT - Jen-Hsun has a Drive PX in hand
01:42PM EDT - The key to training cars is not to teach them low-level physics, but to teach them the high-level aspects of how the world works and how humans behave
01:42PM EDT - How do you make a car analogy when you're already talking about cars? With a baby analogy
01:39PM EDT - I for one am all for more Freespace
01:39PM EDT - Jen-Hsun talking about how free space is important to understand in a self-driving car
01:36PM EDT - Recapping the Drive PX system
01:36PM EDT - NVIDIA's vision: to augment ADAS with GPU deep learning
01:35PM EDT - Final subject of the day: self-driving cars
01:34PM EDT - And we're done with roadmaps. Looks like no updates on the Tegra roadmap
01:32PM EDT - 28 TFLOPS FP32 (four GTX Titan X cards at roughly 7 TFLOPS each)
01:32PM EDT - DIGITS DevBox: 1300W
01:30PM EDT - Now Jen-Hsun is explaining why he thinks Pascal will offer 10x the performance of Maxwell, by combining the FP16 performance gains with other improvements in memory bandwidth and better interconnect performance through NVLink
01:29PM EDT - FP16 is quite imprecise, but NV seems convinced it can still be useful for neural network convolution
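For readers wondering what FP16's imprecision actually looks like, here's a minimal numpy sketch. This is entirely our own illustration with made-up values, not anything shown on stage: it demonstrates how half precision rounds small updates away, and why mixed-precision schemes typically keep an FP32 master copy for accumulation.

```python
import numpy as np

# FP16 has a 10-bit mantissa, so near 1.0 the spacing between
# representable values is about 0.001.
w16 = np.float16(1.0)
update = np.float16(0.0001)
print(w16 + update)  # prints 1.0 -- the small update is rounded away entirely

# Mixed precision sidesteps this: do the fast math in FP16, but
# accumulate weight updates into an FP32 "master" copy.
w32 = np.float32(1.0)
for _ in range(1000):
    w32 += np.float32(0.0001)  # FP32 accumulation keeps every update
print(w32)            # ~1.1, as expected
print(np.float16(w32))  # cast back down only for the fast FP16 math
```

The design point is that the weights themselves tolerate low precision; it's the accumulation of many tiny updates that doesn't, which is presumably why NVIDIA is comfortable pushing FP16 for convolution throughput.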
01:28PM EDT - Pascal has 3x the memory bandwidth of Maxwell. With GTX Titan X at 336GB/sec, that would put memory bandwidth at roughly 1TB/sec
01:28PM EDT - It sounds like Pascal will come with Tegra X1's FP16 performance improvements
01:27PM EDT - Mixed precision: discussing the use of FP16
01:27PM EDT - 4x the mixed precision performance of Maxwell
01:26PM EDT - Pascal offers 2x the perf per watt of Maxwell
01:26PM EDT - Also reiterating the fact that Pascal will be the first NVIDIA GPU with 3D memory and NVLink
01:25PM EDT - New feature: mixed precision
01:25PM EDT - Roadmap update on Pascal and Volta
01:24PM EDT - Now: roadmaps
01:23PM EDT - NVIDIA is building them one at a time
01:23PM EDT - $15,000, available May 2015
01:22PM EDT - NVIDIA doesn't expect to sell a whole lot of these, but rather hopes they'll bootstrap further use of DIGITS
01:22PM EDT - Jen-Hsun is stressing that this is a box for developers, not for end-users
01:21PM EDT - Specialty box for running the DIGITS middleware
01:21PM EDT - 4 cards in a complete system. As much compute performance as you can get out of a single electrical outlet (a standard 15A/120V circuit tops out around 1800W, so a 1300W box is close to the practical limit)
01:20PM EDT - New product: the DIGITS DevBox
01:19PM EDT - Web-based UI
01:19PM EDT - DIGITS: A deep GPU training system for data scientists
01:18PM EDT - NVIDIA now discussing new frameworks for deep learning
01:16PM EDT - It's surprisingly accurate
01:16PM EDT - Demonstrating how he is able to use neural networks to train computers to describe scenes they're seeing
01:12PM EDT - "Automated image captioning with convnets and recurrent nets"
01:12PM EDT - Now on stage: Andrej Karpathy of Stanford
01:09PM EDT - Deep learning can also be applied to medical research. Using networks to identify patterns in drugs and biology
01:07PM EDT - Discussing how quickly companies have adopted deep learning, both major and start-ups
01:03PM EDT - Continuing to discuss the training process, and how networks get more accurate over time
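As a concrete picture of what "training" means here, below is a toy gradient-descent sketch of our own (not the keynote's demo): the network's weights are repeatedly nudged in the direction that reduces prediction error, which is why accuracy climbs with more passes over the data. Task, data, and learning rate are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy task: learn y = 1 if x1 + x2 > 1 else 0, from 200 random points.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)
b = 0.0
for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)         # gradient of logistic loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                       # step downhill
    b -= 0.5 * grad_b
    if epoch % 20 == 0:
        acc = np.mean((p > 0.5) == y)
        print(f"epoch {epoch}: accuracy {acc:.2f}")  # climbs toward ~1.0
```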
12:59PM EDT - Now discussing AlexNet in depth
12:49PM EDT - The moral of the story: GPUs are a good match for image recognition via neural networks
12:48PM EDT - Now discussing AlexNet, a GPU-powered neural network and one of the first of its kind, which proved significantly better than its CPU-based counterparts
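To illustrate why GPUs are such a good match here, consider a sketch of our own (not AlexNet's actual code): a convolution layer can be unrolled into one large matrix multiply, which is exactly the dense, parallel arithmetic GPUs are built for. Image and kernel sizes below are arbitrary.

```python
import numpy as np

def conv2d_as_matmul(image, kernels):
    """Naive 'im2col' convolution: unroll patches, then one big matmul."""
    k = kernels.shape[-1]                     # square kernel size
    out_h = image.shape[0] - k + 1
    out_w = image.shape[1] - k + 1
    # Gather every kxk patch into a row (a memory-for-speed trade).
    patches = np.stack([
        image[i:i + k, j:j + k].ravel()
        for i in range(out_h) for j in range(out_w)
    ])                                        # (out_h*out_w, k*k)
    flat = kernels.reshape(len(kernels), -1)  # (n_kernels, k*k)
    # One dense matrix multiply covers all positions x all filters at
    # once -- the operation GPUs execute far faster than CPUs.
    return (patches @ flat.T).T.reshape(len(kernels), out_h, out_w)

image = np.random.rand(28, 28)
kernels = np.random.rand(16, 3, 3)            # 16 learned 3x3 filters
print(conv2d_as_matmul(image, kernels).shape)  # (16, 26, 26)
```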
12:47PM EDT - Recounting a particular story about neural networks and handwriting recognition in 1999
12:44PM EDT - Discussing the history of neural networks for image recognition
12:42PM EDT - Now shifting gears from GTX Titan X to a deeper focus on deep learning
12:41PM EDT - While GTX Titan X lacks FP64 performance, NVIDIA still believes it will be very useful for compute users who need high FP32 performance such as neural networks and other examples of deep learning
12:40PM EDT - (The crowd takes a second to start applauding)
12:40PM EDT - GTX Titan X: $999
12:39PM EDT - Discussing how NVIDIA's cuDNN neural network middleware has allowed them to significantly reduce neural network training times. Down to 3 days on GTX Titan X
12:37PM EDT - Now: how GTX Titan X ties into deep learning
12:32PM EDT - Now running Epic's kite demo in real time on GTX Titan X
12:31PM EDT - Big Maxwell does not have Big Kepler's FP64 performance. Titan X offers high FP32, but only minimal FP64
12:30PM EDT - Titan X: 8 billion transistors. Riva 128 was 4 million
12:30PM EDT - Now rolling a teaser video
12:29PM EDT - Previously teased at GDC 2015: http://www.anandtech.com/show/9049/nvidia-announces-geforce-gtx-titan-x
12:29PM EDT - New GPU: GTX Titan X
12:29PM EDT - Jen-Hsun is continuing to address the developers, talking about how NVIDIA has created a top-to-bottom GPU compute ecosystem
12:26PM EDT - (Not clear if that's cumulative, or in the last year)
12:26PM EDT - NVIDIA has sold over 450K Tesla GPUs at this point
12:25PM EDT - Now recapping the first year of GTC, and the first year of NVIDIA's Tesla initiative
12:23PM EDT - But NVIDIA couldn't do this without their audience, the developers
12:22PM EDT - Tesla continues to do well for the company, with NVIDIA racking up even more supercomputer wins. It has taken longer than NVIDIA originally projected, but it looks like Tesla is finally taking off
12:21PM EDT - Pro Visualization: Another good year, with GRID being a big driver
12:20PM EDT - Automotive revenue has doubled year-over-year
12:19PM EDT - Cars: a subject near and dear to Jen-Hsun's heart. Automotive has been a bright spot for NVIDIA's Tegra business
12:18PM EDT - Also recapping the SHIELD Console announcement from GDC 2015
12:17PM EDT - Year in review: GeForce
12:17PM EDT - Jen-Hsun thinks we'll be talking about deep learning for the next decade, and thinks it will be very important to NVIDIA's future
12:16PM EDT - This continues NVIDIA's earlier advocacy of the field, which started with their CES 2015 Tegra X1 presentation
12:15PM EDT - Today's theme: deep learning
12:15PM EDT - 4 announcements: A new GPU, a very fast box, a roadmap reveal, and self-driving cars
12:14PM EDT - Jen-Hsun has taken the stage
12:14PM EDT - We're told to expect several surprises today. Meanwhile Tesla CEO Elon Musk is already confirmed to be one of today's guests, so that should be interesting
12:13PM EDT - Run time is expected to be 2 hours, with NVIDIA CEO Jen-Hsun Huang doing much of the talking
12:13PM EDT - We're here at the keynote presentation for NVIDIA's annual conference, the GPU Technology Conference
47 Comments
CajunArson - Tuesday, March 17, 2015 - link
"Deep Learning??"[Double Face Palm]
Well it's good to see that both Nvidia and AMD like to spew empty marketing buzzwords during their presentations.
nandnandnand - Tuesday, March 17, 2015 - link
Quadruple face palm?
maximumGPU - Tuesday, March 17, 2015 - link
I don't get it, these aren't exactly marketing buzzwords. Deep learning is a field of machine learning and has many applications. NVIDIA is just telling the world GPUs are better at it than CPUs, so you're better off buying them.
alfalfacat - Tuesday, March 17, 2015 - link
Exactly. Deep learning has been around in CS research for decades, and most machine learning techniques are, in some way or another, deep learning. It just means that it's a neural network with many layers. It certainly isn't an empty marketing buzzword.
grrrgrrr - Tuesday, March 17, 2015 - link
Deep learning is, if nothing else, the next generation of computing mechanism. We are shifting from Turing machines to recursive neural networks, baby.
Refuge - Tuesday, March 17, 2015 - link
Digital Jazz, baby!
name99 - Tuesday, March 17, 2015 - link
It's not really buzzwords. Take something like driving. To drive well (or even acceptably) you need some sort of "theory of mind" for the other cars on the road. For example, if you're driving down a street and see a car move in a certain way, you DON'T just react to how the car is moving; you understand (through your theory of mind) that the GOAL of the car you've seen is to try to parallel park in the spot ahead of you, AND that parallel parking is done in a certain way AND that lots of people struggle with this and have to take two or three tries.
And because you understand all that, you slow down in a certain way, and look out for certain behavior.
Likewise for your car AI. Even after the car has learned about physics and avoiding collisions and so on, to do a really good job, it needs this sort of similar theory of mind, to understand how people behave when they are trying to park, to understand the GOALs of other entities on the road, and how they will behave in the future (not just at this immediate second) based on those goals.
blanarahul - Tuesday, March 17, 2015 - link
I really wasn't expecting any FP64 performance anyway. There's only so much that can be done with 28nm.
axien86 - Tuesday, March 17, 2015 - link
Just 30% more performance than the GTX 980 and little or no FP64 capability means NVIDIA's Maxwells are starting to run on fumes for the next two years or so. By comparison, the new AMD 390X series with 4096 cores and 4096-bit HBM will clearly dominate PC gaming AND graphics for a long time.
JarredWalton - Tuesday, March 17, 2015 - link
I'd wager we'll see a compute-optimized GM200 from NVIDIA in Tesla/Quadro cards at some point; my bet is NVIDIA just doesn't want to cut into those margins. Both Quadro and Tesla are long overdue for an upgrade. Of course, maybe part of the "power optimizations" for Maxwell 2.0 involved removing all hardware-level support for FP64, and that's why we haven't seen a new Tesla/Quadro on GM2x0? I haven't seen anything explicitly stating that's the case, but it's certainly possible.