Nvidia's Pascal to use stacked memory, proprietary NVLink interconnect

by Scott Wasson — 6:50 PM on March 25, 2014

GTC — Today during his opening keynote at the Nvidia GPU Technology Conference, CEO Jen-Hsun Huang offered an update to Nvidia's GPU roadmap. The big reveal was a GPU code-named Pascal, which in the firm's plans sits a generation beyond the still-being-introduced Maxwell architecture.

Pascal's primary innovation will be the integration of stacked "3D" memory situated on the same package substrate as the GPU, providing substantially higher bandwidth than traditional DRAMs mounted on the same circuit board.

If all of this info sounds more than a little familiar, perhaps you'll recall that Nvidia also announced a future, post-Maxwell GPU at GTC 2013. It was code-named Volta and was also slated to feature stacked memory on package. So what happened?

Turns out Volta remains on the roadmap, but it comes after Pascal and will evidently include more extensive changes to Nvidia's core GPU architecture.

Nvidia has inserted Pascal into its plans in order to take advantage of stacked memory and other innovations sooner. (I'm not sure we can say that Volta has been delayed, since the firm never pinned down that GPU's projected release date.) That makes Pascal intriguing even though its SM will be based on a modified version of the one from Maxwell. Memory bandwidth has long been one of the primary constraints for GPU performance, and bringing DRAM onto the same substrate opens up the possibility of substantial performance gains.

The picture above includes a single benchmark result, as projected for Pascal, in the bandwidth-intensive SGEMM matrix multiplication test. As you can see, Pascal nearly triples the performance of today's Kepler GPUs and nearly doubles the throughput of the upcoming Maxwell chips. This comparison is made at the same power level for each GPU, so Pascal should also represent a nice increase in energy efficiency.
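For readers unfamiliar with the benchmark, SGEMM is the single-precision general matrix multiply from BLAS: C ← αAB + βC. The sketch below (plain NumPy, with illustrative matrix sizes chosen for this example rather than anything Nvidia disclosed) shows the operation and why its performance ties to both arithmetic throughput and bytes moved:

```python
import numpy as np

# SGEMM: single-precision general matrix multiply, C <- alpha*A@B + beta*C.
# Matrix sizes here are illustrative, not Nvidia's benchmark configuration.
M = N = K = 1024
alpha, beta = 1.0, 1.0

A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)
C = np.zeros((M, N), dtype=np.float32)

C = alpha * (A @ B) + beta * C

# Work and memory traffic for one pass:
flops = 2 * M * N * K                  # one multiply + one add per output term
bytes_moved = 4 * (M*K + K*N + 2*M*N)  # float32: read A, B, C; write C
intensity = flops / bytes_moved        # FLOPs per byte of DRAM traffic
print(intensity)
```

The higher a GPU's memory bandwidth relative to its compute rate, the closer a kernel like this can run to its theoretical FLOPS peak, which is why on-package DRAM shows up so directly in an SGEMM projection.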

Compared to today's GPU memory subsystems, Huang claimed Pascal's 3D memory will offer "many times" the bandwidth, two and a half times the capacity, and four times the energy efficiency. The Pascal chip itself will not participate in the 3D stacking, but it will have DRAM stacks situated around it on the same package. Those DRAM stacks will be of the HBM type being developed at Hynix. You can see the DRAM stacks cuddled up next to the GPU in the picture of the Pascal test module below.

The other item of note in Pascal's feature set is a new, proprietary chip-to-chip interconnect known as NVLink. This interconnect is a higher-bandwidth alternative to PCI Express 3.0 that Nvidia claims will be substantially more power-efficient. In many ways, NVLink looks very similar to PCI Express. It uses differential signaling with an embedded clock, and it will support the PCI Express programming model, including "DMA+", so driver support should be straightforward. Nvidia expects NVLink to act as a GPU-to-GPU connection and, in some cases, as a GPU-to-CPU link. To that end, the second generation of NVLink will be capable of maintaining cache coherency between multiple chips.

NVLink was created chiefly for use in supercomputing clusters and other enterprise-class deployments where many GPUs may be installed into a single server. Interestingly, as part of today's announcements, IBM revealed that it will incorporate NVLink into future CPUs. We don't have any details yet about which CPUs or what proportion of the Power CPU lineup will use NVLink, though.

Huang claimed NVLink will offer five to 12 times the bandwidth of PCIe. That may be a bit of CEO math. The first generation of NVLink will feature eight lanes per block or "brick" of connectivity. Each of those lanes will be capable of transporting 20Gbps of data, so the aggregate bandwidth of a brick should be 20GB/s. By contrast, PCIe 3.0 transfers 8Gbps per lane and 8GB/s across eight lanes, and the still-in-the-works PCIe 4.0 standard is targeting double that rate.
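A quick back-of-the-envelope check on those per-"brick" figures (this is our interpretation of the quoted numbers, not official Nvidia math):

```python
# First-gen NVLink: 8 lanes per "brick," 20 Gbps per lane.
nvlink_lanes = 8
nvlink_gbps_per_lane = 20
nvlink_brick_gbs = nvlink_lanes * nvlink_gbps_per_lane / 8  # bits -> bytes
print(nvlink_brick_gbs)        # 20.0 GB/s per brick

# PCIe 3.0: 8 GT/s per lane, roughly 8 Gbps usable thanks to 128b/130b encoding.
pcie3_gbps_per_lane = 8
pcie3_x8_gbs = 8 * pcie3_gbps_per_lane / 8
print(pcie3_x8_gbs)            # 8.0 GB/s for a x8 link

print(nvlink_brick_gbs / pcie3_x8_gbs)  # 2.5x per brick
```

A single brick works out to 2.5 times a PCIe 3.0 x8 link, so the "five to 12 times" claim presumably assumes a GPU exposing multiple bricks compared against a single PCIe link.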

NVLink apparently gets some of its added bandwidth by imposing stricter limits on trace lengths across the motherboard, and the company says it has made a "fundamental breakthrough" in energy efficiency, resulting from Nvidia's own research, that differentiates NVLink from PCIe. NVLink will not be an open standard, though, so we may not be seeing a public airing of the entire spec.

The module pictured above will be the basic building block of many solutions based on the Pascal GPU. Each module has two "bricks" of NVLink connectivity onboard, and the board will connect to the host system via a mezzanine-style NVLink connector. The combination of connector and NVLink protocol should allow for some nice, dense, and high-integrity server systems built around Nvidia GPUs—and it will also ensure that those systems can only play host to Nvidia silicon. This proprietary hook is surely another motivation for the creation of NVLink, at the end of the day.

Huang said he wants the Pascal module to be the future of not just supercomputers but all sorts of visual computing systems, including gaming PCs. Mezzanine-style modules do have size and signal-integrity advantages over traditional expansion cards with edge-based connectors. Another benefit is power delivery without auxiliary cables: Nvidia's current Tesla GPUs draw between 225W and 300W, and the firm apparently expects to supply that power solely through the module's mezzanine connection. We'll have to work to tease out exactly what Huang's statement means for future consumer PCs, but Nvidia admits it doesn't expect PCIe cards to go away any time soon.
