CUDA Pro Tip: Write Flexible Kernels with Grid-Stride Loops
https://devblogs.nvidia.com/cuda-pro-tip-write-flexible-kernels-grid-stride-loops/
One of the most common tasks in CUDA programming is to parallelize a loop using a kernel. As an example, let’s use our old friend SAXPY. Here’s the basic sequential implementation, which uses a for loop. To efficiently parallelize this, we need to launch enough threads to fully utilize the GPU.
void saxpy(int n, float a, float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
Common CUDA guidance is to launch one thread per data element, which means to parallelize the above SAXPY loop we write a kernel that assumes we have enough threads to more than cover the array size.
__global__
void saxpy(int n, float a, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
I’ll refer to this style of kernel as a monolithic kernel, because it assumes a single large grid of threads to process the entire array in one pass. You might use the following code to launch the saxpy kernel to process one million elements.
// Perform SAXPY on 1M elements
saxpy<<<4096, 256>>>(1<<20, 2.0, x, y);
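In practice the launch configuration is usually computed from the problem size rather than hard-coded. A minimal sketch (the block size of 256 is an arbitrary but common choice):

int n = 1 << 20;                                 // 1M elements
int blockSize = 256;                             // threads per block
int gridSize = (n + blockSize - 1) / blockSize;  // round up so every element is covered
saxpy<<<gridSize, blockSize>>>(n, 2.0f, x, y);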
Instead of completely eliminating the loop when parallelizing the computation, I recommend using a grid-stride loop, as in the following kernel.
__global__
void saxpy(int n, float a, float *x, float *y)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x)  // stride = total number of threads in the grid
    {
        y[i] = a * x[i] + y[i];
    }
}
Rather than assume that the thread grid is large enough to cover the entire data array, this kernel loops over the data array one grid-size at a time.
Notice that the stride of the loop is blockDim.x * gridDim.x, which is the total number of threads in the grid. So if there are 1280 threads in the grid, thread 0 will compute elements 0, 1280, 2560, etc. This is why I call this a grid-stride loop. By using a loop with stride equal to the grid size, we ensure that all addressing within warps is unit-stride, so we get maximum memory coalescing, just as in the monolithic version.
When launched with a grid large enough to cover all iterations of the loop, the grid-stride loop should have essentially the same instruction cost as the if statement in the monolithic kernel, because the loop increment will only be evaluated when the loop condition evaluates to true.
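For example (a hedged sketch reusing the illustrative block size of 256 from above), launching the grid-stride kernel with a grid that covers the whole array makes each thread execute the loop body at most once, matching the behavior of the monolithic version:

// Grid covers all n elements, so each thread's loop body runs at most once.
saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);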
There are several benefits to using a grid-stride loop.
1. Scalability and thread reuse. By using a loop, you can support any problem size even if it exceeds the largest grid size your CUDA device supports. Moreover, you can limit the number of blocks you use to tune performance. For example, it’s often useful to launch a number of blocks that is a multiple of the number of multiprocessors on the device, to balance utilization. As an example, we might launch the loop version of the kernel like this.
int numSMs;
cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, devId);
// Perform SAXPY on 1M elements
saxpy<<<32*numSMs, 256>>>(1<<20, 2.0, x, y);
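A minimal end-to-end host sketch putting this together, assuming the grid-stride saxpy kernel defined above (error checking omitted; the 32-blocks-per-SM factor is just the illustrative heuristic used in the launch above):

#include <cuda_runtime.h>
#include <cstdio>

int main(void)
{
    int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));    // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int devId = 0, numSMs;
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, devId);

    saxpy<<<32 * numSMs, 256>>>(n, 2.0f, x, y);  // grid-stride kernel handles any grid size
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 4.0)\n", y[0]);  // 2.0 * 1.0 + 2.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}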