In this post, I’ll introduce warp-aggregated atomics, a useful technique to improve performance when many threads atomically add to a single counter. In warp aggregation, the threads of a warp first compute a total increment among themselves, and then elect a single thread to atomically add the increment to a global counter. This aggregation reduces the number of atomics performed by up to the number of threads in a warp (up to 32x on current GPUs), and can dramatically improve performance. Moreover, in many typical cases, you can implement warp aggregation as a drop-in replacement for standard atomic operations, so it is useful as a simple way to improve performance of complex applications.

Problem: Filtering by a Predicate

Let’s consider the following problem, which we call filtering: we have a source array, src, containing n elements, and a predicate, and we need to copy all elements of src satisfying the predicate into the destination array, dst. For the sake of simplicity, assume that dst has length of at least n and that the order of elements in the dst array does not matter. For our example, we assume that the array elements are integers, and the predicate is true if and only if the element is positive. Here is a sample CPU implementation of filtering.

int filter(int *dst, const int *src, int n) {
  int nres = 0;
  for (int i = 0; i < n; i++)
    if (src[i] > 0)
      dst[nres++] = src[i];
  // return the number of elements copied
  return nres;
}

Filtering, also known as stream compaction, is a common operation, and it is a part of the standard libraries of many programming languages, where it goes under a variety of names, including grep, copy_if, select, FindAll and so on. It is also very often implemented simply as a loop, as it may be very tightly integrated with the surrounding code.
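
For reference, a minimal sketch of the same filter expressed with the C++ standard library's copy_if might look like the following; the lambda predicate and the function name filter_std are illustrative assumptions, not code from the original post.

#include <algorithm>

// Sequential filter expressed with std::copy_if; returns the number of
// elements copied to dst.
int filter_std(int *dst, const int *src, int n) {
  // copy_if returns an iterator one past the last element written to dst
  int *end = std::copy_if(src, src + n, dst, [](int x) { return x > 0; });
  return (int)(end - dst);
}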

Solutions with Global and Shared Memory

Now, what if we want to implement filtering on a GPU, and process the elements of the array src in parallel? A straightforward approach is to use a single global counter and atomically increment it for each new element written into the dst array. A GPU implementation of this may look as follows.

__global__
void filter_k(int *dst, int *nres, const int *src, int n) {
  int i = threadIdx.x + blockIdx.x * blockDim.x;
  if(i < n && src[i] > 0)
    dst[atomicAdd(nres, 1)] = src[i];
}

The main problem with this implementation is that all threads in the grid that read positive elements from src increment a single counter, nres. Depending on the number of positive elements, this may be a very large number of threads. Therefore, the degree of collisions for atomicAdd() is high, which limits performance. You can see this in Figure 1, which plots the kernel bandwidth (counting both reads and writes, but not atomics) achieved on a Kepler K40 GPU when processing 16 Mi (2²⁴) elements.

Figure 1: Performance of filtering with global atomics on Kepler K40 GPU.

The bandwidth is inversely proportional to the number of atomics executed, and thus to the fraction of positive elements in the array. While performance is acceptable (40 GiB/s) for a 5% fraction, it drops drastically when more elements pass the filter, to just 7 GiB/s for a 50% fraction. Atomic operations are clearly a bottleneck, and need to be removed or reduced to increase application performance.

One way to improve filtering performance is to use shared memory atomics. This increases the speed of each operation, and reduces the degree of collisions, as the counter is only shared between threads in a single block. With this approach, we only need one global atomicAdd() per thread block. Here is a kernel implemented with this approach.

__global__
void filter_shared_k(int *dst, int *nres, const int* src, int n) {
  __shared__ int l_n;
  int i = blockIdx.x * (NPER_THREAD * BS) + threadIdx.x;

  for (int iter = 0; iter < NPER_THREAD; iter++) {
    // zero the counter
    if (threadIdx.x == 0)
      l_n = 0;
    __syncthreads();

    // get the value, evaluate the predicate, and
    // increment the counter if needed
    int d, pos;
    if(i < n) {
      d = src[i];
      if(d > 0)
        pos = atomicAdd(&l_n, 1);
    }
    __syncthreads();

    // leader increments the global counter
    if(threadIdx.x == 0)
      l_n = atomicAdd(nres, l_n);
    __syncthreads();

    // threads with true predicates write their elements
    if(i < n && d > 0) {
      pos += l_n; // increment local pos by global counter
      dst[pos] = d;
    }
    __syncthreads();

    i += BS;
  }
}

Another approach is to first use a parallel prefix sum to compute the output index of each element. Thrust’s copy_if() function uses an optimized version of this approach. Performance of both approaches for Kepler K40 is presented in Figure 2. Though shared memory atomics improve filtering performance, the speedup stays within 2-3x of the original approach. Atomics are still a bottleneck, because the total number of operations hasn’t changed. Thrust outperforms both atomic approaches for high filtering fractions, but incurs large upfront costs that are not amortized for small filtering fractions.

It is important to note that the comparison to Thrust is not apples-to-apples, because Thrust implements a stable filter: it preserves the relative order of the input elements in the output. This stability follows from its prefix-sum implementation, but it also makes the operation more expensive. If we don’t need a stable filter, then a purely atomic approach is simpler and performs less work.
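
As a point of comparison, a minimal sketch of the Thrust version might look like the following; the is_positive functor, the function name filter_thrust, and the use of raw device pointers are illustrative assumptions, not code from the original post.

#include <thrust/copy.h>
#include <thrust/execution_policy.h>

// Predicate functor usable on the device.
struct is_positive {
  __host__ __device__ bool operator()(int x) const { return x > 0; }
};

// d_src and d_dst are device pointers; returns the number of elements kept.
// Note that thrust::copy_if is stable: it preserves the input order.
int filter_thrust(int *d_dst, const int *d_src, int n) {
  int *d_end = thrust::copy_if(thrust::device, d_src, d_src + n, d_dst,
                               is_positive());
  return (int)(d_end - d_dst);
}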

Figure 2: Performance of filtering with shared memory atomics on Kepler K40 GPU.

Warp-Aggregated Atomics

Warp aggregation is the process of combining atomic operations from multiple threads in a warp into a single atomic. This approach is orthogonal to using shared memory: the type of the atomics remains the same, but we use fewer of them. With warp aggregation, we replace atomic operations with the following steps.

  1. Threads in the warp elect a leader thread.
  2. Threads in the warp compute the total atomic increment for the warp.
  3. The leader thread performs an atomic add to compute the offset for the warp.
  4. The leader thread broadcasts the offset to all other threads in the warp.
  5. Each thread adds its own index within the warp to the warp offset to get its position in the output array.

After that, each thread proceeds as in the original code, and writes its value to its position in the dst array. Let’s now consider each of the steps in detail.

Step 1: Leader Election

First, we need a way to identify threads, or lanes in a warp. For the sake of simplicity, we assume one-dimensional thread blocks (or at least that all threads in a warp have the same threadIdx.y and threadIdx.z). The lane identifier can then be computed as follows.

#define WARP_SZ 32
__device__
inline int lane_id(void) { return threadIdx.x % WARP_SZ; }

The simplest approach is to always elect lane 0. In our example, this requires code reorganization so that lane 0 is always active during the warp-aggregated atomic. While for filtering code this reorganization is simple, in other applications, where the atomic counter increment may be performed inside a deep nest of conditionals, this can be much more complicated. To create a drop-in replacement for atomicAdd(), we need an alternative approach to leader election. Specifically, we elect the lowest active lane. This can be done using two CUDA intrinsics. __ballot(int v), available starting from Compute Capability 2.0 (sm_20), returns a mask in which each bit indicates whether the predicate v is true for the corresponding lane in the warp (the bit is unset for inactive lanes). __ffs(int v) (“find first set bit”) returns the 1-based position of the lowest bit that is set in v, or 0 if no bit is set. The code for leader selection thus looks as follows.

int mask = __ballot(1);  // mask of active lanes
int leader = __ffs(mask) - 1; // -1 for 0-based indexing
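
As a concrete example, if only lanes 3, 5, and 12 of a warp are active, __ballot(1) returns a mask with bits 3, 5, and 12 set, __ffs() of that mask returns 4 (the 1-based position of bit 3), and lane 3 becomes the leader.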

Step 2: Computing the Total Increment

For our filtering example, each thread with a true predicate increments the counter by 1. The total increment for the warp is equal to the number of active lanes which hold true predicates, which we can compute by counting the number of bits set in the mask returned by __ballot(1). For this, we use the __popc(int v) intrinsic, which returns the number of bits set in the binary representation of integer v. (We don’t consider here the case of increments that vary across lanes). The following code computes the total increment.

int change = __popc(mask);

Step 3: Performing the Atomic

Only the leader lane performs the atomic operation, using the following code.

int res;
if(lane_id() == leader)
  res = atomicAdd(ctr, change); // ctr is the pointer to the counter

Step 4: Broadcasting the Result

In this step, the leader thread broadcasts the result of the atomicAdd() to the other lanes in the warp. On Kepler and later GPUs (CC 3.0 and above), we can implement the broadcast efficiently using the warp shuffle intrinsic. The code looks as follows.

__device__ int warp_bcast(int v, int leader) { return __shfl(v, leader); }

We use warp_bcast() to share the atomic result with all threads in the warp.

res = warp_bcast(res, leader);

Step 5: Computing the Index for Each Lane

Finally, we must compute the output index for each lane, by adding the broadcast counter value for the warp to the lane’s intra-warp offset. We define the intra-warp offset as the number of active lanes with IDs lower than current lane ID. This is the count of bits set in the active lanes mask corresponding to lanes with ID lower than the current lane ID. We can compute this by applying a mask that selects only lower lane IDs, and then applying the __popc() intrinsic to the result. In code, that looks like the following.

res + __popc(mask & ((1 << lane_id()) - 1))
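
Continuing the earlier example, if lanes 3, 5, and 12 are active, then for lane 12 the expression mask & ((1 << 12) - 1) keeps only bits 3 and 5, so __popc() yields 2 and lane 12 writes to position res + 2, while lanes 3 and 5 write to positions res and res + 1, respectively.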

We can now join the pieces of the code defined above to get the full warp-aggregated version of the increment function.

// warp-aggregated atomic increment
__device__
int atomicAggInc(int *ctr) {
  int mask = __ballot(1);
  // select the leader
  int leader = __ffs(mask) - 1;
  // leader does the update
  int res;
  if(lane_id() == leader)
    res = atomicAdd(ctr, __popc(mask));
  // broadcast result
  res = warp_bcast(res, leader);
  // each thread computes its own value
  return res + __popc(mask & ((1 << lane_id()) - 1));
} // atomicAggInc

Performance Comparison

The warp-aggregated atomic increment function is a drop-in replacement for atomicAdd(ctr, 1) where ctr is the same across all threads of a warp. Therefore, we can rewrite GPU filtering using atomicAggInc() as follows.

__global__ void filter_k(int *dst, int *nres, const int *src, int n) {
  int i = threadIdx.x + blockIdx.x * blockDim.x;
  if(i >= n)
    return;
  if(src[i] > 0)
    dst[atomicAggInc(nres)] = src[i];
}
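
For completeness, here is a minimal host-side driver sketch for this kernel; the block size, the function name run_filter, and the omission of error checking are our assumptions, not part of the original post.

// Launches filter_k (sketch) and returns the number of filtered elements.
// d_dst and d_src are device pointers; n is the number of input elements.
int run_filter(int *d_dst, const int *d_src, int n) {
  int *d_nres;
  cudaMalloc((void **)&d_nres, sizeof(int));
  cudaMemset(d_nres, 0, sizeof(int));   // the counter must start at zero

  const int block = 256;
  const int grid = (n + block - 1) / block;
  filter_k<<<grid, block>>>(d_dst, d_nres, d_src, n);

  int nres;
  cudaMemcpy(&nres, d_nres, sizeof(int), cudaMemcpyDeviceToHost);
  cudaFree(d_nres);
  return nres;
}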

Note that though we defined warp aggregation with global atomics in mind, nothing precludes doing the same for shared memory atomics. In fact, the atomicAggInc(int *ctr) function defined above works if ctr is a pointer to shared memory. Warp aggregation can thus also be used to accelerate filtering with shared memory. Figure 3 shows a performance comparison of different variants of filtering with and without warp aggregation for a Kepler GPU, and Figure 4 shows the results for a Maxwell GPU.
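
As a sketch of what the shared memory variant might look like (our illustration, not code from the original post), the per-thread shared memory increment can simply call atomicAggInc() on the shared counter, so that each warp issues one shared memory atomic and each block still issues one global atomic.

// Sketch: per-block shared counter whose increments are warp-aggregated.
__global__
void filter_shared_agg_k(int *dst, int *nres, const int *src, int n) {
  __shared__ int l_n;               // number of elements kept by this block
  int i = threadIdx.x + blockIdx.x * blockDim.x;

  if(threadIdx.x == 0)
    l_n = 0;
  __syncthreads();

  // each thread with a true predicate gets a position within the block
  int d = 0, pos = 0;
  bool keep = false;
  if(i < n) {
    d = src[i];
    keep = (d > 0);
  }
  if(keep)
    pos = atomicAggInc(&l_n);       // shared memory, warp-aggregated
  __syncthreads();

  // one global atomic per block reserves space in dst
  if(threadIdx.x == 0)
    l_n = atomicAdd(nres, l_n);
  __syncthreads();

  if(keep)
    dst[l_n + pos] = d;
}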

Figure 3: Performance of different filtering variants on Tesla K40 (Kepler) GPU (CUDA 6.5, driver version 340.29).

Figure 4: Performance of different filtering variants on GeForce GTX 750 Ti (Maxwell) GPU (CUDA 6.5, driver version 340.29).

For Kepler GPUs, the version with warp-aggregated global atomics is the clear winner. It always provides more than 60 GiB/s bandwidth, and the bandwidth actually increases with the fraction of elements that successfully pass through the filter. This also indicates that atomics are no longer a significant bottleneck. Compared to plain global atomics, performance improves by up to 20x. Performance of a simple copy operation on the same GPU is 180 GiB/s. We can thus say that the performance of filtering with warp-aggregated atomics is comparable (in fact, within a factor of 2x) to that of a simple copy operation. This also means that filtering can now be used in performance-critical portions of the code. Also note that shared memory atomics with warp aggregation are actually slower than warp-aggregated global atomics. This indicates that warp aggregation already does a very good job, and that adding shared memory brings no benefit, only additional overhead.

For Maxwell, global aggregated atomics also provide the best performance. However, we have only tested on a small Maxwell GPU so far, with just 5 SMs. Having more SMs could increase the performance of the shared memory versions. We hope to test on a GM204 GPU soon. Another thing to note about Maxwell is that it is the first GPU architecture to implement fast shared memory atomics; Kepler and earlier GPUs used an expensive lock/update/unlock sequence for shared memory atomics. On Maxwell, you can see that for a low collision fraction, performance of the non-aggregated shared memory version is good, but it decreases with increasing collision rate. Therefore, warp aggregation is still valuable for our example.

For both GPU generations, warp aggregation provides performance improvements for both global and shared memory atomics, and the warp-aggregated versions achieve the best overall performance. Additionally, the achieved bandwidth increases with the fraction of filtered elements (because more elements are written), whereas without warp aggregation it decreases.

Conclusion

Warp aggregation of atomics is a useful technique to improve performance of applications that perform many operations on a small number of counters. In this post we applied warp aggregation to filtering, and obtained more than an order-of-magnitude performance improvement. An important point is that the atomic function with warp aggregation, atomicAggInc(), is logically separate from the rest of the application, and can be used as a drop-in replacement for a simple atomic addition when threads in a warp increment the same counter by the same value.

Warp-aggregated atomics are by no means limited to filtering; you can use the technique in many other applications that make use of atomic operations.
