I have a simple program that squares the elements of an array on the GPU. I implemented it once with the unrolling done as a loop inside the kernel, and once with the unrolling done through dynamic parallelism, expecting dynamic parallelism to reduce the execution time.

With loop unrolling

#include <stdio.h> 
#include <time.h> 
#include <cuda.h> 


// Kernel that executes on the CUDA device: each thread squares two 
// consecutive elements (the loop is the unrolling factor of 2) 
__global__ void first(float *arr, int N) 
{ 
    int idx = 2 * (blockIdx.x * blockDim.x + threadIdx.x); 
    for (int q = 0; q < 2; q++) 
    { 
        if (idx + q < N)   // bounds check: skip writes past the end of the array 
        { 
            arr[idx + q] = arr[idx + q] * arr[idx + q]; 
        } 
    } 
} 



// main routine that executes on the host 
int main(void) 
{ 
    clock_t start = clock(),diff; 
    float *a_h, *a_d; // Pointer to host & device arrays 
    const int N = 1000; // Number of elements in arrays 
    size_t size = N * sizeof(float); 
    a_h = (float *)malloc(size);  // Allocate array on host 
    cudaMalloc((void **) &a_d, size); // Allocate array on device 
    // Initialize host array and copy it to CUDA device 
    for (int i=0; i<N; i++) a_h[i] = (float)i; 
    cudaMemcpy(a_d, a_h, size, cudaMemcpyHostToDevice); 
    // Do calculation on device: 
    int block_size = 4; 
    //int n_blocks = N/block_size + (N%block_size == 0 ? 0:1); 
    first <<< 4, 128 >>> (a_d, N); 
    //cudaThreadSynchronize(); 
    // Retrieve result from device and store it in host array 
    cudaMemcpy(a_h, a_d, sizeof(float)*N, cudaMemcpyDeviceToHost); 
    // Print results 
    for (int i=0; i<N; i++) printf("%d %f\n", i, a_h[i]); 
    // Cleanup 
    free(a_h); cudaFree(a_d); 
    diff = clock() - start; 
    int msec = diff * 1000 / CLOCKS_PER_SEC; 
    printf("Time taken %d seconds %d milliseconds\n", msec / 1000, msec % 1000); 

} 

Then I realized that the computation done in the loop could instead be spread out using dynamic parallelism.

The unrolling with dynamic parallelism was implemented as follows.

With dynamic parallelism

#include <stdio.h> 
#include <time.h> 
#include <cuda.h> 
// Device-side kernel launches require compute capability 3.5+ and 
// relocatable device code, e.g. nvcc -arch=sm_35 -rdc=true -lcudadevrt 
__global__ void square(float *a, int N, int idx); 


// Kernel that executes on the CUDA device: each thread launches a child 
// grid of two threads (dynamic parallelism) to square its pair of elements 
__global__ void first(float *arr, int N) 
{ 
    int idx = 2 * (blockIdx.x * blockDim.x + threadIdx.x); 
    int n = N; 
    square <<< 1, 2 >>> (arr, n, idx); 
} 

// Child kernel: each of its two threads squares one element 
__global__ void square(float *a, int N, int idx) 
{ 
    int tdx = blockIdx.x * blockDim.x + threadIdx.x; 
    printf("%d\n", N);   // debug print from every child thread 
    if (idx + tdx < N)   // bounds check: skip writes past the end of the array 
    { 
        a[tdx + idx] = a[tdx + idx] * a[tdx + idx]; 
    } 
} 

// main routine that executes on the host 
int main(void) 
{ 
    clock_t start = clock(),diff; 
    float *a_h, *a_d; // Pointer to host & device arrays 
    const int N = 1000; // Number of elements in arrays 
    size_t size = N * sizeof(float); 
    a_h = (float *)malloc(size);  // Allocate array on host 
    cudaMalloc((void **) &a_d, size); // Allocate array on device 
    // Initialize host array and copy it to CUDA device 
    for (int i=0; i<N; i++) a_h[i] = (float)i; 
    cudaMemcpy(a_d, a_h, size, cudaMemcpyHostToDevice); 
    // Do calculation on device: 
    int block_size = 4; 
    //int n_blocks = N/block_size + (N%block_size == 0 ? 0:1); 
    first <<< 4, 128 >>> (a_d, N); 
    //cudaThreadSynchronize(); 
    // Retrieve result from device and store it in host array 
    cudaMemcpy(a_h, a_d, sizeof(float)*N, cudaMemcpyDeviceToHost); 
    // Print results 
    for (int i=0; i<N; i++) printf("%d %f\n", i, a_h[i]); 
    // Cleanup 
    free(a_h); cudaFree(a_d); 
    diff = clock() - start; 
    int msec = diff * 1000 / CLOCKS_PER_SEC; 
    printf("Time taken %d seconds %d milliseconds\n", msec / 1000, msec % 1000); 

} 

The version unrolled with dynamic parallelism takes more time to execute than the plain unrolled version. Aren't we supposed to improve the execution time with dynamic parallelism in a case like this?

Answer


Dynamic parallelism is mainly useful in cases where the parallelism really is dynamic. That is: cases in which you don't know how much parallelism you are going to need until you have done some computation. Rather than transferring data back to the host so that it can immediately be fed into parameterising another launch, you launch the next kernel from within the kernel. In this pattern, with the memcpys between kernel launches avoided, you will see speedups.
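A rough sketch of that pattern (hypothetical kernel names parent and child, and a hypothetical device-written work_count; this is not the code from the question) is shown below: the launch geometry is decided on the device, so no copy back to the host is needed before the second launch.

// Hypothetical child kernel: squares the first `count` elements. 
__global__ void child(float *data, int count) 
{ 
    int i = blockIdx.x * blockDim.x + threadIdx.x; 
    if (i < count) 
    { 
        data[i] = data[i] * data[i]; 
    } 
} 

// Hypothetical parent kernel: the amount of work is only known after 
// earlier device-side computation has filled *work_count, so the child 
// grid is sized and launched on the device instead of round-tripping 
// the count through the host. 
__global__ void parent(float *data, int *work_count) 
{ 
    if (blockIdx.x == 0 && threadIdx.x == 0) 
    { 
        int count = *work_count; 
        int threads = 256; 
        int blocks = (count + threads - 1) / threads; 
        child <<< blocks, threads >>> (data, count); 
    } 
} 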

In the example above, that is not the case. You could simply have launched twice as many threads from the host. There is nothing dynamic required, since there is no parallelism available that you did not already know about at the time of the first kernel launch.
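For instance, a flat version of the kernel in the question could look something like this sketch (hypothetical kernel name square_all): one thread per element, launched from the host with enough blocks to cover all N elements, with no loop and no child launch.

// Hypothetical flat kernel: one thread squares one element. 
__global__ void square_all(float *arr, int N) 
{ 
    int i = blockIdx.x * blockDim.x + threadIdx.x; 
    if (i < N) 
    { 
        arr[i] = arr[i] * arr[i]; 
    } 
} 

// Host-side launch covering all N elements: 
// int threads = 256; 
// int blocks = (N + threads - 1) / threads; 
// square_all <<< blocks, threads >>> (a_d, N); 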

Furthermore, the performance requirements for kernels launched with dynamic parallelism are similar to those for kernels launched from the host: you have to launch a reasonable amount of work, or the launch latency will dominate your computation time.
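One way to see where the time goes is to time only the kernel, rather than the whole program as the clock()-based measurement in the question does. A minimal sketch with CUDA events, reusing the variable names from the question's host code:

// Time only the kernel launch and execution with CUDA events, so that 
// allocation, memcpy and the result printf loop are excluded. 
cudaEvent_t start_ev, stop_ev; 
cudaEventCreate(&start_ev); 
cudaEventCreate(&stop_ev); 

cudaEventRecord(start_ev); 
first <<< 4, 128 >>> (a_d, N); 
cudaEventRecord(stop_ev); 
cudaEventSynchronize(stop_ev); 

float ms = 0.0f; 
cudaEventElapsedTime(&ms, start_ev, stop_ev); 
printf("Kernel time: %f ms\n", ms); 

cudaEventDestroy(start_ev); 
cudaEventDestroy(stop_ev); 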


You might want to add that, in this case, the overhead of launching a child kernel just to execute two child threads completely masks the computation time. For the reasons you set out in your answer, from a computational point of view launching a new kernel here is just a disruption. – JackOLantern 2014-09-23 21:34:41