Problems additionVector with OpenCL

# Question

I want to learn OpenCL, so I followed a tutorial on simple vector addition: https://www.eriksmistad.no/getting-started-with-opencl-and-gpu-computing/. I am working on Ubuntu:

```
Distributor ID:  Ubuntu
Description:     Ubuntu 22.04.1 LTS
Release:         22.04
Codename:        jammy
```

I have an RTX 3080 Ti, which the machine recognizes; `nvidia-smi` reports:

```

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05    Driver Version: 525.85.05    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0  On |                  N/A |
|  0%   54C    P8    38W / 350W |    634MiB / 12288MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1766      G   /usr/lib/xorg/Xorg                312MiB |
|    0   N/A  N/A      2087      G   /usr/bin/gnome-shell              105MiB |
|    0   N/A  N/A      3343      G   ...5/usr/lib/firefox/firefox      183MiB |
+-----------------------------------------------------------------------------+

```

I installed the OpenCL headers with `apt-get install opencl-headers`, and I use CUDA for the OpenCL driver.

Here is the code:

```c
#include <stdio.h>
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define MAX_SOURCE_SIZE (0x100000)

int main(void) {
    // Create the two input vectors
    int i;
    const int LIST_SIZE = 10;
    int *A = (int*)malloc(sizeof(int)*LIST_SIZE);
    int *B = (int*)malloc(sizeof(int)*LIST_SIZE);
    for(i = 0; i < LIST_SIZE; i++) {
        A[i] = i;
        B[i] = i;
    }

    // Load the kernel source code into the array source_str
    FILE *fp;
    char *source_str;
    size_t source_size;
    char str_buffer[1024];

    fp = fopen("vector_add_kernel.cl", "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char*)malloc(MAX_SOURCE_SIZE);
    source_size = fread( source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose( fp );

    // Get platform and device information
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_GPU, 1,
            &device_id, &ret_num_devices);

    // Create an OpenCL context
    cl_context context = clCreateContext( NULL, 1, &device_id, NULL, NULL, &ret);

    // Create a command queue
    cl_command_queue command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    // Create memory buffers on the device for each vector
    cl_mem a_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(int), NULL, &ret);
    cl_mem b_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(int), NULL, &ret);
    cl_mem c_mem_obj = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
            LIST_SIZE * sizeof(int), NULL, &ret);

    // Copy the lists A and B to their respective memory buffers
    ret = clEnqueueWriteBuffer(command_queue, a_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(int), A, 0, NULL, NULL);
    ret = clEnqueueWriteBuffer(command_queue, b_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(int), B, 0, NULL, NULL);

    // Create a program from the kernel source
    cl_program program = clCreateProgramWithSource(context, 1,
            (const char **)&source_str, (const size_t *)&source_size, &ret);

    // Build the program
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    // Create the OpenCL kernel
    cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);

    // Set the arguments of the kernel
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
    ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);

    // Execute the OpenCL kernel on the list
    size_t global_item_size = LIST_SIZE; // Process the entire lists
    size_t local_item_size = 64; // Divide work items into groups of 64
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
            &global_item_size, &local_item_size, 0, NULL, NULL);

    // Read the memory buffer C on the device to the local variable C
    int *C = (int*)malloc(sizeof(int)*LIST_SIZE);
    ret = clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(int), C, 0, NULL, NULL);

    // Display the result to the screen
    for(i = 0; i < LIST_SIZE; i++)
        printf("%d + %d = %d\n", A[i], B[i], C[i]);

    // Clean up
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(a_mem_obj);
    ret = clReleaseMemObject(b_mem_obj);
    ret = clReleaseMemObject(c_mem_obj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);
    free(A);
    free(B);
    free(C);
    return 0;
}
```


And here is the kernel code:

```c
__kernel void vector_add(__global const int *A, __global const int *B, __global int *C) {
    // Get the index of the current element to be processed
    int i = get_global_id(0);
    // Do the operation
    C[i] = A[i] + B[i];
}
```


I compile with `gcc main.c -o vectorAddition -l OpenCL`, and running `vectorAddition` gives me this:

```
platform name : NVIDIA CUDA
platform vendor : NVIDIA Corporation
Device name : NVIDIA Corporation
0 + 0 = 0
1 + 1 = 0
2 + 2 = 0
3 + 3 = 0
4 + 4 = 0
5 + 5 = 0
6 + 6 = 0
7 + 7 = 0
8 + 8 = 0
9 + 9 = 0
```
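
As a side note, none of the `ret` status codes in the listing are checked, which hides the failure. A minimal error-checking sketch around the launch, reusing the variables from the listing above; the specific error named in the comment is what the chosen sizes would normally trigger, not something reported in the post:

```c
// Sketch: check whether the launch actually succeeded. With a global size of
// 10 and a local size of 64, clEnqueueNDRangeKernel is expected to fail with
// CL_INVALID_WORK_GROUP_SIZE (-54); the kernel then never runs and the values
// read back from c_mem_obj are simply whatever the buffer happened to contain.
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
        &global_item_size, &local_item_size, 0, NULL, NULL);
if (ret != CL_SUCCESS)
    fprintf(stderr, "clEnqueueNDRangeKernel failed: %d\n", ret);
```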


Thanks.
I have already read a post that is pretty much the same as mine: https://stackoverflow.com/questions/54606449/opencl-vector-addition-program
But I think my `clCreateBuffer` calls are fine.
I put these lines in my code to be sure my GPU is detected:

```c
// Get the name of the platform and device
ret = clGetPlatformInfo(0, CL_PLATFORM_NAME, sizeof(str_buffer), &str_buffer, NULL);
printf("platform name : %s\n", str_buffer);
ret = clGetPlatformInfo(0, CL_PLATFORM_VENDOR, sizeof(str_buffer), &str_buffer, NULL);
printf("platform vendor : %s\n", str_buffer);
ret = clGetDeviceInfo(0, CL_DEVICE_NAME, sizeof(str_buffer), &str_buffer, NULL);
printf("Device name : %s\n", str_buffer);
```
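
Note that these queries pass `0` instead of the queried handles, so the "Device name" line in the output above most likely just shows the previous string left in `str_buffer`. A sketch of the same queries reusing the `platform_id`, `device_id`, and `str_buffer` from the listing:

```c
// Sketch: query the platform and device actually selected above, using the
// handles returned by clGetPlatformIDs / clGetDeviceIDs.
ret = clGetPlatformInfo(platform_id, CL_PLATFORM_NAME,
        sizeof(str_buffer), str_buffer, NULL);
printf("platform name : %s\n", str_buffer);
ret = clGetPlatformInfo(platform_id, CL_PLATFORM_VENDOR,
        sizeof(str_buffer), str_buffer, NULL);
printf("platform vendor : %s\n", str_buffer);
ret = clGetDeviceInfo(device_id, CL_DEVICE_NAME,
        sizeof(str_buffer), str_buffer, NULL);
printf("Device name : %s\n", str_buffer);
```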
# Answer 1

**Score**: 0

If anyone has the same issue, I found the solution. The problem is that the author of the tutorial creates the work-groups with these lines:

```c
size_t global_item_size = LIST_SIZE; // Process the entire lists
size_t local_item_size = 64; // Divide work items into groups of 64
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_item_size, &local_item_size, 0, NULL, NULL);
```

You can get unexpected behavior if the global size (LIST_SIZE) is not a multiple of local_item_size, which is the case here (10 is not a multiple of 64), so the kernel is never executed correctly.

So you can either pass NULL instead of &local_item_size, or choose 64, 128, ... for LIST_SIZE.
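
A minimal sketch of the two fixes, assuming the host code otherwise stays as posted. The first variant just lets the runtime pick the work-group size; the second keeps a work-group size of 64 but rounds the global size up and passes the element count as an extra (new) kernel argument `n`, with a matching bounds check in the kernel:

```c
// Fix 1: pass NULL as the local work size and let the runtime choose it.
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
        &global_item_size, NULL, 0, NULL, NULL);

// Fix 2: keep a work-group size of 64, round the global size up to a multiple
// of it, and tell the kernel how many elements are actually valid.
local_item_size = 64;
global_item_size = ((LIST_SIZE + local_item_size - 1) / local_item_size)
        * local_item_size;
cl_int n = LIST_SIZE;
ret = clSetKernelArg(kernel, 3, sizeof(cl_int), (void *)&n);
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
        &global_item_size, &local_item_size, 0, NULL, NULL);
```

```c
// Kernel for fix 2: the extra argument n is the number of valid elements, so
// the padded work-items beyond LIST_SIZE simply do nothing.
__kernel void vector_add(__global const int *A, __global const int *B,
                         __global int *C, const int n) {
    int i = get_global_id(0);
    if (i < n)
        C[i] = A[i] + B[i];
}
```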

