• VS + CUDA Environment Setup


    1. Install the NVIDIA CUDA driver.

    2. Install the CUDA Toolkit.

    3. Download the CUDA Samples into the NVIDIA Corporation folder.

    4. Add the following system environment variables:

    1. CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
    2. CUDA_SDK_PATH=C:\ProgramData\NVIDIA Corporation\CUDA Samples\v11.6
    3. CUDA_BIN_PATH=%CUDA_PATH%\bin
    4. CUDA_LIB_PATH=%CUDA_PATH%\lib\x64
    5. CUDA_SDK_BIN_PATH=%CUDA_SDK_PATH%\bin\win64
    6. CUDA_SDK_LIB_PATH=%CUDA_SDK_PATH%\common\lib\x64

    5. Add the following entries to the system Path variable:

    1. %CUDA_BIN_PATH%
    2. %CUDA_LIB_PATH%
    3. %CUDA_SDK_BIN_PATH%
    4. %CUDA_SDK_LIB_PATH%
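To confirm the variables above resolved correctly, one way is to open a new Command Prompt (so the updated environment is picked up) and run something like the following. This is a Windows cmd fragment; the paths it prints assume the v11.6 defaults shown above:

```shell
:: Print the resolved CUDA environment variables (Windows cmd)
echo %CUDA_PATH%
echo %CUDA_BIN_PATH%
echo %CUDA_LIB_PATH%

:: Verify the CUDA compiler is reachable via Path and report its version
nvcc --version
```

If `nvcc --version` prints the toolkit release rather than "not recognized", the Toolkit install and Path entries are in order.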

    6. Copy all files from the CUDA folder C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\visual_studio_integration\MSBuildExtensions (substitute your installed version for v11.6) into D:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Microsoft\VC\v160\BuildCustomizations.

    7. In VS, create a new project. Right-click the project, choose Build Dependencies → Build Customizations, and check CUDA in the Build Customizations dialog.

    8. Right-click the newly created source file, choose Properties → General → Item Type, and set the item type to CUDA C/C++.

    9. Open the project properties: Project → Properties → Configuration Properties → VC++ Directories → Include Directories, and add the include directory $(CUDA_PATH)\include.

    10. In the project properties, under VC++ Directories → Library Directories, add the library directory $(CUDA_PATH)\lib\x64.

    11. Under Configuration Properties → Linker → Input → Additional Dependencies, add:

    1. cublas.lib
    2. cuda.lib
    3. cudadevrt.lib
    4. cudart.lib
    5. cudart_static.lib
    6. OpenCL.lib
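Of the libraries above, only the runtime (cudart.lib, or cudart_static.lib for static linking; link one or the other, not both) is needed by the test code below. cublas.lib is only needed if you call cuBLAS. As a hedged sketch of a call that exercises that linkage, a minimal SAXPY (y = α·x + y) through cuBLAS might look like this (error checking omitted for brevity; assumes the cuBLAS headers shipped with your toolkit version):

```cuda
// Minimal cuBLAS SAXPY sketch: computes y = 2*x + y on the GPU.
// Requires cublas.lib and cudart.lib from the dependency list above.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const int n = 4;
    const float alpha = 2.0f;
    float host_x[n] = { 1, 2, 3, 4 };
    float host_y[n] = { 10, 20, 30, 40 };

    // Allocate device vectors and copy the inputs over
    float *dev_x, *dev_y;
    cudaMalloc((void**)&dev_x, n * sizeof(float));
    cudaMalloc((void**)&dev_y, n * sizeof(float));
    cudaMemcpy(dev_x, host_x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_y, host_y, n * sizeof(float), cudaMemcpyHostToDevice);

    // y = alpha * x + y, performed on the device by cuBLAS
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, dev_x, 1, dev_y, 1);
    cublasDestroy(handle);

    cudaMemcpy(host_y, dev_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("%g ", host_y[i]);   // expected: 12 24 36 48
    printf("\n");

    cudaFree(dev_x);
    cudaFree(dev_y);
    return 0;
}
```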

    12. Test code:

    #include "cuda_runtime.h"
    #include "device_launch_parameters.h"
    #include <cstdio>
    #include <cstdlib>
    #include <iostream>
    using namespace std;

    __global__ void addKernel(int* const c, const int* const b, const int* const a)
    {
        int i = threadIdx.x;
        c[i] = a[i] + b[i];
    }

    int main()
    {
        constexpr size_t length = 6;
        int host_a[length] = { 1, 2, 3, 4, 5, 6 };
        int host_b[length] = { 10, 20, 30, 40, 50, 60 };
        int host_c[length];

        // Allocate device memory for the three vectors
        int *dev_a, *dev_b, *dev_c;
        cudaMalloc((void**)&dev_c, length * sizeof(int));
        cudaMalloc((void**)&dev_a, length * sizeof(int));
        cudaMalloc((void**)&dev_b, length * sizeof(int));

        // Copy the input data from host to device
        cudaMemcpy(dev_a, host_a, length * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dev_b, host_b, length * sizeof(int), cudaMemcpyHostToDevice);

        // Launch the kernel on the GPU; each thread computes one element
        addKernel<<<1, length>>>(dev_c, dev_b, dev_a);

        // Copy the result back from device to host
        cudaMemcpy(host_c, dev_c, length * sizeof(int), cudaMemcpyDeviceToHost);

        // Free device memory
        cudaFree(dev_a);
        cudaFree(dev_b);
        cudaFree(dev_c);

        for (size_t i = 0; i < length; ++i)
            cout << host_c[i] << " ";
        cout << endl;

        system("pause");
        return 0;
    }
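The test code above ignores the status codes the CUDA runtime returns, so a failed allocation or launch would go unnoticed. A common pattern is to wrap each call in a checking macro and to call cudaGetLastError()/cudaDeviceSynchronize() after a kernel launch. A hedged sketch (the CUDA_CHECK macro name is our own, not part of the CUDA API):

```cuda
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <cstdio>
#include <cstdlib>

// Illustrative helper macro: abort with file/line info on any CUDA error.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err), __FILE__, __LINE__);     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

__global__ void addKernel(int* c, const int* b, const int* a)
{
    int i = threadIdx.x;
    c[i] = a[i] + b[i];
}

int main()
{
    const int n = 6;
    int host_a[n] = { 1, 2, 3, 4, 5, 6 };
    int host_b[n] = { 10, 20, 30, 40, 50, 60 };
    int host_c[n];

    int *dev_a, *dev_b, *dev_c;
    CUDA_CHECK(cudaMalloc(&dev_a, n * sizeof(int)));
    CUDA_CHECK(cudaMalloc(&dev_b, n * sizeof(int)));
    CUDA_CHECK(cudaMalloc(&dev_c, n * sizeof(int)));
    CUDA_CHECK(cudaMemcpy(dev_a, host_a, n * sizeof(int), cudaMemcpyHostToDevice));
    CUDA_CHECK(cudaMemcpy(dev_b, host_b, n * sizeof(int), cudaMemcpyHostToDevice));

    addKernel<<<1, n>>>(dev_c, dev_b, dev_a);
    CUDA_CHECK(cudaGetLastError());        // catches launch-configuration errors
    CUDA_CHECK(cudaDeviceSynchronize());   // catches errors raised during execution

    CUDA_CHECK(cudaMemcpy(host_c, dev_c, n * sizeof(int), cudaMemcpyDeviceToHost));
    CUDA_CHECK(cudaFree(dev_a));
    CUDA_CHECK(cudaFree(dev_b));
    CUDA_CHECK(cudaFree(dev_c));

    for (int i = 0; i < n; ++i)
        printf("%d ", host_c[i]);   // expected: 11 22 33 44 55 66
    printf("\n");
    return 0;
}
```

With this in place, a launch with an invalid configuration (e.g. too many threads per block) terminates with a readable message instead of silently producing garbage output.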

  • Original article: https://blog.csdn.net/qq_38697681/article/details/126413779