1. Install the NVIDIA graphics driver
First, check on the NVIDIA website whether your machine's GPU supports GPU (CUDA) computing.
The page is CUDA GPUs | NVIDIA Developer. If your GPU model appears in the list it can be used for GPU computing, and each device is listed together with its Compute Capability.

At the system level, check which graphics cards are currently installed (grep is case-sensitive, so use -i or filter on VGA):
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# lspci | grep -i nvidia
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# lspci | grep VGA
- 3b:00.0 VGA compatible controller: NVIDIA Corporation GV104 [GeForce GTX 1180] (rev a1)
- 5e:00.0 VGA compatible controller: NVIDIA Corporation GV104 [GeForce GTX 1180] (rev a1)
- 86:00.0 VGA compatible controller: NVIDIA Corporation GV104 [GeForce GTX 1180] (rev a1)
- af:00.0 VGA compatible controller: NVIDIA Corporation GV104 [GeForce GTX 1180] (rev a1)
On Ubuntu: once the card's capability is confirmed, install the matching driver on the system.
First, detect the NVIDIA graphics card and the recommended driver by running in a terminal:
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# ubuntu-drivers devices
- WARNING:root:_pkg_get_support nvidia-driver-530: package has invalid Support PBheader, cannot determine support level
- WARNING:root:_pkg_get_support nvidia-driver-515-server: package has invalid Support PBheader, cannot determine support level
- WARNING:root:_pkg_get_support nvidia-driver-525-server: package has invalid Support PBheader, cannot determine support level
- == /sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.0 ==
- modalias : pci:v000010DEd00001E87sv00001458sd000037A8bc03sc00i00
- vendor : NVIDIA Corporation
- driver : nvidia-driver-530 - distro non-free recommended
- driver : nvidia-driver-470-server - distro non-free
- driver : nvidia-driver-440 - third-party non-free
- driver : nvidia-driver-515 - third-party non-free
- driver : nvidia-driver-450-server - distro non-free
- driver : nvidia-driver-515-server - distro non-free
- driver : nvidia-driver-418-server - distro non-free
- driver : nvidia-driver-418 - third-party non-free
- driver : nvidia-driver-460 - third-party non-free
- driver : nvidia-driver-450 - third-party non-free
- driver : nvidia-driver-470 - third-party non-free
- driver : nvidia-driver-455 - third-party non-free
- driver : nvidia-driver-495 - third-party non-free
- driver : nvidia-driver-525 - third-party non-free
- driver : nvidia-driver-465 - third-party non-free
- driver : nvidia-driver-525-server - distro non-free
- driver : nvidia-driver-410 - third-party non-free
- driver : nvidia-driver-520 - third-party non-free
- driver : nvidia-driver-510 - third-party non-free
- driver : xserver-xorg-video-nouveau - distro free builtin
You can then install the recommended driver with:
(py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# ubuntu-drivers autoinstall
Alternatively, download the driver from the official site and install it manually; the required commands are shown on the download page:
NVIDIA GeForce Drivers | NVIDIA
After installation, reboot the system and then check in a terminal whether the driver was installed successfully:
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# nvidia-smi
- Fri Jul 12 15:43:58 2024
- +---------------------------------------------------------------------------------------+
- | NVIDIA-SMI 530.41.03 Driver Version: 530.41.03 CUDA Version: 12.1 |
- |-----------------------------------------+----------------------+----------------------+
- | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
- | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
- | | | MIG M. |
- |=========================================+======================+======================|
- | 0 NVIDIA GeForce RTX 2080 Off| 00000000:3B:00.0 Off | N/A |
- | 32% 41C P8 3W / 225W| 8MiB / 8192MiB | 0% Default |
- | | | N/A |
- +-----------------------------------------+----------------------+----------------------+
- | 1 NVIDIA GeForce RTX 2080 Off| 00000000:5E:00.0 Off | N/A |
- | 27% 41C P8 4W / 225W| 8MiB / 8192MiB | 0% Default |
- | | | N/A |
- +-----------------------------------------+----------------------+----------------------+
- | 2 NVIDIA GeForce RTX 2080 Off| 00000000:86:00.0 Off | N/A |
- | 27% 36C P8 1W / 225W| 8MiB / 8192MiB | 0% Default |
- | | | N/A |
- +-----------------------------------------+----------------------+----------------------+
- | 3 NVIDIA GeForce RTX 2080 Off| 00000000:AF:00.0 Off | N/A |
- | 31% 43C P8 9W / 225W| 80MiB / 8192MiB | 0% Default |
- | | | N/A |
- +-----------------------------------------+----------------------+----------------------+
-
- +---------------------------------------------------------------------------------------+
- | Processes: |
- | GPU GI CI PID Type Process name GPU Memory |
- | ID ID Usage |
- |=======================================================================================|
- | 0 N/A N/A 52177 G /usr/lib/xorg/Xorg 4MiB |
- | 1 N/A N/A 52177 G /usr/lib/xorg/Xorg 4MiB |
- | 2 N/A N/A 52177 G /usr/lib/xorg/Xorg 4MiB |
- | 3 N/A N/A 52177 G /usr/lib/xorg/Xorg 28MiB |
- | 3 N/A N/A 52282 G /usr/bin/gnome-shell 46MiB |
- +---------------------------------------------------------------------------------------+
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~#
The nvidia-smi output above shows that the driver supports CUDA up to version 12.1.
Driver version: 530.41.03
GPU model: NVIDIA GeForce RTX 2080
GPU count: 4 cards, each with 8 GB of memory
2. Install CUDA
First, determine the highest CUDA version the hardware and driver support:
In the top-right corner of the nvidia-smi output above we saw "CUDA Version: 12.1", which means the CUDA toolkit we install later must not be newer than 12.1 for this card and driver.
Second, determine which CUDA version the frameworks require:
The final goal of this article is a working deep-learning environment, i.e. PyTorch (Facebook/Meta), PaddlePaddle (Baidu) and/or TensorFlow (Google) running correctly. These are currently the most widely used deep-learning frameworks: PyTorch leans toward research and model validation, while PaddlePaddle is better suited to industrial-grade development and deployment (TensorFlow is of course also an option).
To use them we will install, in this order: CUDA, cuDNN, NCCL, PaddlePaddle, PyTorch and finally PaddleOCR.
Before installing anything we need to make sure all of these versions are consistent, otherwise version conflicts will surface later in the process.
So let's first look at which CUDA versions the current stable releases of PaddlePaddle and PyTorch support.
The PaddlePaddle installation selector on the official website currently looks like this:

The PyTorch installation selector:

Choose a CUDA version that both frameworks support and that the local driver also supports.
Now start the installation:
First download CUDA 12 from the NVIDIA website and install it.


The runfile (local) installation method is the simplest: just run the two commands NVIDIA shows below the selector on the download page. There are two ways to run the installer (in the interactive mode, deselect the bundled driver, since we already installed one above):
- 1) interactive
- ./cuda_xxxxxxx_linux.run
- 2) silent
- ./cuda_xxxxxxx_linux.run --silent --toolkit --samples
After installation, add CUDA to your PATH and library search path:
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# vim ~/.bashrc
-
- export PATH=/usr/local/cuda-12.0/bin${PATH:+:${PATH}}
- export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Finally, reload the environment configuration:
source ~/.bashrc
CUDA is now installed; run nvcc -V to check the toolkit information.
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:~# nvcc -V
- nvcc: NVIDIA (R) Cuda compiler driver
- Copyright (c) 2005-2022 NVIDIA Corporation
- Built on Mon_Oct_24_19:12:58_PDT_2022
- Cuda compilation tools, release 12.0, V12.0.76
- Build cuda_12.0.r12.0/compiler.31968024_0
If you need to uninstall CUDA later (for example after reinstalling the driver), use the following commands:
- cd /usr/local/cuda-xx.x/bin/
- sudo ./cuda-uninstaller
- sudo rm -rf /usr/local/cuda-xx.x
3. Install cuDNN
cuDNN (CUDA Deep Neural Network library) is a GPU-accelerated deep-learning library developed by NVIDIA.
Purpose and features: cuDNN provides efficient, standardized primitives (basic operations) that accelerate deep-learning frameworks (e.g. TensorFlow, PyTorch) on NVIDIA GPUs.
It is designed specifically for deep learning and provides highly optimized functions for tasks such as the following (a small example that exercises the convolution path through a framework is sketched after the list):
- Convolution operations
- Pooling operations
- Activation functions
- Normalization, etc.
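To make this concrete, here is a minimal sketch that runs a small convolution on the GPU; under the hood the framework dispatches it to cuDNN. It assumes PaddlePaddle is already installed (we only install it in section 6) and that at least one GPU is visible:
- import paddle
- # run a tiny convolution on GPU 0 so the framework's cuDNN code path is exercised
- paddle.set_device("gpu:0")
- x = paddle.randn([1, 3, 224, 224])  # NCHW input
- conv = paddle.nn.Conv2D(in_channels=3, out_channels=16, kernel_size=3, padding=1)
- y = conv(x)
- print(y.shape)  # expect [1, 16, 224, 224]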
Installing cuDNN is fairly simple: download it from the NVIDIA website.

Select the entry matching your CUDA version, then download the cuDNN Library for Linux (x86_64) archive.
Then open a terminal, extract the archive and copy the files into the CUDA installation with commands similar to:
- cp -P cudnn*/include/cudnn*.h cuda/include/
- cp -P cudnn*/lib/libcudnn* cuda/lib64/
- chmod a+r cuda/include/cudnn*.h cuda/lib64/libcudnn*
In essence, installing cuDNN is just copying a set of files into the CUDA installation.
We can then check which cuDNN version is installed, as sketched below.
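One way to do this (a sketch; it assumes the header ended up at /usr/local/cuda/include/cudnn_version.h, which holds the version macros in cuDNN 8+; older releases keep them in cudnn.h) is to read the version macros directly:
- import pathlib, re
- # read the cuDNN version macros from the installed header (adjust the path to wherever the cp step above placed it)
- hdr = pathlib.Path("/usr/local/cuda/include/cudnn_version.h").read_text()
- ver = dict(re.findall(r"#define CUDNN_(MAJOR|MINOR|PATCHLEVEL) (\d+)", hdr))
- print(f"cuDNN {ver['MAJOR']}.{ver['MINOR']}.{ver['PATCHLEVEL']}")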
With CUDA and cuDNN installed, we can monitor the GPU status with:
watch -n 1 nvidia-smi
4. Install NCCL
Distributed deep-learning training needs NCCL in order to use multiple GPUs together, so in this section we install it.
First download the NCCL build matching your CUDA version from the official site, extract it under /usr/local, point an /etc/ld.so.conf.d entry at its lib directory (then run ldconfig), and symlink the headers:

- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/usr/local# tar -xf nccl_2.19.3-1+cuda12.0_x86_64.txz
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/usr/local# ln -sf nccl_2.19.3-1+cuda12.0_x86_64 nccl
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/usr/local# cat /etc/ld.so.conf.d/nccl_2.19.3-1+cuda12.0.conf
- /usr/local/nccl/lib
- (py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/usr/local/include# ln -sf ../nccl/include nccl
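As a quick sanity check (a sketch; it assumes the libnccl.so symlink from the archive was kept and that /usr/local/nccl/lib is now visible to the dynamic loader via the ld.so.conf entry above), we can load the library and ask for its version:
- import ctypes
- # load the NCCL shared library installed above and query its version code
- nccl = ctypes.CDLL("libnccl.so")
- version = ctypes.c_int()
- nccl.ncclGetVersion(ctypes.byref(version))
- print(version.value)  # e.g. 21903 for NCCL 2.19.3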
Before NCCL was installed, the PaddlePaddle check below failed with an error:

After installing NCCL:
- >>> import paddle
- >>> paddle.utils.run_check()
- Running verify PaddlePaddle program ...
- I0712 17:30:32.906308 16653 program_interpreter.cc:212] New Executor is Running.
- W0712 17:30:32.906838 16653 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 12.1, Runtime API Version: 12.0
- W0712 17:30:32.940363 16653 gpu_resources.cc:164] device: 0, cuDNN Version: 8.0.
- I0712 17:30:35.770787 16653 interpreter_util.cc:624] Standalone Executor is Used.
- PaddlePaddle works well on 1 GPU.
- ======================= Modified FLAGS detected =======================
- FLAGS(name='FLAGS_selected_gpus', current_value='2', default_value='')
- =======================================================================
- I0712 17:30:38.527948 17096 tcp_utils.cc:107] Retry to connect to 127.0.0.1:40265 while the server is not yet listening.
- ======================= Modified FLAGS detected =======================
- FLAGS(name='FLAGS_selected_gpus', current_value='3', default_value='')
- =======================================================================
- I0712 17:30:38.738694 17097 tcp_utils.cc:107] Retry to connect to 127.0.0.1:40265 while the server is not yet listening.
- ======================= Modified FLAGS detected =======================
- FLAGS(name='FLAGS_selected_gpus', current_value='1', default_value='')
- =======================================================================
- I0712 17:30:38.817551 17095 tcp_utils.cc:107] Retry to connect to 127.0.0.1:40265 while the server is not yet listening.
- ======================= Modified FLAGS detected =======================
- FLAGS(name='FLAGS_selected_gpus', current_value='0', default_value='')
- =======================================================================
- I0712 17:30:39.014600 17094 tcp_utils.cc:181] The server starts to listen on IP_ANY:40265
- I0712 17:30:39.014768 17094 tcp_utils.cc:130] Successfully connected to 127.0.0.1:40265
- I0712 17:30:41.528342 17096 tcp_utils.cc:130] Successfully connected to 127.0.0.1:40265
- I0712 17:30:41.528888 17096 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
- I0712 17:30:41.739022 17097 tcp_utils.cc:130] Successfully connected to 127.0.0.1:40265
- I0712 17:30:41.776871 17097 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
- I0712 17:30:41.817867 17095 tcp_utils.cc:130] Successfully connected to 127.0.0.1:40265
- I0712 17:30:41.840788 17095 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
- I0712 17:30:41.851110 17094 process_group_nccl.cc:129] ProcessGroupNCCL pg_timeout_ 1800000
- W0712 17:30:43.391786 17096 gpu_resources.cc:119] Please NOTE: device: 2, GPU Compute Capability: 7.5, Driver API Version: 12.1, Runtime API Version: 12.0
- W0712 17:30:43.394407 17096 gpu_resources.cc:164] device: 2, cuDNN Version: 8.0.
- W0712 17:30:43.564615 17097 gpu_resources.cc:119] Please NOTE: device: 3, GPU Compute Capability: 7.5, Driver API Version: 12.1, Runtime API Version: 12.0
- W0712 17:30:43.566882 17097 gpu_resources.cc:164] device: 3, cuDNN Version: 8.0.
- W0712 17:30:43.627422 17095 gpu_resources.cc:119] Please NOTE: device: 1, GPU Compute Capability: 7.5, Driver API Version: 12.1, Runtime API Version: 12.0
- W0712 17:30:43.629004 17095 gpu_resources.cc:164] device: 1, cuDNN Version: 8.0.
- W0712 17:30:43.656805 17094 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 12.1, Runtime API Version: 12.0
- W0712 17:30:43.659112 17094 gpu_resources.cc:164] device: 0, cuDNN Version: 8.0.
- I0712 17:30:46.433609 17096 process_group_nccl.cc:132] ProcessGroupNCCL destruct
- I0712 17:30:46.433516 17095 process_group_nccl.cc:132] ProcessGroupNCCL destruct
- I0712 17:30:46.435761 17097 process_group_nccl.cc:132] ProcessGroupNCCL destruct
- I0712 17:30:46.437583 17094 process_group_nccl.cc:132] ProcessGroupNCCL destruct
- I0712 17:30:46.843884 17168 tcp_store.cc:289] receive shutdown event and so quit from MasterDaemon run loop
- PaddlePaddle works well on 4 GPUs.
- PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.

We can also benchmark NCCL itself with NVIDIA's nccl-tests: https://github.com/NVIDIA/nccl-tests. Note that -g must not exceed the number of GPUs in the machine; the first run below requests 8 GPUs on this 4-GPU box and fails with 'invalid device ordinal', so the test is rerun with -g 4.
- (py10_paddlepaddle2.6_paddleocr2.8_cuda12_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env/nccl# ls
- nccl-tests-2.13.9 nccl-tests-2.13.9.tar.gz
- (py10_paddlepaddle2.6_paddleocr2.8_cuda12_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env/nccl# cd nccl-tests-2.13.9/
- (py10_paddlepaddle2.6_paddleocr2.8_cuda12_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9# ls
- doc LICENSE.txt Makefile README.md src verifiable
- (py10_paddlepaddle2.6_paddleocr2.8_cuda12_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9# make
- make -C src build BUILDDIR=/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build
- make[1]: Entering directory '/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/src'
- Compiling timer.cc > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/timer.o
- Compiling /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/verifiable/verifiable.o
- Compiling all_reduce.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/all_reduce.o
- Compiling common.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/common.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/all_reduce.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/all_reduce_perf
- Compiling all_gather.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/all_gather.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/all_gather.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/all_gather_perf
- Compiling broadcast.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/broadcast.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/broadcast.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/broadcast_perf
- Compiling reduce_scatter.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/reduce_scatter.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/reduce_scatter.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/reduce_scatter_perf
- Compiling reduce.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/reduce.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/reduce.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/reduce_perf
- Compiling alltoall.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/alltoall.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/alltoall.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/alltoall_perf
- Compiling scatter.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/scatter.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/scatter.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/scatter_perf
- Compiling gather.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/gather.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/gather.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/gather_perf
- Compiling sendrecv.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/sendrecv.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/sendrecv.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/sendrecv_perf
- Compiling hypercube.cu > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/hypercube.o
- Linking /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/hypercube.o > /data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/build/hypercube_perf
- make[1]: Leaving directory '/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9/src'
- (py10_paddlepaddle2.6_paddleocr2.8_cuda12_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9# ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 8
- # nThread 1 nGpus 8 minBytes 8 maxBytes 134217728 step: 2(factor) warmup iters: 5 iters: 20 agg iters: 1 validation: 1 graph: 0
- #
- # Using devices
- jettech-WS-C621E-SAGE-Series: Test CUDA failure common.cu:894 'invalid device ordinal'
- .. jettech-WS-C621E-SAGE-Series pid 24945: Test failure common.cu:844
- (py10_paddlepaddle2.6_paddleocr2.8_cuda12_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env/nccl/nccl-tests-2.13.9# ./build/all_reduce_perf -b 8 -e 256M -f 2 -g4
- # nThread 1 nGpus 4 minBytes 8 maxBytes 268435456 step: 2(factor) warmup iters: 5 iters: 20 agg iters: 1 validation: 1 graph: 0
- #
- # Using devices
- # Rank 0 Group 0 Pid 25570 on jettech-WS-C621E-SAGE-Series device 0 [0x3b] NVIDIA GeForce RTX 2080
- # Rank 1 Group 0 Pid 25570 on jettech-WS-C621E-SAGE-Series device 1 [0x5e] NVIDIA GeForce RTX 2080
- # Rank 2 Group 0 Pid 25570 on jettech-WS-C621E-SAGE-Series device 2 [0x86] NVIDIA GeForce RTX 2080
- # Rank 3 Group 0 Pid 25570 on jettech-WS-C621E-SAGE-Series device 3 [0xaf] NVIDIA GeForce RTX 2080
- #
- # out-of-place in-place
- # size count type redop root time algbw busbw #wrong time algbw busbw #wrong
- # (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
- 8 2 float sum -1 15.71 0.00 0.00 0 15.63 0.00 0.00 0
- 16 4 float sum -1 17.28 0.00 0.00 0 15.91 0.00 0.00 0
- 32 8 float sum -1 17.18 0.00 0.00 0 16.18 0.00 0.00 0
- 64 16 float sum -1 17.14 0.00 0.01 0 15.87 0.00 0.01 0
- 128 32 float sum -1 17.09 0.01 0.01 0 16.30 0.01 0.01 0
- 256 64 float sum -1 17.23 0.01 0.02 0 15.90 0.02 0.02 0
- 512 128 float sum -1 17.28 0.03 0.04 0 16.38 0.03 0.05 0
- 1024 256 float sum -1 17.13 0.06 0.09 0 15.81 0.06 0.10 0
- 2048 512 float sum -1 17.63 0.12 0.17 0 15.80 0.13 0.19 0
- 4096 1024 float sum -1 17.22 0.24 0.36 0 15.99 0.26 0.38 0
- 8192 2048 float sum -1 16.61 0.49 0.74 0 16.11 0.51 0.76 0
- 16384 4096 float sum -1 18.69 0.88 1.31 0 18.36 0.89 1.34 0
- 32768 8192 float sum -1 23.44 1.40 2.10 0 23.02 1.42 2.14 0
- 65536 16384 float sum -1 34.72 1.89 2.83 0 34.55 1.90 2.85 0
- 131072 32768 float sum -1 63.00 2.08 3.12 0 62.87 2.08 3.13 0
- 262144 65536 float sum -1 93.22 2.81 4.22 0 93.98 2.79 4.18 0
- 524288 131072 float sum -1 148.2 3.54 5.31 0 148.1 3.54 5.31 0
- 1048576 262144 float sum -1 294.1 3.57 5.35 0 289.8 3.62 5.43 0
- 2097152 524288 float sum -1 595.3 3.52 5.28 0 592.2 3.54 5.31 0
- 4194304 1048576 float sum -1 1319.9 3.18 4.77 0 1317.6 3.18 4.77 0
- 8388608 2097152 float sum -1 3014.5 2.78 4.17 0 3100.5 2.71 4.06 0
- 16777216 4194304 float sum -1 6966.1 2.41 3.61 0 7025.2 2.39 3.58 0
- 33554432 8388608 float sum -1 13814 2.43 3.64 0 13829 2.43 3.64 0
- 67108864 16777216 float sum -1 28272 2.37 3.56 0 28100 2.39 3.58 0
- 134217728 33554432 float sum -1 55028 2.44 3.66 0 55975 2.40 3.60 0
- 268435456 67108864 float sum -1 111871 2.40 3.60 0 111223 2.41 3.62 0
- # Out of bounds values : 0 OK
- # Avg bus bandwidth : 2.23175
- #
5. Install Anaconda and create a virtual environment
First download Anaconda3.
Download the Linux version of Anaconda from the Tsinghua mirror:
Tsinghua mirror: Anaconda downloads
Here we pick Anaconda3-5.0.0-Linux-x86_64.sh.
Create a folder named anaconda under your home directory, put the downloaded file in it, and run:
bash Anaconda3-5.0.0-Linux-x86_64.sh
A multi-page license agreement is shown; keep pressing Enter to page through it.
When prompted, type yes to accept it.
Then accept the default installation directory by pressing Enter.
When asked whether to initialize / configure the environment variables, type yes.
The installation is complete.
Create a virtual environment (activate it afterwards with conda activate py10_paddleocr2.8_gpu_wubo):
(py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env# conda create --name py10_paddleocr2.8_gpu_wubo python=3.10
6. Install PaddlePaddle
Install it by following the instructions on the official site:

(py10_paddleocr2.8_gpu_wubo) root@jettech-WS-C621E-SAGE-Series:/data/wubo/paddleocr/env# python -m pip install paddlepaddle-gpu==2.6.1.post120 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
Finally, verify the installation.
Start the interpreter with python or python3 and run:
GPU version:
- import paddle
- paddle.utils.run_check()
If it prints "PaddlePaddle is installed successfully!", the installation succeeded; it also reports how many GPUs can currently be used in parallel.
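As an extra sanity check (a sketch using PaddlePaddle's standard version and device helpers), you can confirm which CUDA and cuDNN versions the installed wheel was built against and how many GPUs it can see:
- import paddle
- print(paddle.version.cuda())              # CUDA version this build was compiled against, e.g. 12.0
- print(paddle.version.cudnn())             # cuDNN version this build was compiled against
- print(paddle.device.cuda.device_count())  # number of visible GPUs, 4 on this machine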
7. Install PyTorch
Install it with the command given on the official site (pick the entry matching your CUDA version):

Finally, verify that the installation succeeded.
Open Python and run:
- import torch
- print(torch.cuda.is_available())
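Beyond is_available(), a small sketch using PyTorch's standard introspection helpers reports the CUDA build version, the cuDNN version and the visible GPUs:
- import torch
- print(torch.version.cuda)                   # CUDA version the wheel was built with
- print(torch.backends.cudnn.version())       # cuDNN version
- print(torch.cuda.device_count())            # number of visible GPUs, 4 on this machine
- print(torch.cuda.get_device_name(0))        # e.g. NVIDIA GeForce RTX 2080
- print(torch.cuda.get_device_capability(0))  # e.g. (7, 5)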
8. Install the PaddleOCR client (command-line mode)