Unverified commit e33f2f3b, authored by liu zhengxi and committed by GitHub

[Cherry-pick] Update inference build doc and inference lib download links (#1743)

* update inference build doc with jetson support, test=develop, test=document_preview (#1673)

* update inference lib download links, test=develop (#1742)
Co-authored-by: Pei Yang <peiyang@baidu.com>
Parent 0b4d086f
@@ -7,20 +7,19 @@
-------------

.. csv-table::
    :header: "version description", "inference library (1.6.3 version)", "inference library (develop version)"
    :widths: 3, 2, 2

    "ubuntu14.04_cpu_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-mkl/fluid_inference.tgz>`_"
    "ubuntu14.04_cpu_avx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-avx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-openblas/fluid_inference.tgz>`_"
    "ubuntu14.04_cpu_noavx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-noavx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-noavx-openblas/fluid_inference.tgz>`_"
    "ubuntu14.04_cuda9.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_"
    "ubuntu14.04_cuda10.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_"
    "ubuntu14.04_cuda8.0_cudnn7_avx_mkl_trt4", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda8-cudnn7-avx-mkl-trt4/fluid_inference.tgz>`_",
    "ubuntu14.04_cuda9.0_cudnn7_avx_mkl_trt5", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda9-cudnn7-avx-mkl-trt5/fluid_inference.tgz>`_",
    "ubuntu14.04_cuda10.0_cudnn7_avx_mkl_trt5", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda10-cudnn7-avx-mkl-trt5/fluid_inference.tgz>`_",
    "nv-jetson-cuda10-cudnn7.5-trt5", "`fluid_inference.tar.gz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-nv-jetson-cuda10-cudnn7.5-trt5/fluid_inference.tar.gz>`_",

**Note: all of the provided C++ inference libraries are compiled with GCC 4.8.**

Build from Source
-----------------
......
@@ -7,36 +7,58 @@ Direct Download and Installation
---------------------------------

.. csv-table:: c++ inference library list
    :header: "version description", "inference library(1.6.3 version)", "inference library(develop version)"
    :widths: 3, 2, 2

    "ubuntu14.04_cpu_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-mkl/fluid_inference.tgz>`_"
    "ubuntu14.04_cpu_avx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-avx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-avx-openblas/fluid_inference.tgz>`_"
    "ubuntu14.04_cpu_noavx_openblas", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-cpu-noavx-openblas/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-cpu-noavx-openblas/fluid_inference.tgz>`_"
    "ubuntu14.04_cuda9.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda9-cudnn7-avx-mkl/fluid_inference.tgz>`_"
    "ubuntu14.04_cuda10.0_cudnn7_avx_mkl", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/latest-gpu-cuda10-cudnn7-avx-mkl/fluid_inference.tgz>`_"
    "ubuntu14.04_cuda8.0_cudnn7_avx_mkl_trt4", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda8-cudnn7-avx-mkl-trt4/fluid_inference.tgz>`_",
    "ubuntu14.04_cuda9.0_cudnn7_avx_mkl_trt5", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda9-cudnn7-avx-mkl-trt5/fluid_inference.tgz>`_",
    "ubuntu14.04_cuda10.0_cudnn7_avx_mkl_trt5", "`fluid_inference.tgz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-gpu-cuda10-cudnn7-avx-mkl-trt5/fluid_inference.tgz>`_",
    "nv-jetson-cuda10-cudnn7.5-trt5", "`fluid_inference.tar.gz <https://paddle-inference-lib.bj.bcebos.com/1.6.3-nv-jetson-cuda10-cudnn7.5-trt5/fluid_inference.tar.gz>`_",
Build from Source Code
-----------------------

Users can also compile C++ inference libraries from the PaddlePaddle core code by specifying the following compile options at compile time:

============================  ================  ==================
Option                        Value             Description
============================  ================  ==================
CMAKE_BUILD_TYPE              Release           cmake build type, set to Release if debug messages are not needed
FLUID_INFERENCE_INSTALL_DIR   path              install path of inference libs
WITH_PYTHON                   OFF(recommended)  build python libs and whl package
ON_INFER                      ON(recommended)   build with inference settings
WITH_GPU                      ON/OFF            build inference libs on GPU
WITH_MKL                      ON/OFF            build inference libs supporting MKL
WITH_MKLDNN                   ON/OFF            build inference libs supporting MKLDNN
WITH_XBYAK                    ON                build with XBYAK, must be OFF when building on NV Jetson platforms
WITH_NV_JETSON                OFF               build inference libs on NV Jetson platforms
============================  ================  ==================
It is recommended to configure the options to the recommended values above, to avoid linking unnecessary libraries. Other options can be set if necessary.
Firstly, we pull the latest code from GitHub and install NCCL.
.. code-block:: bash

    git clone https://github.com/paddlepaddle/paddle
    # Use git checkout inside the cloned repository to switch to stable versions such as v1.6.2
    cd paddle
    git checkout v1.6.2
    cd ..

    # Build and install NCCL (required by the build even though it is not used at runtime)
    git clone https://github.com/NVIDIA/nccl.git
    cd nccl
    make -j4
    make install
**Note**: NCCL is not actually used, but it is still required by the build. This dependency will be removed later.
**build inference libs on server**
The following commands set the configuration and run the build (PADDLE_ROOT should be set to the actual install path of the inference libs):
.. code-block:: bash
@@ -55,6 +77,79 @@
    make
    make inference_lib_dist
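
For reference, a server-side configuration consistent with the option table above might look like the sketch below (illustrative only: PADDLE_ROOT and the chosen option values are assumptions to adapt, and the exact command collapsed out of this diff may differ):

.. code-block:: bash

    # Assumed install prefix for the resulting inference libs
    PADDLE_ROOT=/path/to/fluid_inference_install_dir
    cd paddle && mkdir -p build && cd build
    # Recommended values from the option table; toggle WITH_GPU/WITH_MKL as needed
    cmake .. \
        -DFLUID_INFERENCE_INSTALL_DIR=$PADDLE_ROOT \
        -DCMAKE_BUILD_TYPE=Release \
        -DWITH_PYTHON=OFF \
        -DON_INFER=ON \
        -DWITH_GPU=OFF \
        -DWITH_MKL=ON
    make
    make inference_lib_dist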
**build inference libs on NVIDIA Jetson platforms**
NVIDIA Jetson is NVIDIA's AI computing platform for embedded systems. Paddle Inference supports building inference libs on NVIDIA Jetson platforms. The steps are as follows.
1. Prepare the environment

Turn on the hardware performance mode:
.. code-block:: bash

    sudo nvpmodel -m 0 && sudo jetson_clocks
If building on Jetson Nano hardware, increase the swap space:
.. code-block:: bash

    # Increase the available swap space. The default 16 GB of memory is enough on Xavier; the following steps are for Nano devices.
    sudo fallocate -l 5G /var/swapfile
    sudo chmod 600 /var/swapfile
    sudo mkswap /var/swapfile
    sudo swapon /var/swapfile
    sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab'
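
To confirm the swap file is active before starting the build, a quick check like the following can be used (optional; ``swapon`` and ``free`` are standard Linux tools, not part of the Paddle toolchain):

.. code-block:: bash

    # Show active swap devices; /var/swapfile should be listed
    sudo swapon --show
    # Show total memory and swap in megabytes
    free -m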
2. Build the Paddle inference libs
.. code-block:: bash

    cd Paddle
    mkdir build
    cd build
    cmake .. \
      -DWITH_CONTRIB=OFF \
      -DWITH_MKL=OFF \
      -DWITH_MKLDNN=OFF \
      -DWITH_TESTING=OFF \
      -DCMAKE_BUILD_TYPE=Release \
      -DON_INFER=ON \
      -DWITH_PYTHON=OFF \
      -DWITH_XBYAK=OFF \
      -DWITH_NV_JETSON=ON
    make -j4

    # Generate inference libs
    make inference_lib_dist -j4
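
When the build finishes, the packaged libs can be inspected from the build directory. The output directory name below is an assumption (the default used when FLUID_INFERENCE_INSTALL_DIR is not set) and may differ between versions:

.. code-block:: bash

    # Assumed default output location of `make inference_lib_dist`, run from the build directory
    ls fluid_inference_install_dir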
3. Test with samples
Please refer to samples on https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_usage/deploy/inference/paddle_tensorrt_infer.html#id2
**FAQ**
1. Error:
.. code-block:: bash

    ERROR: ../aarch64-linux-gnu/crtn.o: Too many open files.
Fix this by increasing the number of files the system can open at the same time to 2048.
.. code-block:: bash

    ulimit -n 2048
2. The building process hangs.
This is usually because third-party libs are being downloaded. Wait, or kill the build process and start it again.
3. Lacking virtual destructors for IPluginFactory or IGpuAllocator when using TensorRT.
After downloading and installing TensorRT, add virtual destructors for IPluginFactory and IGpuAllocator in NvInfer.h:
.. code-block:: cpp

    virtual ~IPluginFactory() {};
    virtual ~IGpuAllocator() {};
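
To verify the header was patched, a simple grep can be used (a quick sanity check; the path to NvInfer.h below is an assumption and depends on where TensorRT was installed):

.. code-block:: bash

    # Both lines should appear in the patched header; adjust the path to your TensorRT install
    grep -n "virtual ~IPluginFactory" /usr/include/aarch64-linux-gnu/NvInfer.h
    grep -n "virtual ~IGpuAllocator" /usr/include/aarch64-linux-gnu/NvInfer.h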
After a successful compilation, the dependencies required by the C++ inference library will be stored in the PADDLE_ROOT directory, including: (1) the compiled PaddlePaddle inference library and header files; (2) third-party link libraries and header files; (3) version information and compilation option information.
The directory structure is:
......