Unverified commit c3585cb5, authored by Pei Yang, committed by GitHub

fix deadlinks of c++ api infer doc, test=develop (#1797) (#1802)

Parent 8ef5da49
@@ -228,7 +228,7 @@ float *output_d = output_t->data<float>(PaddlePlace::kGPU, &output_size);
 3. When enough CPU cores are available, you can raise the `num` value in `config->SetCpuMathLibraryNumThreads(num);`.
 ### Inference on GPU
-1. You can try enabling the TensorRT subgraph acceleration engine. Through graph analysis, Paddle can automatically fuse some subgraphs of the computation graph and call NVIDIA's TensorRT to accelerate them. For details, see [Use Paddle-TensorRT Library for inference](./paddle_tensorrt_infer.html)
+1. You can try enabling the TensorRT subgraph acceleration engine. Through graph analysis, Paddle can automatically fuse some subgraphs of the computation graph and call NVIDIA's TensorRT to accelerate them. For details, see [Use Paddle-TensorRT Library for inference](../../performance_improving/inference_improving/paddle_tensorrt_infer.html)
 ### Multi-threaded inference
 Paddle Fluid supports improving inference performance by running multiple AnalysisPredictors on different threads; both CPU and GPU environments are supported.
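For readers of this change, a minimal sketch of the CPU knob referenced in the hunk above: `SetCpuMathLibraryNumThreads` is set on an `AnalysisConfig` before the predictor is created. This assumes a Paddle 1.x C++ inference build; the model path and thread count are placeholders, not recommendations.

```cpp
#include "paddle_inference_api.h"

int main() {
  paddle::AnalysisConfig config;
  config.SetModel("./mobilenet_v1");  // placeholder model directory
  // With enough physical cores available, give the CPU math library
  // (MKL/OpenBLAS) more threads; tune the value to the machine.
  config.SetCpuMathLibraryNumThreads(4);
  auto predictor = paddle::CreatePaddlePredictor(config);
  return 0;
}
```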
@@ -214,7 +214,7 @@ float *output_d = output_t->data<float>(PaddlePlace::kGPU, &output_size);
 3. When enough CPU cores are available, you can raise the `num` value in `config->SetCpuMathLibraryNumThreads(num);`.
 ### Tuning on GPU
-1. You can try enabling the TensorRT subgraph acceleration engine. Through graph analysis, Paddle can automatically fuse certain subgraphs and call NVIDIA's TensorRT for acceleration. For details, please refer to [Use Paddle-TensorRT Library for inference](./paddle_tensorrt_infer_en.html)
+1. You can try enabling the TensorRT subgraph acceleration engine. Through graph analysis, Paddle can automatically fuse certain subgraphs and call NVIDIA's TensorRT for acceleration. For details, please refer to [Use Paddle-TensorRT Library for inference](../../performance_improving/inference_improving/paddle_tensorrt_infer_en.html)
 ### Tuning with multi-threading
 Paddle Fluid supports optimizing prediction performance by running multiple AnalysisPredictors on different threads; both CPU and GPU environments are supported.
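A sketch of the multi-threaded pattern described in the hunks above, under the same 1.x API assumptions: each worker thread gets its own predictor via `Clone()`, which shares the weights while keeping execution state per thread. Model path and thread count are illustrative.

```cpp
#include <thread>
#include <vector>
#include "paddle_inference_api.h"

int main() {
  paddle::AnalysisConfig config;
  config.SetModel("./mobilenet_v1");  // placeholder model directory
  auto main_predictor = paddle::CreatePaddlePredictor(config);

  std::vector<std::thread> workers;
  for (int i = 0; i < 4; ++i) {
    // Clone in the main thread, then hand each copy to a worker.
    auto predictor = main_predictor->Clone();
    workers.emplace_back([p = std::move(predictor)]() mutable {
      // ... fill input PaddleTensors and call p->Run(inputs, &outputs) ...
    });
  }
  for (auto &t : workers) t.join();
  return 0;
}
```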
@@ -59,7 +59,7 @@ config->EnableTensorRtEngine(1 << 20 /* workspace_size*/,
## <a name="Paddle-TRT样例编译测试">Paddle-TRT样例编译测试</a>
1. 下载或编译带有 TensorRT 的paddle预测库,参考[安装与编译C++预测库](./build_and_install_lib_cn.html)
1. 下载或编译带有 TensorRT 的paddle预测库,参考[安装与编译C++预测库](../../inference_deployment/inference/build_and_install_lib_cn.html)
2.[NVIDIA官网](https://developer.nvidia.com/nvidia-tensorrt-download)下载对应本地环境中cuda和cudnn版本的TensorRT,需要登陆NVIDIA开发者账号。
3. 下载[预测样例](https://paddle-inference-dist.bj.bcebos.com/tensorrt_test/paddle_inference_sample_v1.7.tar.gz)并解压,进入`sample/paddle-TRT`目录下。
@@ -52,7 +52,7 @@ The details of this interface are as follows:
## <a name="Paddle-TRT example compiling test">Paddle-TRT example compiling test</a>
1. Download or compile Paddle Inference with TensorRT support, refer to [Install and Compile C++ Inference Library](./build_and_install_lib_en.html).
1. Download or compile Paddle Inference with TensorRT support, refer to [Install and Compile C++ Inference Library](../../inference_deployment/inference/build_and_install_lib_en.html).
2. Download NVIDIA TensorRT(with consistent version of cuda and cudnn in local environment) from [NVIDIA TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) with an NVIDIA developer account.
3. Download [Paddle Inference sample](https://paddle-inference-dist.bj.bcebos.com/tensorrt_test/paddle_inference_sample_v1.7.tar.gz) and uncompress, and enter `sample/paddle-TRT` directory.
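For context, the `EnableTensorRtEngine` call visible in the last hunk header is made on the `AnalysisConfig` after enabling the GPU. A hedged sketch using the 1.7-era signature; the numeric arguments mirror the call shown in the hunk header and are illustrative defaults, and the model path is a placeholder.

```cpp
#include "paddle_inference_api.h"

int main() {
  paddle::AnalysisConfig config;
  config.SetModel("./mobilenet_v1");  // placeholder model directory
  config.EnableUseGpu(100 /* initial GPU memory pool, MB */, 0 /* device id */);
  // Fuse TensorRT-supported subgraphs and run them with TensorRT.
  config.EnableTensorRtEngine(1 << 20 /* workspace_size */,
                              1 /* max_batch_size */,
                              3 /* min_subgraph_size */,
                              paddle::AnalysisConfig::Precision::kFloat32,
                              false /* use_static */,
                              false /* use_calib_mode */);
  auto predictor = paddle::CreatePaddlePredictor(config);
  return 0;
}
```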