From 4773e7a1aa13ae65844cca61f608abe6e4eb6dc2 Mon Sep 17 00:00:00 2001
From: jack <136876878@qq.com>
Date: Wed, 8 Jul 2020 14:44:41 +0800
Subject: [PATCH] fix linux step numbering

---
 docs/deploy/server/cpp/linux.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/deploy/server/cpp/linux.md b/docs/deploy/server/cpp/linux.md
index eca4c4e..a3556b9 100644
--- a/docs/deploy/server/cpp/linux.md
+++ b/docs/deploy/server/cpp/linux.md
@@ -45,7 +45,7 @@ fluid_inference
 
 **Note:** Except for `nv-jetson-cuda10-cudnn7.5-trt5`, all prebuilt packages are compiled with `GCC 4.8.5`. A newer `GCC` may hit `ABI` compatibility issues; consider downgrading, or [build the inference library yourself](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_guide/inference_deployment/inference/build_and_install_lib_cn.html#id12).
 
-### Step4: Compile
+### Step3: Compile
 
 The `cmake` build commands live in `scripts/build.sh`; adjust the main parameters to your environment. The key ones are described below:
 ```
@@ -112,7 +112,7 @@ After downloading, yaml-cpp.zip does not need to be extracted; in cmake/yaml.cmake change `URL https://
 sh ./scripts/build.sh
 ```
 
-### Step5: Prediction and Visualization
+### Step4: Prediction and Visualization
 
 **Before loading a model, check that your model directory contains the three files `model.yml`, `__model__`, and `__params__`. If it does not, follow the [Export Model for Inference doc](../python.html#inference) to export the model into deployment format.**
 
-- 
GitLab
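The second hunk's context mentions pointing `cmake/yaml.cmake` at a locally downloaded `yaml-cpp.zip` instead of the remote `URL https://...` (the full URL is truncated in the patch). A minimal sketch of that edit, assuming a hypothetical file layout and placeholder URL (`example.com` and `/path/to/yaml-cpp.zip` are illustrative, not the repo's actual values):

```shell
# Set up a stand-in cmake/yaml.cmake containing a remote URL line
# (placeholder URL; the real one is truncated in the patch context).
mkdir -p demo/cmake
printf 'URL https://example.com/yaml-cpp.zip\n' > demo/cmake/yaml.cmake

# Swap the remote URL for the local path of the downloaded archive,
# which is what the doc's yaml.cmake instruction amounts to.
sed -i 's|URL https://[^ ]*|URL /path/to/yaml-cpp.zip|' demo/cmake/yaml.cmake

cat demo/cmake/yaml.cmake   # URL /path/to/yaml-cpp.zip
```

After this substitution, CMake's `URL` fetch resolves to the local archive, so the build no longer needs network access for the yaml-cpp dependency.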