From 06f207bfe2c7400461ed3b6be67397f74a294815 Mon Sep 17 00:00:00 2001
From: HtdrogenSulfate <490868991@qq.com>
Date: Fri, 17 Jun 2022 05:54:37 +0000
Subject: [PATCH] change bash to source

---
 deploy/paddleserving/readme_en.md                             | 2 +-
 deploy/paddleserving/recognition/readme.md                    | 2 +-
 deploy/paddleserving/recognition/readme_en.md                 | 2 +-
 docs/zh_CN/inference_deployment/recognition_serving_deploy.md | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/deploy/paddleserving/readme_en.md b/deploy/paddleserving/readme_en.md
index 902f0c4e..34f9f2ba 100644
--- a/deploy/paddleserving/readme_en.md
+++ b/deploy/paddleserving/readme_en.md
@@ -179,7 +179,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # Enter the working directory
 cd PaddleClas/deploy/paddleserving
 # One-click compile and install Serving server, set SERVING_BIN
-bash ./build_server.sh python3.7
+source ./build_server.sh python3.7
 ```
 **Note: The path set by **[build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
diff --git a/deploy/paddleserving/recognition/readme.md b/deploy/paddleserving/recognition/readme.md
index 0ed87a8d..4391d1c3 100644
--- a/deploy/paddleserving/recognition/readme.md
+++ b/deploy/paddleserving/recognition/readme.md
@@ -216,7 +216,7 @@ python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUD
 # 进入工作目录
 cd PaddleClas/deploy/paddleserving
 # 一键编译安装Serving server、设置 SERVING_BIN
-bash ./build_server.sh python3.7
+source ./build_server.sh python3.7
 ```
 **注:**[build_server.sh](../build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。

diff --git a/deploy/paddleserving/recognition/readme_en.md b/deploy/paddleserving/recognition/readme_en.md
index e402f485..96dc6ff1 100644
--- a/deploy/paddleserving/recognition/readme_en.md
+++ b/deploy/paddleserving/recognition/readme_en.md
@@ -195,7 +195,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # Enter the working directory
 cd PaddleClas/deploy/paddleserving
 # One-click compile and install Serving server, set SERVING_BIN
-bash ./build_server.sh python3.7
+source ./build_server.sh python3.7
 ```
 **Note: The path set by **[build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
diff --git a/docs/zh_CN/inference_deployment/recognition_serving_deploy.md b/docs/zh_CN/inference_deployment/recognition_serving_deploy.md
index d6adc5d2..ec5093d1 100644
--- a/docs/zh_CN/inference_deployment/recognition_serving_deploy.md
+++ b/docs/zh_CN/inference_deployment/recognition_serving_deploy.md
@@ -216,7 +216,7 @@ python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CUD
 # 进入工作目录
 cd PaddleClas/deploy/paddleserving
 # 一键编译安装Serving server、设置 SERVING_BIN
-bash ./build_server.sh python3.7
+source ./build_server.sh python3.7
 ```
 **注:**[build_server.sh](../build_server.sh#L55-L62)所设定的路径可能需要根据实际机器上的环境如CUDA、python版本等作一定修改,然后再编译。
--
GitLab
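The change in every hunk above is the same: the docs had readers run `build_server.sh` with `bash`, but the script's stated job includes setting `SERVING_BIN`, and an environment variable exported inside a child `bash` process does not survive into the caller's shell; `source` runs the script in the current shell, so the variable persists. A minimal sketch of the difference, using a stand-in one-line script rather than the real `build_server.sh`:

```shell
# Start from a clean slate so the result is unambiguous.
unset SERVING_BIN

# Stand-in for build_server.sh: a script that only exports SERVING_BIN
# (the path is a placeholder, not the real Serving binary location).
cat > /tmp/set_serving_bin.sh <<'EOF'
export SERVING_BIN=/tmp/fake/serving
EOF

# Run with bash: the export happens in a child process and is lost.
bash /tmp/set_serving_bin.sh
echo "after bash:   ${SERVING_BIN:-unset}"    # prints "after bash:   unset"

# Run with source: the export happens in the current shell and persists.
source /tmp/set_serving_bin.sh
echo "after source: ${SERVING_BIN:-unset}"    # prints "after source: /tmp/fake/serving"
```

This is why the patch touches only the invocation, not the script itself: the script's contents are unchanged, and the fix is entirely in how it is executed.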