Commit 9415f8d1 authored by M MRXLT

fix doc

Parent 029bab4a
@@ -249,7 +249,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
- [Compile from source code](doc/COMPILE.md)
### About Efficiency
- [How to profile Paddle Serving latency?](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/util)
- [How to profile Paddle Serving latency?](python/examples/util)
- [CPU Benchmarks(Chinese)](doc/BENCHMARKING.md)
- [GPU Benchmarks(Chinese)](doc/GPU_BENCHMARKING.md)
......
@@ -255,7 +255,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
- [How to compile PaddleServing?](doc/COMPILE_CN.md)
### About Paddle Serving Performance
- [How to profile Paddle Serving performance?](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/util/)
- [How to profile Paddle Serving performance?](python/examples/util/)
- [CPU Benchmarks](doc/BENCHMARKING.md)
- [GPU Benchmarks](doc/GPU_BENCHMARKING.md)
......
@@ -14,7 +14,7 @@ Deep neural nets often have some preprocessing steps on input data, and postproc
## How to define Node
PaddleServing has some predefined Computation Node in the framework. A very commonly used Computation Graph is the simple reader-inference-response mode that can cover most of the single model inference scenarios. A example graph and the corresponding DAG defination code is as follows.
PaddleServing has some predefined Computation Nodes in the framework. A very commonly used Computation Graph is the simple reader-inference-response mode that can cover most of the single model inference scenarios. An example graph and the corresponding DAG definition code are as follows.
<center>
<img src='simple_dag.png' width = "260" height = "370" align="middle"/>
</center>
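The definition code itself is collapsed in this diff, but the reader-inference-response graph above maps naturally onto Paddle Serving's Python API. The sketch below is illustrative only: it assumes the `OpMaker`/`OpSeqMaker` helpers from `paddle_serving_server`, and the operator names are taken from common examples and may differ across versions.

```python
# Illustrative sketch of the reader-inference-response DAG, assuming the
# OpMaker/OpSeqMaker helpers from paddle_serving_server; the operator names
# ('general_reader', 'general_infer', 'general_response') follow common
# examples and may vary by version.
import paddle_serving_server as serving

op_maker = serving.OpMaker()
read_op = op_maker.create('general_reader')        # parse the client request
infer_op = op_maker.create('general_infer')        # run the Paddle model
response_op = op_maker.create('general_response')  # pack results for the reply

op_seq_maker = serving.OpSeqMaker()                # chain the nodes into a sequence
op_seq_maker.add_op(read_op)
op_seq_maker.add_op(infer_op)
op_seq_maker.add_op(response_op)
```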
......
@@ -5,9 +5,9 @@
Paddle Serving is Paddle's high-performance online inference service framework, which can flexibly support the deployment of most models. In this article, the IMDB review sentiment analysis task is used as an example to show the entire process from model training to the deployment of an inference service in 9 steps.
## Step1: Prepare for Running Environment
Paddle Serving can be deployed on Linux environments such as Centos and Ubuntu. On other systems or in environments where you do not want to install the serving module, you can still access the server-side prediction service through the http service.
Paddle Serving can be deployed on Linux environments. Currently the server supports deployment on Centos7, and [Docker deployment is recommended](RUN_IN_DOCKER.md). The rpc client supports deployment on Centos7 and Ubuntu 18. On other systems or in environments where you do not want to install the serving module, you can still access the server-side prediction service through the http service.
You can choose to install the cpu or gpu version of the server module according to the requirements and machine environment, and install the client module on the client machine. When you want to access the server with http
You can choose to install the cpu or gpu version of the server module according to the requirements and machine environment, and install the client module on the client machine. When you want to access the server over http, there is no need to install the client module; a sketch of such an http call follows the install commands below.
```shell
pip install paddle_serving_server #cpu version server side
......
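As noted in Step1, a machine without the client module can still query the server over http; a plain JSON POST is enough. The sketch below uses Python's requests library; the service address, URL path, and the feed/fetch field names are hypothetical placeholders for illustration, not the exact interface of any particular example.

```python
# Minimal sketch of querying the http prediction service without the
# paddle_serving_client module. The endpoint and the feed/fetch keys are
# hypothetical placeholders; they depend on how the server was launched
# and how the model was saved.
import requests

url = "http://127.0.0.1:9292/imdb/prediction"    # hypothetical address and path
payload = {
    "feed": [{"words": "this movie is great"}],  # placeholder input field
    "fetch": ["prediction"],                     # placeholder output field
}

resp = requests.post(url, json=payload, timeout=10)  # sends Content-Type: application/json
print(resp.json())
```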
@@ -6,9 +6,9 @@ Paddle Serving is Paddle's high-performance online inference service framework, which can flexibly support
## Step1: Prepare for Running Environment
Paddle Serving can be deployed on Linux environments such as Centos and Ubuntu. On other systems or in environments where you do not want to install the serving module, you can still access the server-side prediction service through the http service.
Paddle Serving can be deployed on Linux environments. Currently the server supports deployment on Centos7, and [Docker deployment is recommended](RUN_IN_DOCKER_CN.md). The rpc client supports deployment on Centos7 and Ubuntu 18. On other systems or in environments where you do not want to install the serving module, you can still access the server-side prediction service through the http service.
You can choose to install the cpu or gpu version of the server module according to the requirements and machine environment, and install the client module on the client machine. When you want to access the server with http
You can choose to install the cpu or gpu version of the server module according to the requirements and machine environment, and install the client module on the client machine. When accessing the server via http requests, there is no need to install the client module on the client machine.
```shell
pip install paddle_serving_server #cpu version server side
......