diff --git a/README.md b/README.md
index 73f89a83dc763c0237a6a76c84a6e099ea2611f6..94ba688a2590766d52f9f81e498ad0e77674b9e0 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@

Motivation

-We consider deploying deep learning inference service online to be a user-facing application in the future. **The goal of this project**: When you have trained a deep neural net with [Paddle](https://github.com/PaddlePaddle/Paddle), you are also capable to deploy the model online easily. A demo of serving is as follows:
+We consider deploying deep learning inference services online to be a user-facing application in the future. **The goal of this project**: when you have trained a deep neural net with [Paddle](https://github.com/PaddlePaddle/Paddle), you can also easily deploy the model online. A demo of Paddle Serving is as follows:

@@ -53,7 +53,7 @@ Paddle Serving provides HTTP and RPC based service for users to access
 ### HTTP service
 
-Paddle Serving provides a built-in python module called `paddle_serving_server.serve` that can start a rpc service or a http service with one-line command. If we specify the argument `--name uci`, it means that we will have a HTTP service with a url of `$IP:$PORT/uci/prediction`
+Paddle Serving provides a built-in Python module called `paddle_serving_server.serve` that can start an RPC service or an HTTP service with a one-line command. If we specify the argument `--name uci`, we will get an HTTP service at the URL `$IP:$PORT/uci/prediction`.
 
 ``` shell
 python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292 --name uci
 ```
@@ -75,7 +75,7 @@ curl -H "Content-Type:application/json" -X POST -d '{"x": [0.0137, -0.1136, 0.25
 ### RPC service
 
-A user can also start a rpc service with `paddle_serving_server.serve`. RPC service is usually faster than HTTP service, although a user needs to do some coding based on Paddle Serving's python client API. Note that we do not specify `--name` here.
+A user can also start an RPC service with `paddle_serving_server.serve`. The RPC service is usually faster than the HTTP service, although the user needs to do some coding with Paddle Serving's Python client API. Note that we do not specify `--name` here.
 ``` shell
 python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292
 ```
@@ -162,26 +162,26 @@ curl -H "Content-Type:application/json" -X POST -d '{"url": "https://paddle-serv
 ### New to Paddle Serving
 
 - [How to save a servable model?](doc/SAVE.md)
-- [An end-to-end tutorial from training to serving(Chinese)](doc/TRAIN_TO_SERVICE.md)
-- [Write Bert-as-Service in 10 minutes(Chinese)](doc/BERT_10_MINS.md)
+- [End-to-end process from training to deployment](doc/TRAIN_TO_SERVICE.md)
+- [Write Bert-as-Service in 10 minutes](doc/BERT_10_MINS.md)
 
 ### Developers
 
 - [How to config Serving native operators on server side?](doc/SERVER_DAG.md)
 - [How to develop a new Serving operator](doc/NEW_OPERATOR.md)
 - [Golang client](doc/IMDB_GO_CLIENT.md)
-- [Compile from source code(Chinese)](doc/COMPILE.md)
+- [Compile from source code](doc/COMPILE.md)
 
 ### About Efficiency
 
-- [How profile serving efficiency?(Chinese)](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/util)
-- [Benchmarks](doc/BENCHMARK.md)
+- [How to profile Paddle Serving efficiency?(Chinese)](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/util)
+- [CPU Benchmarks(Chinese)](doc/BENCHMARKING.md)
+- [GPU Benchmarks(Chinese)](doc/GPU_BENCHMARKING.md)
 
 ### FAQ
 
 - [FAQ(Chinese)](doc/FAQ.md)
 
 ### Design
 
-- [Design Doc(Chinese)](doc/DESIGN_DOC.md)
-- [Design Doc(English)](doc/DESIGN_DOC_EN.md)
+- [Design Doc](doc/DESIGN_DOC.md)

Community
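
As background for the HTTP-service hunk above: the README's curl command POSTs a JSON body with the input features under the key `"x"`. Below is a minimal sketch of building that request body. The feature values are placeholders (the curl command in the hunk context is truncated, so the real values are not reproduced here), and the target URL follows the `--port 9292 --name uci` flags shown in the shell example.

```python
import json

# Placeholder feature vector for the uci_housing model; the actual values
# in the README's curl example are truncated above, so these are
# illustrative only.
payload = {"x": [0.0137, -0.1136, 0.2553, -0.0692]}

# Serialize the body that would be POSTed with
# "Content-Type: application/json" to http://$IP:9292/uci/prediction.
body = json.dumps(payload)
print(body)
```

The same body works for any HTTP client (curl, `requests`, etc.); only the `--name` segment of the URL changes when a different service name is specified.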