Commit 1561ba59 authored by: H Helin Wang

fix according to comments

Parent f8d32892
@@ -6,9 +6,14 @@ PaddlePaddle. It provides an HTTP endpoint.
## Run
-The inference server reads a trained model (a topology file and a
-parameter file) and serves HTTP request at port `8000`. **The HTTP API
-will differ slightly for each model,** please see [HTTP
-API](#http-api) for details.
+The inference server reads a trained model (a topology file and a
+parameter file) and serves HTTP requests at port `8000`. Because models
+differ in the number and type of their inputs, **the HTTP API will
+differ slightly for each model.** Please see [HTTP API](#http-api) for
+the API spec, and
+[here](https://github.com/PaddlePaddle/book/wiki/PaddlePaddle-Book-pretrained-model)
+for request examples of different models that illustrate the
+difference.
We will first show how to obtain the PaddlePaddle model, and then how
to start the server.
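As an illustration of what a client call might look like, the sketch below builds a JSON POST request to the server on port `8000`. The endpoint path and the input field name (`"img"`) are assumptions for illustration only; the actual request body differs per model, as the linked wiki examples show.

```python
import json
import urllib.request

# Hypothetical payload: the real input names, shapes, and value ranges
# depend on the specific model being served.
payload = json.dumps({"img": [[0.1, 0.2, 0.3]]}).encode("utf-8")

# The server listens on port 8000; the root path here is an assumption.
req = urllib.request.Request(
    "http://localhost:8000/",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Sending the request requires a running inference server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```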