Paddle Serving helps deep learning developers deploy an online inference service without much effort. **The goal of this project**: once you have trained a deep neural network with [Paddle](https://github.com/PaddlePaddle/Paddle), you already have a model inference service. A demo of serving is shown below:
<p align="center">
<img src="doc/demo.gif" width="700">
</p>
<h2 align="center">Key Features</h2>
- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed **with a single command**.
- **Industrial serving features** supported, such as model management, online model loading, and online A/B testing.
- **Distributed Key-Value indexing** supported, which is especially useful for large-scale sparse features as model inputs.
...
...
- **Multiple programming languages** supported on the client side, including Golang, C++, and Python.
- **Extensible framework design** that can support model serving beyond Paddle.
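
As a sketch of the single-command deployment mentioned above (assuming you have already exported a trained inference model to a local directory — the `uci_housing_model` directory name here is only an illustration, and the exact flags may differ by version; see the project docs):

```
# Launch an inference service from an exported model directory.
# --model points at the saved inference model; --port is the service port.
python -m paddle_serving_server.serve --model uci_housing_model --port 9292
```

Once the service is up, a client can send inference requests to the chosen port using any of the supported client languages.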
<h2 align="center">Installation</h2>
We highly recommend running Paddle Serving in Docker; please visit [Run in Docker](https://github.com/PaddlePaddle/Serving/blob/develop/doc/RUN_IN_DOCKER.md).
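
If Docker is not an option, a typical pip-based install looks like the following (package names assumed from the PaddlePaddle ecosystem; check the linked documentation for the build matching your Python version and hardware):

```
# Server side (CPU build) and the client SDK
pip install paddle-serving-server paddle-serving-client
```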