@@ -14,44 +14,36 @@ Paddle Serving helps deep learning developers deploy an online inference service
- Integrate with the Paddle training pipeline seamlessly; most Paddle models can be deployed with a single command.
- Industrial serving features supported, such as multi-model management, online model loading, and online A/B testing.
- Distributed key-value indexing supported, which is especially useful for large-scale sparse features as model inputs.
- Highly concurrent and efficient communication between clients and servers, built on [Baidu-rpc](https://github.com/apache/incubator-brpc).
- Multiple programming languages supported on the client side, such as Golang, C++, and Python.
## Quick Start
Paddle Serving provides a lightweight Python API for model inference and integrates with the training process seamlessly. Here is a Boston house pricing example to help users get started quickly.
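To make the feed/fetch calling convention concrete, here is a minimal sketch of what a client-side inference call looks like. The `Client` class, its method names, and the feature values below are illustrative stand-ins, not the exact Paddle Serving API; they only demonstrate the pattern of feeding named input tensors and fetching named outputs over an RPC endpoint.

```python
class Client:
    """Hypothetical inference client: feed named inputs, fetch named outputs.

    A real serving client would serialize the feed dict, send it over RPC
    to the endpoints, and return the tensors named in `fetch`.
    """

    def __init__(self):
        self.endpoints = []

    def connect(self, endpoints):
        # In a real deployment this opens RPC channels to the servers.
        self.endpoints = list(endpoints)

    def predict(self, feed, fetch):
        if not self.endpoints:
            raise RuntimeError("call connect() before predict()")
        x = feed["x"]
        # Placeholder for the model's forward pass on the server side.
        score = sum(x) / len(x)
        return {name: score for name in fetch}


client = Client()
client.connect(["127.0.0.1:9292"])
# 13 features of one Boston-housing sample (values are illustrative).
sample = {"x": [0.02, 0.0, 7.07, 0.0, 0.47, 6.4, 78.9,
                4.97, 2.0, 242.0, 17.8, 396.9, 9.14]}
result = client.predict(feed=sample, fetch=["price"])
print(result)
```

The key design point the real system shares with this sketch is that inputs and outputs are addressed by name, so the same client code works across models whose tensor shapes differ.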