An easy-to-use Machine Learning Model Inference Service Deployment Tool
[中文](./doc/README_CN.md)
## Motivation
Paddle Serving helps deep learning developers deploy an online inference service without much effort. Currently, Paddle Serving supports deep learning models trained by [Paddle](https://github.com/PaddlePaddle/Paddle), although it is straightforward to integrate the inference engines of other deep learning frameworks.
## Key Features
- Integrate with the Paddle training pipeline seamlessly; most Paddle models can be deployed with a single command (see the sketch below).
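
As a hedged illustration of the single-command deployment, the sketch below follows the general `paddle_serving_server` / `paddle_serving_client` workflow. The model directory `uci_housing_model`, client config path, feed name `x`, fetch name `price`, and port `9292` are illustrative assumptions, not values defined in this README.

```python
# Launch the service with one command (illustrative; model directory and port are assumptions):
#   python -m paddle_serving_server.serve --model uci_housing_model --port 9292

# Minimal client sketch using the paddle_serving_client API.
import numpy as np
from paddle_serving_client import Client

client = Client()
# Client-side config exported alongside the saved model; the path is an assumption.
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# One sample of 13 normalized input features (placeholder values).
x = np.random.rand(13).astype("float32")
fetch_map = client.predict(feed={"x": x}, fetch=["price"])
print(fetch_map)
```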