From b9ff9e29cb3fa38bc2956c2f00b213d914d2fa38 Mon Sep 17 00:00:00 2001
From: Jiawei Wang
Date: Mon, 30 Mar 2020 13:40:18 +0800
Subject: [PATCH] Update README.md

---
 doc/README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/README.md b/doc/README.md
index 9e35c442..5c1fccc2 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -1,5 +1,7 @@
 # Paddle Serving
 
+([简体中文](./README_CN.md)|English)
+
 Paddle Serving is PaddlePaddle's online estimation service framework, which can help developers easily implement remote prediction services that call deep learning models from mobile and server ends. At present, Paddle Serving is mainly based on models that support PaddlePaddle training. It can be used in conjunction with the Paddle training framework to quickly deploy inference services.
 
 Paddle Serving is designed around common industrial-level deep learning model deployment scenarios. Some common functions include multi-model management, model hot loading, [Baidu-rpc](https://github.com/apache/incubator-brpc)-based high-concurrency low-latency response capabilities, and online model A/B tests. The API that cooperates with the Paddle training framework can enable users to seamlessly transition between training and remote deployment, improving the landing efficiency of deep learning models.
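The README text carried in the hunk above describes a client API that bridges training and remote deployment, but the patch itself does not show it. Below is only a minimal sketch of what a client-side prediction call could look like with the paddle_serving_client package; the config path, endpoint address, feed key "x", fetch key "price", and the sample values are illustrative assumptions, not part of this change.

```python
# Hypothetical sketch of a remote prediction call against a Paddle Serving
# endpoint. All names, paths, and values below are illustrative assumptions.
from paddle_serving_client import Client

client = Client()
# Client-side config exported alongside the served model (assumed path).
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])

# One sample of normalized input features (illustrative values only).
data = [0.0137, -0.1136, 0.2553, -0.0692, 0.0582, -0.0501, -0.0684,
        -0.0602, -0.0195, -0.0425, 0.0784, 0.0227, -0.0489]

# Feed/fetch keys must match the variable names used when the model was saved.
fetch_map = client.predict(feed={"x": data}, fetch=["price"])
print(fetch_map)
```

A matching server process could be started with something like `python -m paddle_serving_server.serve --model uci_housing_model --port 9292`, again assuming a model already exported in the serving format.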