From 527c46a182100a914fc6c02344ec330be70fc768 Mon Sep 17 00:00:00 2001
From: CheQiXiao <50894398+CheQiXiao@users.noreply.github.com>
Date: Thu, 17 Jun 2021 11:58:07 +0800
Subject: [PATCH] update readme, test=document_fix (#33618)

---
 README.md    | 7 +++----
 README_cn.md | 7 +++----
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index d0a35332d4..d9ef44fa2b 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-
+
@@ -50,10 +50,9 @@ Now our developers can acquire Tesla V100 online computing resources for free. I
 
     [Click here to learn more](https://github.com/PaddlePaddle/Fleet)
 
-- **Accelerated High-Performance Inference over Ubiquitous Deployments**
+- **High-Performance Inference Engines for Comprehensive Deployment Environments**
 
-    PaddlePaddle is not only compatible with other open-source frameworks for models training, but also works well on the ubiquitous developments, varying from platforms to devices. More specifically, PaddlePaddle accelerates the inference procedure with the fastest speed-up. Note that, a recent breakthrough of inference speed has been made by PaddlePaddle on Huawei's Kirin NPU, through the hardware/software co-optimization.
-
-    [Click here to learn more](https://github.com/PaddlePaddle/Paddle-Lite)
+    PaddlePaddle is not only compatible with models trained in third-party open-source frameworks, but also offers complete inference products for various production scenarios. Our inference product line includes [Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/product_introduction/summary.html): a native inference library for high-performance server and cloud inference; [Paddle Serving](https://github.com/PaddlePaddle/Serving): a service-oriented framework suited to distributed and pipelined production environments; [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite): an ultra-lightweight inference engine for mobile and IoT environments; and [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs): a frontend inference engine for browsers and mini apps. Furthermore, through deep adaptation and optimization for the leading hardware in each scenario, the Paddle inference engines outperform most other mainstream frameworks.
 
 - **Industry-Oriented Models and Libraries with Open Source Repositories**

diff --git a/README_cn.md b/README_cn.md
index 2be8be3df6..f80e703d10 100644
--- a/README_cn.md
+++ b/README_cn.md
@@ -1,4 +1,4 @@
-
+

@@ -47,10 +47,9 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**，训练模型
 
     [查看详情](https://github.com/PaddlePaddle/Fleet)
 
-- **多端多平台部署的高性能推理引擎**
+- **支持多端多平台的高性能推理部署工具**
 
-    飞桨不仅兼容其他开源框架训练的模型，还可以轻松地部署到不同架构的平台设备上。同时，飞桨的推理速度也是全面领先的。尤其经过了跟华为麒麟NPU的软硬一体优化，使得飞桨在NPU上的推理速度进一步突破。
-
-    [查看详情](https://github.com/PaddlePaddle/Paddle-Lite)
+    飞桨不仅广泛兼容第三方开源框架训练的模型部署，并且为不同场景的生产环境提供了完备的推理引擎，包括适用于高性能服务器及云端推理的原生推理库 [Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/product_introduction/summary.html)，面向分布式、流水线生产环境下自动上云、A/B测试等高阶功能的服务化推理框架 [Paddle Serving](https://github.com/PaddlePaddle/Serving)，针对移动端、物联网场景的轻量化推理引擎 [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite)，以及在浏览器、小程序等环境下使用的前端推理引擎 [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs)。同时，通过与不同场景下主流硬件的高度适配优化及对异构计算的支持，飞桨的推理性能也领先绝大部分主流实现。
 
 - **面向产业应用，开源开放覆盖多领域的工业级模型库。**