From 6e00122c646b4c5b3bc9655941963fb152257cf0 Mon Sep 17 00:00:00 2001
From: leiqing <54695910+leiqing1@users.noreply.github.com>
Date: Tue, 22 Nov 2022 22:06:40 +0800
Subject: [PATCH] Add (#5651)
* Add Liteseg deployment using FastDeploy
* Update fastdeploy_cn.md
* Update fastdeploy_en.md
* Update fastdeploy_en.md
---
modelcenter/PP-LiteSeg/fastdeploy_cn.md | 43 ++++++++++++++++++++++++
modelcenter/PP-LiteSeg/fastdeploy_en.md | 44 +++++++++++++++++++++++++
2 files changed, 87 insertions(+)
create mode 100644 modelcenter/PP-LiteSeg/fastdeploy_cn.md
create mode 100644 modelcenter/PP-LiteSeg/fastdeploy_en.md
diff --git a/modelcenter/PP-LiteSeg/fastdeploy_cn.md b/modelcenter/PP-LiteSeg/fastdeploy_cn.md
new file mode 100644
index 00000000..7753dd72
--- /dev/null
+++ b/modelcenter/PP-LiteSeg/fastdeploy_cn.md
@@ -0,0 +1,43 @@
+## 0. FastDeploy, an All-Scenario High-Performance AI Inference Deployment Tool
+FastDeploy is an **all-scenario, easy-to-use, and highly efficient** AI inference deployment tool. It provides an out-of-the-box deployment experience across **cloud, edge, and mobile**, supports more than 150 Text, Vision, Speech, and cross-modal models, and delivers **end-to-end optimization and acceleration** of AI models. It currently supports ten classes of cloud, edge, and mobile hardware, including **x86 CPU, NVIDIA GPU, ARM CPU, XPU, NPU, and IPU**, and a single line of code switches between inference backends and hardware.
+
+Deploying an AI model with FastDeploy takes three steps: (1) install the FastDeploy prebuilt package; (2) call the FastDeploy API to write the deployment code; (3) run inference.
+
+**Note**: this document downloads the FastDeploy examples to walk through a high-performance deployment. It only demonstrates inference on x86 CPU and NVIDIA GPU, and assumes the GPU environment is already prepared (e.g. CUDA >= 11.2). To deploy on other hardware, or to learn about FastDeploy's full deployment capabilities, see the [FastDeploy GitHub repository](https://github.com/PaddlePaddle/FastDeploy).
+
+
+## 1. Install the FastDeploy Prebuilt Package
+```
+pip install fastdeploy-gpu-python==0.0.0 -f https://www.paddlepaddle.org.cn/whl/fastdeploy_nightly_build.html
+```
+## 2. Run the Deployment Example
+```
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/python
+
+# Download the PP-LiteSeg model files and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz
+tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# CPU inference
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
+# GPU inference
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
+# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
+```
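The `infer.py` script above is essentially a thin wrapper over FastDeploy's Python API. Below is a minimal sketch of the equivalent model-loading step; the API names (`fd.RuntimeOption`, `fd.vision.segmentation.PaddleSegModel`) and the exported file names (`model.pdmodel`, `model.pdiparams`, `deploy.yaml`) are assumptions based on the FastDeploy and PaddleSeg repositories, not a verbatim copy of `infer.py`:

```python
# Minimal sketch of loading PP-LiteSeg through FastDeploy's Python API.
# Assumptions: the fastdeploy wheel from step 1 is installed, and the directory
# unpacked from the .tgz contains the usual PaddleSeg export files
# (model.pdmodel, model.pdiparams, deploy.yaml).
import os

MODEL_DIR = "PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer"

def load_model(device: str = "cpu", use_trt: bool = False):
    """Build the segmentation model with the requested backend."""
    import fastdeploy as fd  # imported lazily so the helper parses without the wheel
    option = fd.RuntimeOption()
    if device == "gpu":
        option.use_gpu()
        if use_trt:
            option.use_trt_backend()  # first run serializes the TensorRT engine
    return fd.vision.segmentation.PaddleSegModel(
        os.path.join(MODEL_DIR, "model.pdmodel"),
        os.path.join(MODEL_DIR, "model.pdiparams"),
        os.path.join(MODEL_DIR, "deploy.yaml"),
        runtime_option=option,
    )
```

Switching from CPU to GPU, or turning on TensorRT, only changes the `RuntimeOption` — the model-loading and prediction code stays the same, which is what the "one line of code" claim above refers to.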
+After the run completes, the visualized results are shown below:
+
+Original image:
+
+Segmented result:
+
diff --git a/modelcenter/PP-LiteSeg/fastdeploy_en.md b/modelcenter/PP-LiteSeg/fastdeploy_en.md
new file mode 100644
index 00000000..6f687888
--- /dev/null
+++ b/modelcenter/PP-LiteSeg/fastdeploy_en.md
@@ -0,0 +1,44 @@
+## 0. FastDeploy
+
+FastDeploy is an easy-to-use, high-performance AI model deployment toolkit for cloud, mobile, and edge. It offers an out-of-the-box, unified experience and end-to-end optimization for more than 150 Text, Vision, Speech, and cross-modal AI models. FastDeploy supports AI model deployment on
+**x86 CPU, NVIDIA GPU, ARM CPU, XPU, NPU, IPU**, and other hardware, and you can switch between inference backends and hardware with a single line of code.
+
+Deploying an AI model with FastDeploy takes three steps: (1) install the FastDeploy SDK; (2) call the FastDeploy API to write the deployment code; (3) run inference.
+
+**Note**: this document downloads the FastDeploy examples to walk through a high-performance deployment. Only x86 CPU and NVIDIA GPU inference is shown, and a working GPU environment (e.g. CUDA >= 11.2) is assumed. To deploy an AI model on other hardware, or to learn about FastDeploy's full capabilities, see [FastDeploy on GitHub](https://github.com/PaddlePaddle/FastDeploy).
+
+## 1. Install FastDeploy SDK
+```
+pip install fastdeploy-gpu-python==0.0.0 -f https://www.paddlepaddle.org.cn/whl/fastdeploy_nightly_build.html
+```
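After installation, a quick sanity check is to import the package and print its version. This sketch assumes, as most wheels do, that the package exposes a `__version__` attribute, and it degrades gracefully if the import fails:

```python
# Sanity-check the FastDeploy installation: import the package and report its
# version, returning None instead of crashing when the wheel is absent.
def fastdeploy_version():
    try:
        import fastdeploy as fd
    except ImportError:
        return None
    return getattr(fd, "__version__", "unknown")

print(fastdeploy_version())
```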
+## 2. Run Deployment Example
+```
+# Download Deployment Example
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/python
+
+# Download LiteSeg Model and Test Image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz
+tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# CPU inference
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
+# GPU inference
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
+# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
+```
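Internally, `infer.py` runs prediction on the image and overlays the predicted mask on the input. The following sketch of that step assumes the FastDeploy API names (`model.predict`, `fd.vision.vis_segmentation` with a `weight` blending parameter) and an installed `opencv-python`; treat it as an illustration, not the script itself:

```python
# Sketch: run segmentation on one image and save a blended visualization.
def run_and_visualize(model, image_path: str, out_path: str = "vis_result.jpg",
                      alpha: float = 0.5) -> str:
    import cv2
    import fastdeploy as fd  # lazy imports keep the helper parseable without the deps
    im = cv2.imread(image_path)
    result = model.predict(im)  # segmentation result: per-pixel class labels
    vis = fd.vision.vis_segmentation(im, result, weight=alpha)  # blend mask over image
    cv2.imwrite(out_path, vis)
    return out_path
```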
+After the run completes, the visualized results are shown below:
+
+Test Image:
+
+Segmented result:
+
--
GitLab