From a02644039b820e624a859f242402e41af10e684f Mon Sep 17 00:00:00 2001
From: Harryoung <30883834+Harryoung@users.noreply.github.com>
Date: Mon, 28 Nov 2022 01:15:55 +0800
Subject: [PATCH] Provide some doc fixes for 3 seg models. (#5675)

---
 modelcenter/PP-HumanSegV2/benchmark_cn.md    |  2 +-
 modelcenter/PP-HumanSegV2/benchmark_en.md    |  2 +-
 .../PP-HumanSegV2/introduction_cn.ipynb      |  3 +--
 .../PP-HumanSegV2/introduction_en.ipynb      |  2 +-
 modelcenter/PP-LiteSeg/benchmark_cn.md       |  2 +-
 modelcenter/PP-LiteSeg/benchmark_en.md       |  2 +-
 modelcenter/PP-LiteSeg/download_cn.md        | 16 ++++++++--------
 modelcenter/PP-LiteSeg/download_en.md        | 16 ++++++++--------
 modelcenter/PP-LiteSeg/introduction_cn.ipynb |  3 +--
 modelcenter/PP-LiteSeg/introduction_en.ipynb |  2 +-
 modelcenter/PP-Matting/benchmark_cn.md       |  2 +-
 modelcenter/PP-Matting/benchmark_en.md       |  2 +-
 modelcenter/PP-Matting/introduction_cn.ipynb | 19 +++++++++++++------
 modelcenter/PP-Matting/introduction_en.ipynb | 19 +++++++++++++------
 14 files changed, 52 insertions(+), 40 deletions(-)

diff --git a/modelcenter/PP-HumanSegV2/benchmark_cn.md b/modelcenter/PP-HumanSegV2/benchmark_cn.md
index e346796a..3f8a1846 100644
--- a/modelcenter/PP-HumanSegV2/benchmark_cn.md
+++ b/modelcenter/PP-HumanSegV2/benchmark_cn.md
@@ -39,4 +39,4 @@ PP-HumanSegV2 | 256x144 | 96.63 | 70.67
 
 ## 2. 相关使用说明
 
-1. https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg
\ No newline at end of file
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg)
\ No newline at end of file
diff --git a/modelcenter/PP-HumanSegV2/benchmark_en.md b/modelcenter/PP-HumanSegV2/benchmark_en.md
index 8b6d4c9c..eed44f03 100644
--- a/modelcenter/PP-HumanSegV2/benchmark_en.md
+++ b/modelcenter/PP-HumanSegV2/benchmark_en.md
@@ -39,4 +39,4 @@ PP-HumanSegV2 | 256x144 | 96.63 | 70.67
 
 ## 2. Reference
 
-Ref: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg
+Ref: [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg)
diff --git a/modelcenter/PP-HumanSegV2/introduction_cn.ipynb b/modelcenter/PP-HumanSegV2/introduction_cn.ipynb
index 5689c30e..2fd8a513 100644
--- a/modelcenter/PP-HumanSegV2/introduction_cn.ipynb
+++ b/modelcenter/PP-HumanSegV2/introduction_cn.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "2022年7月,PaddleSeg重磅升级的PP-HumanSegV2人像分割方案,以96.63%的mIoU精度, 63FPS的手机端推理速度,再次刷新开源人像分割算法SOTA指标。相比PP-HumanSegV1方案,推理速度提升87.15%,分割精度提升3.03%,可视化效果更佳。V2方案可与商业收费方案媲美,而且支持零成本、开箱即用!\n",
     "\n",
-    "PP-HumanSeg由飞桨官方出品,是PaddleSeg团队推出的模型和方案。 更多关于PaddleSeg可以点击 https://github.com/PaddlePaddle/PaddleSeg 进行了解。"
+    "PP-HumanSeg由飞桨官方出品,是PaddleSeg团队推出的模型和方案。 更多关于PaddleSeg可以点击 [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) 进行了解。"
    ]
   },
   {
@@ -417,4 +417,3 @@
  "nbformat": 4,
  "nbformat_minor": 5
 }
-
diff --git a/modelcenter/PP-HumanSegV2/introduction_en.ipynb b/modelcenter/PP-HumanSegV2/introduction_en.ipynb
index ce6d5395..0dae5b7b 100644
--- a/modelcenter/PP-HumanSegV2/introduction_en.ipynb
+++ b/modelcenter/PP-HumanSegV2/introduction_en.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "In July 2022, PaddleSeg upgraded PP-HumanSeg to PP-HumanSegV2, providing new portrait segmentation solution which refreshed the SOTA indicator of the open-source portrait segmentation solutions with 96.63% mIoU accuracy and 63FPS mobile inference speed. Compared with the V1 solution, the inference speed is increased by 87.15%, the segmentation accuracy is increased by 3.03%, and the visualization effect is better. The PP-HumanSegV2 is comparable to the commercial solutions!\n",
     "\n",
-    "PP-HumanSeg is officially produced by PaddlePaddle and proposed by PaddleSeg team. More information about PaddleSeg can be found here https://github.com/PaddlePaddle/PaddleSeg."
+    "PP-HumanSeg is officially produced by PaddlePaddle and proposed by PaddleSeg team. More information about PaddleSeg can be found here [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)."
    ]
   },
   {
diff --git a/modelcenter/PP-LiteSeg/benchmark_cn.md b/modelcenter/PP-LiteSeg/benchmark_cn.md
index 55c16fc0..c2d62bb0 100644
--- a/modelcenter/PP-LiteSeg/benchmark_cn.md
+++ b/modelcenter/PP-LiteSeg/benchmark_cn.md
@@ -43,4 +43,4 @@ PP-LiteSeg-B2 | STDC2 | 768x1536 | 78.2 | 77.5 | 102.6|
 
 ## 2. 相关使用说明
 
-1. https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg)
diff --git a/modelcenter/PP-LiteSeg/benchmark_en.md b/modelcenter/PP-LiteSeg/benchmark_en.md
index d2c3106d..12035cf7 100644
--- a/modelcenter/PP-LiteSeg/benchmark_en.md
+++ b/modelcenter/PP-LiteSeg/benchmark_en.md
@@ -41,4 +41,4 @@ PP-LiteSeg-B2 | STDC2 | 768x1536 | 78.2 | 77.5 | 102.6|
 
 ## 2. Reference
 
-Ref: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg
+Ref: [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg)
diff --git a/modelcenter/PP-LiteSeg/download_cn.md b/modelcenter/PP-LiteSeg/download_cn.md
index a48ac833..c2c3c976 100644
--- a/modelcenter/PP-LiteSeg/download_cn.md
+++ b/modelcenter/PP-LiteSeg/download_cn.md
@@ -4,16 +4,16 @@
 
 | 模型名 | 骨干网络 | 训练迭代次数 | 训练输入尺寸 | 预测输入尺寸 | 精度mIoU | 精度mIoU(flip) | 精度mIoU(ms+flip) | 下载链接 |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
 
 ## 2 CamVid上语义分割模型
 
 | 模型名 | 骨干网络 | 训练迭代次数 | 训练输入尺寸 | 预测输入尺寸 | 精度mIoU | 精度mIoU(flip) | 精度mIoU(ms+flip) | 下载链接 |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](./pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](./pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
diff --git a/modelcenter/PP-LiteSeg/download_en.md b/modelcenter/PP-LiteSeg/download_en.md
index 18eec0f3..cc39d9b6 100644
--- a/modelcenter/PP-LiteSeg/download_en.md
+++ b/modelcenter/PP-LiteSeg/download_en.md
@@ -4,16 +4,16 @@
 
 | Model | Backbone | Training Iters | Train Resolution | Test Resolution | mIoU | mIoU (flip) | mIoU (ms+flip) | Links |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
 
 ## 2 Semantic segmentation models on CamVid
 
 | Model | Backbone | Training Iters | Train Resolution | Test Resolution | mIoU | mIoU (flip) | mIoU (ms+flip) | Links |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](./pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](./pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg//pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
diff --git a/modelcenter/PP-LiteSeg/introduction_cn.ipynb b/modelcenter/PP-LiteSeg/introduction_cn.ipynb
index bdf30cb9..3be4f571 100644
--- a/modelcenter/PP-LiteSeg/introduction_cn.ipynb
+++ b/modelcenter/PP-LiteSeg/introduction_cn.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "在Cityscapes测试集上使用NVIDIA GTX 1080Ti进行实验,PP-LiteSeg的精度和速度可以达到 72.0% mIoU / 273.6 FPS 以及 77.5% mIoU / 102.6 FPS。与其他模型相比,PP-LiteSeg在精度和速度之间实现了SOTA平衡。\n",
     "\n",
-    "PP-LiteSeg模型由飞桨官方出品,是PaddleSeg团队推出的SOTA模型。 更多关于PaddleSeg可以点击 https://github.com/PaddlePaddle/PaddleSeg 进行了解。"
+    "PP-LiteSeg模型由飞桨官方出品,是PaddleSeg团队推出的SOTA模型。 更多关于PaddleSeg可以点击 [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) 进行了解。"
    ]
   },
   {
@@ -285,4 +285,3 @@
  "nbformat": 4,
  "nbformat_minor": 5
 }
-
diff --git a/modelcenter/PP-LiteSeg/introduction_en.ipynb b/modelcenter/PP-LiteSeg/introduction_en.ipynb
index e903b601..d2d09ded 100644
--- a/modelcenter/PP-LiteSeg/introduction_en.ipynb
+++ b/modelcenter/PP-LiteSeg/introduction_en.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "On the Cityscapes test set, PP-LiteSeg achieves 72.0% mIoU/273.6 FPS and 77.5% mIoU/102.6 FPS on NVIDIA GTX 1080Ti. PP-LiteSeg achieves a superior tradeoff between accuracy and speed compared to other methods.\n",
     "\n",
-    "PP-LiteSeg model is officially produced by PaddlePaddle and is a SOTA model proposed by PaddleSeg. More information about PaddleSeg can be found here https://github.com/PaddlePaddle/PaddleSeg."
+    "PP-LiteSeg model is officially produced by PaddlePaddle and is a SOTA model proposed by PaddleSeg. More information about PaddleSeg can be found here [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)."
    ]
   },
   {
diff --git a/modelcenter/PP-Matting/benchmark_cn.md b/modelcenter/PP-Matting/benchmark_cn.md
index ee5568cf..0b7378a2 100644
--- a/modelcenter/PP-Matting/benchmark_cn.md
+++ b/modelcenter/PP-Matting/benchmark_cn.md
@@ -35,4 +35,4 @@
 | ppmatting_hrnet_w18 | PPM-AIM-195 | 31.56|0.0022|31.80|30.13| 24.5 | 91.28 | 28.9 |
 
 ## 3. 相关使用说明
-1. https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
diff --git a/modelcenter/PP-Matting/benchmark_en.md b/modelcenter/PP-Matting/benchmark_en.md
index 40c8a463..da654ec7 100644
--- a/modelcenter/PP-Matting/benchmark_en.md
+++ b/modelcenter/PP-Matting/benchmark_en.md
@@ -33,4 +33,4 @@
 | ppmatting_hrnet_w18 | PPM-AIM-195 | 31.56|0.0022|31.80|30.13| 24.5 | 91.28 | 28.9 |
 
 ## 3. Reference
-1. https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
diff --git a/modelcenter/PP-Matting/introduction_cn.ipynb b/modelcenter/PP-Matting/introduction_cn.ipynb
index 4800d325..af2e765d 100644
--- a/modelcenter/PP-Matting/introduction_cn.ipynb
+++ b/modelcenter/PP-Matting/introduction_cn.ipynb
@@ -9,9 +9,9 @@
     "\n",
     "在众多图像抠图算法中,为了追求精度,往往需要输入trimap作为辅助信息,但这极大限制了算法的使用性。PP-Matting作为一种trimap-free的抠图方法,有效克服了辅助信息带来的弊端,在Composition-1k和Distinctions-646数据集中取得了SOTA的效果。PP-Matting利用语义分支(SCB)提取图片高级语义信息并通过引导流设计(Guidance Flow)逐步引导高分辨率细节分支(HRDB)对过度区域的细节提取,最后通过融合模块实现语义和细节的融合得到最终的alpha matte。\n",
-    "更多细节可参考技术报告:https://arxiv.org/abs/2204.09433 。\n",
+    "更多细节可参考技术报告:[https://arxiv.org/abs/2204.09433](https://arxiv.org/abs/2204.09433) 。\n",
     "\n",
-    "更多关于PaddleMatting的内容,可以点击 https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting 进行了解。\n",
+    "更多关于PaddleMatting的内容,可以点击 [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting) 进行了解。\n",
     "\n"
    ]
   },
@@ -173,20 +173,22 @@
    "metadata": {},
    "source": [
     "## 6. 相关论文以及引用信息\n",
+    "```\n",
     "@article{chen2022pp,\n",
     " title={PP-Matting: High-Accuracy Natural Image Matting},\n",
     " author={Chen, Guowei and Liu, Yi and Wang, Jian and Peng, Juncai and Hao, Yuying and Chu, Lutao and Tang, Shiyu and Wu, Zewu and Chen, Zeyu and Yu, Zhiliang and others},\n",
     " journal={arXiv preprint arXiv:2204.09433},\n",
     " year={2022}\n",
-    "}"
+    "}\n",
+    "```"
    ]
   }
  ],
 "metadata": {
  "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3.9.13 64-bit",
   "language": "python",
-   "name": "py35-paddle1.2.0"
+   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
@@ -198,7 +200,12 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.9.13"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
+   }
  }
 },
 "nbformat": 4,
diff --git a/modelcenter/PP-Matting/introduction_en.ipynb b/modelcenter/PP-Matting/introduction_en.ipynb
index 328f16a5..55576907 100644
--- a/modelcenter/PP-Matting/introduction_en.ipynb
+++ b/modelcenter/PP-Matting/introduction_en.ipynb
@@ -10,9 +10,9 @@
     "\n",
     "In many image matting algorithms, in order to pursue precision, trimap is often provided as auxiliary information, but this greatly limits the application of the algorithm. PP-Matting, as a trimap-free image matting method, overcomes the disadvantages of auxiliary information and achieves SOTA performance in Composition-1k and Distinctions-646 datasets. PP-Matting uses Semantic Context Branch (SCB) to extract high-level semantic information of images and gradually guides high-resolution detail branch (HRDB) to extract details in transition area through Guidance Flow. Finally, alpha matte is obtained by fusing semantic map and detail map with fusion module.\n",
-    "More details can be found in the paper: https://arxiv.org/abs/2204.09433.\n",
+    "More details can be found in the paper: [https://arxiv.org/abs/2204.09433](https://arxiv.org/abs/2204.09433).\n",
     "\n",
-    "More about PaddleMatting,you can click https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting to learn.\n",
+    "More about PaddleMatting,you can click [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting) to learn.\n",
     "\n"
    ]
   },
@@ -176,20 +176,22 @@
    "metadata": {},
    "source": [
     "## 6. Related papers and citations\n",
+    "```\n",
     "@article{chen2022pp,\n",
     " title={PP-Matting: High-Accuracy Natural Image Matting},\n",
     " author={Chen, Guowei and Liu, Yi and Wang, Jian and Peng, Juncai and Hao, Yuying and Chu, Lutao and Tang, Shiyu and Wu, Zewu and Chen, Zeyu and Yu, Zhiliang and others},\n",
     " journal={arXiv preprint arXiv:2204.09433},\n",
     " year={2022}\n",
-    "}"
+    "}\n",
+    "```"
    ]
   }
  ],
 "metadata": {
  "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3.9.13 64-bit",
   "language": "python",
-   "name": "py35-paddle1.2.0"
+   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
@@ -201,7 +203,12 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.9.13"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
+   }
  }
 },
 "nbformat": 4,
--
GitLab