diff --git a/modelcenter/PP-HumanSegV2/benchmark_cn.md b/modelcenter/PP-HumanSegV2/benchmark_cn.md
index e346796a81f033077ea6caa5fdace732fc3ff25e..3f8a1846bede9a91dd35a6dcdaae9f3d19f720e5 100644
--- a/modelcenter/PP-HumanSegV2/benchmark_cn.md
+++ b/modelcenter/PP-HumanSegV2/benchmark_cn.md
@@ -39,4 +39,4 @@ PP-HumanSegV2 | 256x144 | 96.63 | 70.67

 ## 2. 相关使用说明

-1. https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg
\ No newline at end of file
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg)
\ No newline at end of file
diff --git a/modelcenter/PP-HumanSegV2/benchmark_en.md b/modelcenter/PP-HumanSegV2/benchmark_en.md
index 8b6d4c9c0d7afe4a3bf9fbf6513087d41c284fb2..eed44f037567d7e9442dd82365b045840aabed7b 100644
--- a/modelcenter/PP-HumanSegV2/benchmark_en.md
+++ b/modelcenter/PP-HumanSegV2/benchmark_en.md
@@ -39,4 +39,4 @@ PP-HumanSegV2 | 256x144 | 96.63 | 70.67

 ## 2. Reference

-Ref: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg
+Ref: [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg)
diff --git a/modelcenter/PP-HumanSegV2/introduction_cn.ipynb b/modelcenter/PP-HumanSegV2/introduction_cn.ipynb
index 5689c30ee1a6a564b3360ddb13eb989bd58a7dad..2fd8a51380ab0968c24359c6e1abe3a52d2351c1 100644
--- a/modelcenter/PP-HumanSegV2/introduction_cn.ipynb
+++ b/modelcenter/PP-HumanSegV2/introduction_cn.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "2022年7月,PaddleSeg重磅升级的PP-HumanSegV2人像分割方案,以96.63%的mIoU精度, 63FPS的手机端推理速度,再次刷新开源人像分割算法SOTA指标。相比PP-HumanSegV1方案,推理速度提升87.15%,分割精度提升3.03%,可视化效果更佳。V2方案可与商业收费方案媲美,而且支持零成本、开箱即用!\n",
     "\n",
-    "PP-HumanSeg由飞桨官方出品,是PaddleSeg团队推出的模型和方案。 更多关于PaddleSeg可以点击 https://github.com/PaddlePaddle/PaddleSeg 进行了解。"
+    "PP-HumanSeg由飞桨官方出品,是PaddleSeg团队推出的模型和方案。 更多关于PaddleSeg可以点击 [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) 进行了解。"
    ]
   },
   {
@@ -417,4 +417,3 @@
  "nbformat": 4,
  "nbformat_minor": 5
 }
-
diff --git a/modelcenter/PP-HumanSegV2/introduction_en.ipynb b/modelcenter/PP-HumanSegV2/introduction_en.ipynb
index ce6d53954a843bd236eac4f3898434f3b815a2ea..0dae5b7b0cc6cebe147bbeb78f427eb53f39229a 100644
--- a/modelcenter/PP-HumanSegV2/introduction_en.ipynb
+++ b/modelcenter/PP-HumanSegV2/introduction_en.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "In July 2022, PaddleSeg upgraded PP-HumanSeg to PP-HumanSegV2, providing new portrait segmentation solution which refreshed the SOTA indicator of the open-source portrait segmentation solutions with 96.63% mIoU accuracy and 63FPS mobile inference speed. Compared with the V1 solution, the inference speed is increased by 87.15%, the segmentation accuracy is increased by 3.03%, and the visualization effect is better. The PP-HumanSegV2 is comparable to the commercial solutions!\n",
     "\n",
-    "PP-HumanSeg is officially produced by PaddlePaddle and proposed by PaddleSeg team. More information about PaddleSeg can be found here https://github.com/PaddlePaddle/PaddleSeg."
+    "PP-HumanSeg is officially produced by PaddlePaddle and proposed by PaddleSeg team. More information about PaddleSeg can be found here [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)."
    ]
   },
   {
diff --git a/modelcenter/PP-LiteSeg/benchmark_cn.md b/modelcenter/PP-LiteSeg/benchmark_cn.md
index 55c16fc094395b2ba48415073c496427021026dc..c2d62bb0ba941fb7cc2abcfe9406df0065294e64 100644
--- a/modelcenter/PP-LiteSeg/benchmark_cn.md
+++ b/modelcenter/PP-LiteSeg/benchmark_cn.md
@@ -43,4 +43,4 @@ PP-LiteSeg-B2 | STDC2 | 768x1536 | 78.2 | 77.5 | 102.6|

 ## 2. 相关使用说明

-1. https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg)
diff --git a/modelcenter/PP-LiteSeg/benchmark_en.md b/modelcenter/PP-LiteSeg/benchmark_en.md
index d2c3106def8ecc2903bdbf2e765658e5dcf3aa66..12035cf7bd71ceaefc2610bc03fc9939c2a1fe65 100644
--- a/modelcenter/PP-LiteSeg/benchmark_en.md
+++ b/modelcenter/PP-LiteSeg/benchmark_en.md
@@ -41,4 +41,4 @@ PP-LiteSeg-B2 | STDC2 | 768x1536 | 78.2 | 77.5 | 102.6|

 ## 2. Reference

-Ref: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg
+Ref: [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/configs/pp_liteseg)
diff --git a/modelcenter/PP-LiteSeg/download_cn.md b/modelcenter/PP-LiteSeg/download_cn.md
index a48ac833b5ee0c20b7c20b175041b3382b50a2e2..c2c3c9766b225e7e240fb9345e7901625dc102d7 100644
--- a/modelcenter/PP-LiteSeg/download_cn.md
+++ b/modelcenter/PP-LiteSeg/download_cn.md
@@ -4,16 +4,16 @@
 | 模型名 | 骨干网络 | 训练迭代次数 | 训练输入尺寸 | 预测输入尺寸 | 精度mIoU | 精度mIoU(flip) | 精度mIoU(ms+flip) | 下载链接 |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|

 ## 2 CamVid上语义分割模型

 | 模型名 | 骨干网络 | 训练迭代次数 | 训练输入尺寸 | 预测输入尺寸 | 精度mIoU | 精度mIoU(flip) | 精度mIoU(ms+flip) | 下载链接 |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](./pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](./pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[训练模型](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[预测模型](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
diff --git a/modelcenter/PP-LiteSeg/download_en.md b/modelcenter/PP-LiteSeg/download_en.md
index 18eec0f3ae5a04e833320c1677644524b61ef749..cc39d9b63d93303d4cf59b711af3d54783277d96 100644
--- a/modelcenter/PP-LiteSeg/download_en.md
+++ b/modelcenter/PP-LiteSeg/download_en.md
@@ -4,16 +4,16 @@
 | Model | Backbone | Training Iters | Train Resolution | Test Resolution | mIoU | mIoU (flip) | mIoU (ms+flip) | Links |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](./pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](./pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1025x512|73.10%|73.89%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|1536x768|76.03%|76.74%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|160000|1024x512|2048x1024|77.04%|77.73%|77.46%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1024x512|75.25%|75.65%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.5_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|1536x768|78.75%|79.23%|-|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale0.75_160k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|160000|1024x512|2048x1024|79.04%|79.52%|79.85%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k_inference_model.zip)|

 ## 2 Semantic segmentation models on CamVid

 | Model | Backbone | Training Iters | Train Resolution | Test Resolution | mIoU | mIoU (flip) | mIoU (ms+flip) | Links |
 | --- | --- | --- | ---| --- | --- | --- | --- | --- |
-|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](./pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
-|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](./pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-T|STDC1|10000|960x720|960x720|73.30%|73.89%|73.66%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc1_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc1_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc1_camvid_960x720_10k_inference_model.zip)|
+|PP-LiteSeg-B|STDC2|10000|960x720|960x720|75.10%|75.85%|75.48%|[config](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/pp_liteseg_stdc2_camvid_960x720_10k.yml)\|[Pretrained_model](https://paddleseg.bj.bcebos.com/dygraph/camvid/pp_liteseg_stdc2_camvid_960x720_10k/model.pdparams)\|[inference_model](https://paddleseg.bj.bcebos.com/inference/pp_liteseg_infer_models/pp_liteseg_stdc2_camvid_960x720_10k_inference_model.zip)|
diff --git a/modelcenter/PP-LiteSeg/introduction_cn.ipynb b/modelcenter/PP-LiteSeg/introduction_cn.ipynb
index bdf30cb97eddee85704e74fd101372a02f17661b..3be4f5710edef90de2ad8b3059e66bee4dde170b 100644
--- a/modelcenter/PP-LiteSeg/introduction_cn.ipynb
+++ b/modelcenter/PP-LiteSeg/introduction_cn.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "在Cityscapes测试集上使用NVIDIA GTX 1080Ti进行实验,PP-LiteSeg的精度和速度可以达到 72.0% mIoU / 273.6 FPS 以及 77.5% mIoU / 102.6 FPS。与其他模型相比,PP-LiteSeg在精度和速度之间实现了SOTA平衡。\n",
     "\n",
-    "PP-LiteSeg模型由飞桨官方出品,是PaddleSeg团队推出的SOTA模型。 更多关于PaddleSeg可以点击 https://github.com/PaddlePaddle/PaddleSeg 进行了解。"
+    "PP-LiteSeg模型由飞桨官方出品,是PaddleSeg团队推出的SOTA模型。 更多关于PaddleSeg可以点击 [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) 进行了解。"
    ]
   },
   {
@@ -285,4 +285,3 @@
  "nbformat": 4,
  "nbformat_minor": 5
 }
-
diff --git a/modelcenter/PP-LiteSeg/introduction_en.ipynb b/modelcenter/PP-LiteSeg/introduction_en.ipynb
index e903b60129d45707b3828634e57a2c6f04a0dbce..d2d09ded0b07b3c4918e99c07daae535a6cee0de 100644
--- a/modelcenter/PP-LiteSeg/introduction_en.ipynb
+++ b/modelcenter/PP-LiteSeg/introduction_en.ipynb
@@ -13,7 +13,7 @@
     "\n",
     "On the Cityscapes test set, PP-LiteSeg achieves 72.0% mIoU/273.6 FPS and 77.5% mIoU/102.6 FPS on NVIDIA GTX 1080Ti. PP-LiteSeg achieves a superior tradeoff between accuracy and speed compared to other methods.\n",
     "\n",
-    "PP-LiteSeg model is officially produced by PaddlePaddle and is a SOTA model proposed by PaddleSeg. More information about PaddleSeg can be found here https://github.com/PaddlePaddle/PaddleSeg."
+    "PP-LiteSeg model is officially produced by PaddlePaddle and is a SOTA model proposed by PaddleSeg. More information about PaddleSeg can be found here [https://github.com/PaddlePaddle/PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)."
    ]
   },
   {
diff --git a/modelcenter/PP-Matting/benchmark_cn.md b/modelcenter/PP-Matting/benchmark_cn.md
index ee5568cfa66ebd2910906bb00f5966c40785558f..0b7378a2ff6a4d1dff53440df3bf0ce17cbfe53d 100644
--- a/modelcenter/PP-Matting/benchmark_cn.md
+++ b/modelcenter/PP-Matting/benchmark_cn.md
@@ -35,4 +35,4 @@
 | ppmatting_hrnet_w18 | PPM-AIM-195 | 31.56|0.0022|31.80|30.13| 24.5 | 91.28 | 28.9 |

 ## 3. 相关使用说明
-1. https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
diff --git a/modelcenter/PP-Matting/benchmark_en.md b/modelcenter/PP-Matting/benchmark_en.md
index 40c8a463ca0c9091b457fcc331452e707c8d7678..da654ec7e48867a8c86e7ec2c76fe98ff6071e78 100644
--- a/modelcenter/PP-Matting/benchmark_en.md
+++ b/modelcenter/PP-Matting/benchmark_en.md
@@ -33,4 +33,4 @@
 | ppmatting_hrnet_w18 | PPM-AIM-195 | 31.56|0.0022|31.80|30.13| 24.5 | 91.28 | 28.9 |

 ## 3. Reference
-1. https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting
+1. [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
diff --git a/modelcenter/PP-Matting/introduction_cn.ipynb b/modelcenter/PP-Matting/introduction_cn.ipynb
index 4800d3252e496fae4572c0bd88b4a6fac2b3e9e2..af2e765dd20cbb3d8b419b1f9220b8fbdcf0fcab 100644
--- a/modelcenter/PP-Matting/introduction_cn.ipynb
+++ b/modelcenter/PP-Matting/introduction_cn.ipynb
@@ -9,9 +9,9 @@
     "\n",
     "在众多图像抠图算法中,为了追求精度,往往需要输入trimap作为辅助信息,但这极大限制了算法的使用性。PP-Matting作为一种trimap-free的抠图方法,有效克服了辅助信息带来的弊端,在Composition-1k和Distinctions-646数据集中取得了SOTA的效果。PP-Matting利用语义分支(SCB)提取图片高级语义信息并通过引导流设计(Guidance Flow)逐步引导高分辨率细节分支(HRDB)对过度区域的细节提取,最后通过融合模块实现语义和细节的融合得到最终的alpha matte。\n",
     "\n",
-    "更多细节可参考技术报告:https://arxiv.org/abs/2204.09433 。\n",
+    "更多细节可参考技术报告:[https://arxiv.org/abs/2204.09433](https://arxiv.org/abs/2204.09433) 。\n",
     "\n",
-    "更多关于PaddleMatting的内容,可以点击 https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting 进行了解。\n",
+    "更多关于PaddleMatting的内容,可以点击 [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting) 进行了解。\n",
     "\n"
    ]
   },
@@ -173,20 +173,22 @@
    "metadata": {},
    "source": [
     "## 6. 相关论文以及引用信息\n",
+    "```\n",
     "@article{chen2022pp,\n",
     " title={PP-Matting: High-Accuracy Natural Image Matting},\n",
     " author={Chen, Guowei and Liu, Yi and Wang, Jian and Peng, Juncai and Hao, Yuying and Chu, Lutao and Tang, Shiyu and Wu, Zewu and Chen, Zeyu and Yu, Zhiliang and others},\n",
     " journal={arXiv preprint arXiv:2204.09433},\n",
     " year={2022}\n",
-    "}"
+    "}\n",
+    "```"
    ]
   }
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3.9.13 64-bit",
    "language": "python",
-   "name": "py35-paddle1.2.0"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -198,7 +200,12 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.9.13"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
+   }
   }
  },
 "nbformat": 4,
diff --git a/modelcenter/PP-Matting/introduction_en.ipynb b/modelcenter/PP-Matting/introduction_en.ipynb
index 328f16a55b840bc0a65306f51becaa997eeddba6..5557690700c8a4f5593ce7e6dcdfba0bdcfc46c7 100644
--- a/modelcenter/PP-Matting/introduction_en.ipynb
+++ b/modelcenter/PP-Matting/introduction_en.ipynb
@@ -10,9 +10,9 @@
     "\n",
    "In many image matting algorithms, in order to pursue precision, trimap is often provided as auxiliary information, but this greatly limits the application of the algorithm. PP-Matting, as a trimap-free image matting method, overcomes the disadvantages of auxiliary information and achieves SOTA performance in Composition-1k and Distinctions-646 datasets. PP-Matting uses Semantic Context Branch (SCB) to extract high-level semantic information of images and gradually guides high-resolution detail branch (HRDB) to extract details in transition area through Guidance Flow. Finally, alpha matte is obtained by fusing semantic map and detail map with fusion module.\n",
     "\n",
-    "More details can be found in the paper: https://arxiv.org/abs/2204.09433.\n",
+    "More details can be found in the paper: [https://arxiv.org/abs/2204.09433](https://arxiv.org/abs/2204.09433).\n",
     "\n",
-    "More about PaddleMatting,you can click https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting to learn.\n",
+    "More about PaddleMatting, you can click [https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting) to learn.\n",
     "\n"
    ]
   },
@@ -176,20 +176,22 @@
    "metadata": {},
    "source": [
     "## 6. Related papers and citations\n",
+    "```\n",
     "@article{chen2022pp,\n",
     " title={PP-Matting: High-Accuracy Natural Image Matting},\n",
     " author={Chen, Guowei and Liu, Yi and Wang, Jian and Peng, Juncai and Hao, Yuying and Chu, Lutao and Tang, Shiyu and Wu, Zewu and Chen, Zeyu and Yu, Zhiliang and others},\n",
     " journal={arXiv preprint arXiv:2204.09433},\n",
     " year={2022}\n",
-    "}"
+    "}\n",
+    "```"
    ]
   }
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 3.9.13 64-bit",
    "language": "python",
-   "name": "py35-paddle1.2.0"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -201,7 +203,12 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.9.13"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
+   }
  }
  },
 "nbformat": 4,