Unverified commit 668567e0, authored by qingqing01 and committed by GitHub

Fix doc for Windows users (#2727)

Parent: 19ed6716

# Simple Baselines for Human Pose Estimation in Fluid
## Introduction
This is a simple demonstration of re-implementation in [PaddlePaddle.Fluid](http://www.paddlepaddle.org/en) for the paper [Simple Baselines for Human Pose Estimation and Tracking](https://arxiv.org/abs/1804.06208) (ECCV'18) from MSRA.
![demo](demo.gif)
@@ -10,7 +10,7 @@ This is a simple demonstration of re-implementation in [PaddlePaddle.Fluid](http
## Requirements
- Python == 2.7 or 3.6
- PaddlePaddle >= 1.1.0
- opencv-python >= 3.3
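For reference, these dependencies could be installed roughly as follows (a sketch assuming the CPU build of PaddlePaddle from PyPI; use the matching `paddlepaddle-gpu` build instead if you train on GPUs):
```bash
# minimal dependency install (versions per the list above)
pip install "paddlepaddle>=1.1.0" "opencv-python>=3.3"
```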
### Notes:
@@ -23,7 +23,7 @@ The code is developed and tested under 4 Tesla K40/P40 GPUS cards on CentOS with
## Results on MPII Val
| Arch | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean | Mean@0.1| Models |
| ---- |:----:|:--------:|:-----:|:-----:|:---:|:----:|:-----:|:----:|:-------:|:------:|
| 256x256\_pose\_resnet\_50 in PyTorch | 96.351 | 95.329 | 88.989 | 83.176 | 88.420 | 83.960 | 79.594 | 88.532 | 33.911 | - |
| 256x256\_pose\_resnet\_50 in Fluid | 96.385 | 95.363 | 89.211 | 84.084 | 88.454 | 84.182 | 79.546 | 88.748 | 33.750 | [`link`](https://paddlemodels.bj.bcebos.com/pose/pose-resnet50-mpii-256x256.tar.gz) |
| 384x384\_pose\_resnet\_50 in PyTorch | 96.658 | 95.754 | 89.790 | 84.614 | 88.523 | 84.666 | 79.287 | 89.066 | 38.046 | - |
| 384x384\_pose\_resnet\_50 in Fluid | 96.862 | 95.635 | 90.046 | 85.557 | 88.818 | 84.948 | 78.484 | 89.235 | 38.093 | [`link`](https://paddlemodels.bj.bcebos.com/pose/pose-resnet50-mpii-384x384.tar.gz) |
@@ -88,13 +88,13 @@ python2 setup.py install --user
Download the checkpoint of Pose-ResNet-50 trained on the MPII dataset from [here](https://paddlemodels.bj.bcebos.com/pose/pose-resnet50-mpii-384x384.tar.gz) and extract it into the folder `checkpoints` under the root directory of this repo (see the sketch after the command below). Then run
```bash
-python val.py --dataset 'mpii' --checkpoint 'checkpoints/pose-resnet50-mpii-384x384' --data_root 'data/mpii'
+python val.py --dataset mpii --checkpoint checkpoints/pose-resnet50-mpii-384x384 --data_root data/mpii
```
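For example, the download-and-extract step could be scripted roughly as follows (the URL is the one linked above; that the archive unpacks to a `pose-resnet50-mpii-384x384` directory is an assumption inferred from the `--checkpoint` path):
```bash
# fetch the pretrained checkpoint and unpack it under ./checkpoints (sketch)
mkdir -p checkpoints
wget https://paddlemodels.bj.bcebos.com/pose/pose-resnet50-mpii-384x384.tar.gz
tar -xzf pose-resnet50-mpii-384x384.tar.gz -C checkpoints
```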
### Perform Training
```bash
-python train.py --dataset 'mpii'
+python train.py --dataset mpii
```
**Note**: Configurations for training are aggregated in `lib/mpii_reader.py` and `lib/coco_reader.py`.
@@ -106,7 +106,7 @@ We also support to apply pre-trained models on customized images.
Put the images into the folder `test` under the root directory of this repo. Then run
```bash
-python test.py --checkpoint 'checkpoints/pose-resnet-50-384x384-mpii'
+python test.py --checkpoint checkpoints/pose-resnet-50-384x384-mpii
```
If there are multiple persons in an image, a detector such as [Faster R-CNN](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/rcnn), [SSD](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/object_detection), or another of your choice should be used first to crop each person out, because the simple baseline for human pose estimation is a top-down method.
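For instance, given person boxes from any such detector, the crops could be prepared with a small script like the sketch below (the box format, the output naming, and the `street.jpg` example are assumptions for illustration, not part of this repo):
```python
import os
import cv2

def crop_persons(image_path, boxes, out_dir="test"):
    """Crop each detected person and save it into the folder test.py reads.

    boxes are assumed to be pixel coordinates (xmin, ymin, xmax, ymax)
    produced by any person detector, e.g. Faster R-CNN or SSD.
    """
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    img = cv2.imread(image_path)
    name = os.path.splitext(os.path.basename(image_path))[0]
    for i, (xmin, ymin, xmax, ymax) in enumerate(boxes):
        crop = img[int(ymin):int(ymax), int(xmin):int(xmax)]
        cv2.imwrite(os.path.join(out_dir, "%s_person%d.jpg" % (name, i)), crop)

# hypothetical usage: two detections in one image
crop_persons("street.jpg", [(12, 30, 180, 420), (200, 25, 360, 430)])
```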
@@ -47,7 +47,7 @@ Declaration: the MobileNet-v1 SSD model is converted by [TensorFlow model](https
`train.py` is the main caller of the training module. Examples of usage are shown below.
```bash
-python -u train.py --batch_size=64 --dataset='pascalvoc' --pretrained_model='pretrained/ssd_mobilenet_v1_coco/'
+python -u train.py --batch_size=64 --dataset=pascalvoc --pretrained_model=pretrained/ssd_mobilenet_v1_coco/
```
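Combined with the GPU selection described in the note below, a two-GPU run could look roughly like this (same flags as the command above):
```bash
export CUDA_VISIBLE_DEVICES=0,1
python -u train.py --batch_size=64 --dataset=pascalvoc --pretrained_model=pretrained/ssd_mobilenet_v1_coco/
```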
- Set ```export CUDA_VISIBLE_DEVICES=0,1``` to specify which GPUs to use.
- For more help on arguments:
@@ -70,13 +70,13 @@ You can evaluate your trained model in different metrics like 11point, integral
`eval.py` is the main caller of the evaluating module. Examples of usage are shown below.
```bash
-python eval.py --dataset='pascalvoc' --model_dir='train_pascal_model/best_model' --data_dir='data/pascalvoc' --test_list='test.txt' --ap_version='11point' --nms_threshold=0.45
+python eval.py --dataset=pascalvoc --model_dir=model/best_model --data_dir=data/pascalvoc --test_list=test.txt
```
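As noted above, other AP metrics are supported; for example, an evaluation with the integral metric could look like this (the `--ap_version` flag appears in the earlier version of the command and `integral` is one of the metrics named in this section, so treat the exact combination as a sketch):
```bash
python eval.py --dataset=pascalvoc --model_dir=model/best_model --data_dir=data/pascalvoc --test_list=test.txt --ap_version=integral
```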
### Infer and Visualize
`infer.py` is the main caller of the inferring module. Examples of usage are shown below.
```bash
-python infer.py --dataset='pascalvoc' --nms_threshold=0.45 --model_dir='train_pascal_model/best_model' --image_path='./data/pascalvoc/VOCdevkit/VOC2007/JPEGImages/009963.jpg'
+python infer.py --dataset=pascalvoc --nms_threshold=0.45 --model_dir=model/best_model --image_path=./data/pascalvoc/VOCdevkit/VOC2007/JPEGImages/009963.jpg
```
Below are examples of running inference and visualizing the model results.
<p align="center">
@@ -203,6 +203,7 @@ def train(args,
fluid.io.save_persistables(exe, model_path, main_program=main_prog)
best_map = 0.
+test_map = None
def test(epoc_id, best_map):
_, accum_map = map_eval.get_map_var()
map_eval.reset(exe)
@@ -329,4 +330,4 @@ def main():
if __name__ == '__main__':
main()
\ No newline at end of file