Unverified commit 668567e0, authored by qingqing01, committed by GitHub

Fix doc for Windows users (#2727)

Parent 19ed6716
@@ -88,13 +88,13 @@ python2 setup.py install --user
Download the checkpoints of Pose-ResNet-50 trained on the MPII dataset from [here](https://paddlemodels.bj.bcebos.com/pose/pose-resnet50-mpii-384x384.tar.gz). Extract them into the folder `checkpoints` under the root directory of this repo. Then run
```bash
-python val.py --dataset 'mpii' --checkpoint 'checkpoints/pose-resnet50-mpii-384x384' --data_root 'data/mpii'
+python val.py --dataset mpii --checkpoint checkpoints/pose-resnet50-mpii-384x384 --data_root data/mpii
```
### Perform Training
```bash
-python train.py --dataset 'mpii'
+python train.py --dataset mpii
```
**Note**: Configurations for training are aggregated in `lib/mpii_reader.py` and `lib/coco_reader.py`.
@@ -106,7 +106,7 @@ We also support to apply pre-trained models on customized images.
Put the images into the folder `test` under the root directory of this repo. Then run
```bash
-python test.py --checkpoint 'checkpoints/pose-resnet-50-384x384-mpii'
+python test.py --checkpoint checkpoints/pose-resnet-50-384x384-mpii
```
If there are multiple persons in the images, detectors such as [Faster R-CNN](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/rcnn), [SSD](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/object_detection) or others should be used first to crop them out, because the simple baseline for human pose estimation is a top-down method.
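As a reference for the cropping step described above, here is a minimal sketch that saves per-person crops into the `test` folder before running `test.py`. The box format, image name, and helper function are hypothetical and not part of this repo; output from any detector can be adapted to it.

```python
# Hypothetical helper: crop detected persons into the `test` folder
# so that test.py sees one person per image (top-down pipeline).
import os
from PIL import Image

def crop_persons(image_path, boxes, out_dir="test"):
    """boxes: list of (xmin, ymin, xmax, ymax) person detections from any detector."""
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(image_path)
    name = os.path.splitext(os.path.basename(image_path))[0]
    for i, (xmin, ymin, xmax, ymax) in enumerate(boxes):
        img.crop((xmin, ymin, xmax, ymax)).save(
            os.path.join(out_dir, "%s_person%d.jpg" % (name, i)))

# Example: two person boxes produced by Faster R-CNN or SSD for one image.
crop_persons("demo.jpg", [(34, 20, 180, 400), (200, 15, 350, 410)])
```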
......
@@ -47,7 +47,7 @@ Declaration: the MobileNet-v1 SSD model is converted by [TensorFlow model](https
`train.py` is the main caller of the training module. Examples of usage are shown below.
```bash
-python -u train.py --batch_size=64 --dataset='pascalvoc' --pretrained_model='pretrained/ssd_mobilenet_v1_coco/'
+python -u train.py --batch_size=64 --dataset=pascalvoc --pretrained_model=pretrained/ssd_mobilenet_v1_coco/
```
- Set ```export CUDA_VISIBLE_DEVICES=0,1``` to specify the GPUs you want to use.
- For more help on arguments:
@@ -70,13 +70,13 @@ You can evaluate your trained model in different metrics like 11point, integral
`eval.py` is the main caller of the evaluation module. Examples of usage are shown below.
```bash
-python eval.py --dataset='pascalvoc' --model_dir='train_pascal_model/best_model' --data_dir='data/pascalvoc' --test_list='test.txt' --ap_version='11point' --nms_threshold=0.45
+python eval.py --dataset=pascalvoc --model_dir=model/best_model --data_dir=data/pascalvoc --test_list=test.txt
```
### Infer and Visualize
`infer.py` is the main caller of the inference module. Examples of usage are shown below.
```bash
-python infer.py --dataset='pascalvoc' --nms_threshold=0.45 --model_dir='train_pascal_model/best_model' --image_path='./data/pascalvoc/VOCdevkit/VOC2007/JPEGImages/009963.jpg'
+python infer.py --dataset=pascalvoc --nms_threshold=0.45 --model_dir=model/best_model --image_path=./data/pascalvoc/VOCdevkit/VOC2007/JPEGImages/009963.jpg
```
Below are examples of running inference and visualizing the model results.
<p align="center">
......
@@ -203,6 +203,7 @@ def train(args,
        fluid.io.save_persistables(exe, model_path, main_program=main_prog)

    best_map = 0.
+    test_map = None
    def test(epoc_id, best_map):
        _, accum_map = map_eval.get_map_var()
        map_eval.reset(exe)
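A plausible reading of the added `test_map = None` line (an inference, not stated in the commit message): a variable that is only assigned inside a conditional branch or a nested call raises an error if it is read before that assignment ever runs, so pre-initializing it keeps later code that consumes it safe. A minimal, self-contained sketch of that pattern, with illustrative names that are not taken from `train.py`:

```python
# Sketch of the pre-initialization pattern (illustrative names, not the repo's code).
def train(run_eval=False):
    best_map = 0.
    test_map = None  # pre-initialize so the name exists even if evaluation never runs

    def test():
        # Stand-in for the real evaluation; the actual script computes mAP here.
        return 0.72

    if run_eval:
        test_map = test()

    # Without the pre-initialization above, this read would raise
    # UnboundLocalError whenever run_eval is False.
    print("best mAP: %s, latest test mAP: %s" % (best_map, test_map))

train(run_eval=False)
train(run_eval=True)
```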
......