Commit a653012d authored by Kaipeng Deng, committed by GitHub

Ppdet doc zh (#2741)

* add cn doc for INSTALL/GETTING_START

* cn for GETTING_STARTED

* fix doc
Parent 584126e5
# Getting Started
For setting up the running environment, please refer to [installation
instructions](INSTALL.md).
......
- Datasets are stored in `dataset/coco` by default (configurable).
- Datasets will be downloaded automatically and cached in `~/.cache/paddle/dataset` if not found locally.
- Pretrained models are downloaded automatically and cached in `~/.cache/paddle/weights`.
- Model checkpoints are saved in `output` by default (configurable).
- For the hyperparameters used, please refer to the config file.
Alternating between training epochs and evaluation runs is possible: simply pass
in `--eval` to do so (tested with the `SSD` detector on Pascal VOC; not
recommended for two-stage models or training sessions on the COCO dataset).
......
Save the inference model by setting `--save_inference_model`; the saved model can be loaded by the PaddlePaddle prediction library.
## FAQ
......
# Getting Started
For setting up the running environment, please refer to the [installation instructions](INSTALL_cn.md).
## Training
#### Training on a single GPU
```bash
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/faster_rcnn_r50_1x.yml
```
#### Training on multiple GPUs
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python tools/train.py -c configs/faster_rcnn_r50_1x.yml
```
- Datasets are stored in `dataset/coco` by default (configurable).
- Datasets will be downloaded automatically and cached in `~/.cache/paddle/dataset` if not found locally.
- Pretrained models are downloaded automatically and cached in `~/.cache/paddle/weights`.
- Model checkpoints are saved in `output` by default (configurable).
- For the hyperparameters used, please refer to the config file.

Alternating between training epochs and evaluation runs is possible: simply pass in `--eval` to do so (tested with the `SSD` detector on Pascal VOC; alternating evaluation is not recommended for two-stage models or for training on the COCO dataset).
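The sketch below shows the flag in use; the `ssd_mobilenet_v1_voc.yml` config name is an assumption for illustration, so substitute whichever single-stage config is available under `configs/`.
```bash
export CUDA_VISIBLE_DEVICES=0
# alternate between training epochs and evaluation runs (illustrative config name)
python tools/train.py -c configs/ssd_mobilenet_v1_voc.yml --eval
```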
## Evaluation
```bash
export CUDA_VISIBLE_DEVICES=0
# or run on CPU with:
# export CPU_NUM=1
python tools/eval.py -c configs/faster_rcnn_r50_1x.yml
```
- Checkpoints are loaded from `output` by default (configurable); see the sketch after this list for evaluating a specific checkpoint.
- Multi-GPU evaluation for R-CNN and SSD models is not supported at the moment; it will be supported in a future version.
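A hedged sketch of evaluating a specific checkpoint rather than the default one under `output`; it assumes `tools/eval.py` accepts `-o key=value` config overrides and that `weights` points to a saved checkpoint prefix (both are assumptions, so check the tool's arguments in your copy):
```bash
export CUDA_VISIBLE_DEVICES=0
# assumed override: point `weights` at a checkpoint prefix saved during training
python tools/eval.py -c configs/faster_rcnn_r50_1x.yml \
                     -o weights=output/faster_rcnn_r50_1x/model_final
```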
## Inference
- Inference on a single image
```bash
export CUDA_VISIBLE_DEVICES=0
# or run on CPU with:
# export CPU_NUM=1
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_img=demo/000000570688.jpg
```
- Inference on multiple images
```bash
export CUDA_VISIBLE_DEVICES=0
# or run on CPU with:
# export CPU_NUM=1
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_dir=demo
```
Visualization files are saved in `output` by default; a different output path can be specified with `--save_file=`.
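A minimal sketch of redirecting the visualized output (assuming `--save_file=` takes a directory path; the `infer_output` name is illustrative):
```bash
export CUDA_VISIBLE_DEVICES=0
# write the visualized detections to `infer_output` instead of the default `output`
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_dir=demo --save_file=infer_output
```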
- Saving the inference model
```bash
export CUDA_VISIBLE_DEVICES=0
# or run on CPU with:
# export CPU_NUM=1
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_img=demo/000000570688.jpg \
--save_inference_model
```
Setting `--save_inference_model` saves an inference model that can be loaded by the PaddlePaddle prediction library.
## FAQ
**Q:** Why does the loss become `NaN` when I train on a single GPU? <br/>
**A:** The default learning rate is tuned for multi-GPU training (8x GPUs); when training on a single GPU, the learning rate needs to be scaled down accordingly (e.g., divided by 8).
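A hedged sketch of scaling the learning rate from the command line; it assumes `tools/train.py` supports `-o key=value` config overrides, that the base learning rate lives under `LearningRate.base_lr`, and that the 8-GPU value is 0.02 (all assumptions; editing the YAML config directly works just as well):
```bash
export CUDA_VISIBLE_DEVICES=0
# assumed: scale the 8-GPU learning rate (0.02) down by 8x for single-GPU training
python tools/train.py -c configs/faster_rcnn_r50_1x.yml \
                      -o LearningRate.base_lr=0.0025
```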
**Q:** How can I reduce GPU memory usage? <br/>
**A:** Setting the environment variable `FLAGS_conv_workspace_size_limit` to a smaller value reduces GPU memory consumption without affecting training speed. Taking Mask-RCNN (R50) as an example, with `export FLAGS_conv_workspace_size_limit=512` the batch size can reach 4 per GPU (Tesla V100, 16GB).
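A minimal sketch of the setup described above (the Mask-RCNN config name is assumed for illustration):
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# cap the cuDNN convolution workspace at 512 MB to lower GPU memory usage
export FLAGS_conv_workspace_size_limit=512
python tools/train.py -c configs/mask_rcnn_r50_1x.yml
```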
......
Please make sure your PaddlePaddle installation was successful and the version
of your PaddlePaddle is not lower than required. Verify with the following commands.
```
# To check PaddlePaddle installation in your Python interpreter
>>> import paddle.fluid as fluid
>>> fluid.install_check.run_check()
# To check PaddlePaddle version
python -c "import paddle; print(paddle.__version__)"
```
......
[COCO-API](https://github.com/cocodataset/cocoapi):
COCO-API is needed for running. Installation is as follows:
```
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
```
......
# Installation
---
## Table of Contents
- [Introduction](#introduction)
- [PaddlePaddle](#paddlepaddle)
- [Other dependencies](#other-dependencies)
- [PaddleDetection](#paddle-detection)
- [Datasets](#datasets)
## Introduction
This document covers how to install PaddleDetection and its dependencies (including PaddlePaddle), as well as the COCO and Pascal VOC datasets.
For general information about PaddleDetection, please refer to [README.md](../README.md).
## PaddlePaddle
Running PaddleDetection requires PaddlePaddle Fluid v1.5 or later. Please follow the instructions in the [installation document](http://www.paddlepaddle.org.cn/).
Please make sure your PaddlePaddle installation was successful and that the installed version is not lower than required. Verify with the following commands.
```
# To check PaddlePaddle installation in your Python interpreter
>>> import paddle.fluid as fluid
>>> fluid.install_check.run_check()
# To check PaddlePaddle version
python -c "import paddle; print(paddle.__version__)"
```
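For scripted setups, the same checks can also be run non-interactively; a minimal sketch:
```bash
# verify the installation without starting an interactive interpreter
python -c "import paddle.fluid as fluid; fluid.install_check.run_check()"
# print the installed PaddlePaddle version
python -c "import paddle; print(paddle.__version__)"
```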
### Requirements:
- Python2 or Python3
- CUDA >= 8.0
- cuDNN >= 5.0
- nccl >= 2.1.2
## Other dependencies
[COCO-API](https://github.com/cocodataset/cocoapi):
COCO-API is needed for running. Installation is as follows:
```
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
# install Cython if it is not installed yet
pip install Cython
# install into global site-packages
make install
# alternatively, if you do not have permission or prefer not to install into global site-packages
python setup.py install --user
```
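To confirm COCO-API is importable afterwards, a quick check (assuming it installs under the standard `pycocotools` package name):
```bash
# exits silently if pycocotools was installed correctly
python -c "from pycocotools.coco import COCO"
```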
## PaddleDetection
**Clone the Paddle models repository:**
You can clone the Paddle models repository and switch the working directory to PaddleDetection with the following commands:
```
cd <path/to/clone/models>
git clone https://github.com/PaddlePaddle/models
cd models/PaddleCV/PaddleDetection
```
**Install Python dependencies:**
The Python dependencies are listed in [requirements.txt](../requirements.txt) and can be installed with:
```
pip install -r requirements.txt
```
**Make sure the tests pass:**
```
export PYTHONPATH=`pwd`:$PYTHONPATH
python ppdet/modeling/tests/test_architectures.py
```
## Datasets
PaddleDetection supports [COCO](http://cocodataset.org) and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) by default.
Please set up the datasets as follows.
**Create symlinks to local datasets:**
The default dataset paths in the config files are `dataset/coco` and `dataset/voc`. If the datasets already exist on your local disk, simply create symlinks to their directories:
```
ln -sf <path/to/coco> <path/to/paddle_detection>/dataset/coco
ln -sf <path/to/voc> <path/to/paddle_detection>/dataset/voc
```
**Download the datasets manually:**
If you do not have the datasets locally, they can be downloaded with the following commands:
- COCO
```
cd dataset/coco
./download.sh
```
- Pascal VOC
```
cd dataset/voc
./download.sh
```
**Automatic dataset download:**
If a run starts before the datasets are set up (e.g., they cannot be found in `dataset/coco` or `dataset/voc`), PaddleDetection will automatically download [COCO-2017](http://images.cocodataset.org) and [VOC2012](http://host.robots.ox.ac.uk/pascal/VOC); the decompressed datasets are saved under `~/.cache/paddle/dataset/`, where they will also be discovered automatically on subsequent runs.

**Note:** For more information about the datasets, please refer to [DATA.md](DATA_cn.md).