diff --git a/docs/deploy/nvidia-jetson.md b/docs/deploy/nvidia-jetson.md
index a707f4689198f0c1c626d75938a53df76ebfa881..8a187b8f6a8fed1f15cb10b9c8cf8adb8efabc00 100644
--- a/docs/deploy/nvidia-jetson.md
+++ b/docs/deploy/nvidia-jetson.md
@@ -1,11 +1,11 @@
# Nvidia Jetson开发板
## 说明
-本文档在 `Linux`平台使用`GCC 7.4`测试过,如果需要使用更高G++版本编译使用,则需要重新编译Paddle预测库,请参考: [Nvidia Jetson嵌入式硬件预测库源码编译](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_guide/inference_deployment/inference/build_and_install_lib_cn.html#id12)。
+本文档在基于NVIDIA JetPack 4.4的`Linux`平台上使用`GCC 7.4`测试过,如需使用不同的G++版本,则需要重新编译Paddle预测库,请参考: [NVIDIA Jetson嵌入式硬件预测库源码编译](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_guide/inference_deployment/inference/build_and_install_lib_cn.html#id12)。
## 前置条件
* G++ 7.4
-* CUDA 9.0 / CUDA 10.0, CUDNN 7+ (仅在使用GPU版本的预测库时需要)
+* CUDA 10.0 / cuDNN 8 (仅在使用GPU版本的预测库时需要)
* CMake 3.0+
请确保系统已经安装好上述基本软件,**下面所有示例以工作目录 `/root/projects/`演示**。
diff --git a/docs/deploy/server/encryption.md b/docs/deploy/server/encryption.md
index dcf67b1b8de31afcd638b5858624b52a5c17872a..c172cc802bc859f427e13f5684f092a5b8c5fc1f 100644
--- a/docs/deploy/server/encryption.md
+++ b/docs/deploy/server/encryption.md
@@ -51,7 +51,7 @@ paddlex-encryption
|
├── lib # libpmodel-encrypt.so和libpmodel-decrypt.so动态库
|
-└── tool # paddlex_encrypt_tool
+└── tool # paddle_encrypt_tool
```
Windows加密工具包含内容为:
@@ -61,7 +61,7 @@ paddlex-encryption
|
├── lib # pmodel-encrypt.dll和pmodel-decrypt.dll动态库 pmodel-encrypt.lib和pmodel-encrypt.lib静态库
|
-└── tool # paddlex_encrypt_tool.exe 模型加密工具
+└── tool # paddle_encrypt_tool.exe 模型加密工具
```
### 1.3 加密PaddleX模型
@@ -71,13 +71,13 @@ paddlex-encryption
Linux平台:
```
# 假设模型在/root/projects下
-./paddlex-encryption/tool/paddlex_encrypt_tool -model_dir /root/projects/paddlex_inference_model -save_dir /root/projects/paddlex_encrypted_model
+./paddlex-encryption/tool/paddle_encrypt_tool -model_dir /root/projects/paddlex_inference_model -save_dir /root/projects/paddlex_encrypted_model
```
Windows平台:
```
# 假设模型在D:/projects下
-.\paddlex-encryption\tool\paddlex_encrypt_tool.exe -model_dir D:\projects\paddlex_inference_model -save_dir D:\projects\paddlex_encrypted_model
+.\paddlex-encryption\tool\paddle_encrypt_tool.exe -model_dir D:\projects\paddlex_inference_model -save_dir D:\projects\paddlex_encrypted_model
```
`-model_dir`用于指定inference模型路径(参考[导出inference模型](../export_model.md)将模型导出为inference格式模型),可使用[导出小度熊识别模型](../export_model.md)中导出的`inference_model`。加密完成后,加密过的模型会保存至指定的`-save_dir`下,包含`__model__.encrypted`、`__params__.encrypted`和`model.yml`三个文件,同时生成密钥信息,命令输出如下图所示,密钥为`kLAl1qOs5uRbFt0/RrIDTZW2+tOf5bzvUIaHGF8lJ1c=`
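上文示例输出的密钥是一个Base64字符串。作为格式示意(实际密钥由加密工具在加密时生成,其加密方案为工具内部实现,此处仅说明密钥的形态),一个同样格式的256位随机密钥可按如下方式生成:

```python
import base64
import os

# 生成32字节(256位)随机数据,并编码为Base64字符串
# 仅用于说明密钥格式,并非paddlex加密工具的实际实现
raw_key = os.urandom(32)
key = base64.b64encode(raw_key).decode("ascii")
print(key)  # 形如 kLAl1qOs5uRbFt0/RrIDTZW2+tOf5bzvUIaHGF8lJ1c=
```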
diff --git a/docs/examples/meter_reader.md b/docs/examples/meter_reader.md
index 6eabe48aa124672ce33caba16f2e93cdb62edc92..f30868b3b3e7f67a821783a142dbdf120190f3b6 100644
--- a/docs/examples/meter_reader.md
+++ b/docs/examples/meter_reader.md
@@ -46,13 +46,13 @@
#### 测试表计读数
-1. 下载PaddleX源码:
+step 1. 下载PaddleX源码:
```
git clone https://github.com/PaddlePaddle/PaddleX
```
-2. 预测执行文件位于`PaddleX/examples/meter_reader/`,进入该目录:
+step 2. 预测执行文件位于`PaddleX/examples/meter_reader/`,进入该目录:
```
cd PaddleX/examples/meter_reader/
@@ -76,7 +76,7 @@ cd PaddleX/examples/meter_reader/
| use_erode | 是否使用图像腐蚀对分割预测图进行细分,默认为False |
| erode_kernel | 图像腐蚀操作时的卷积核大小,默认值为4 |
-3. 预测
+step 3. 预测
若要使用GPU,则指定GPU卡号(以0号卡为例):
@@ -112,17 +112,17 @@ python3 reader_infer.py --detector_dir /path/to/det_inference_model --segmenter_
#### c++部署
-1. 下载PaddleX源码:
+step 1. 下载PaddleX源码:
```
git clone https://github.com/PaddlePaddle/PaddleX
```
-2. 将`PaddleX\examples\meter_reader\deploy\cpp`下的`meter_reader`文件夹和`CMakeList.txt`拷贝至`PaddleX\deploy\cpp`目录下,拷贝之前可以将`PaddleX\deploy\cpp`下原本的`CMakeList.txt`做好备份。
+step 2. 将`PaddleX\examples\meter_reader\deploy\cpp`下的`meter_reader`文件夹和`CMakeList.txt`拷贝至`PaddleX\deploy\cpp`目录下,拷贝之前可以将`PaddleX\deploy\cpp`下原本的`CMakeList.txt`做好备份。
-3. 按照[Windows平台部署](../deploy/server/cpp/windows.md)中的Step2至Step4完成C++预测代码的编译。
+step 3. 按照[Windows平台部署](../deploy/server/cpp/windows.md)中的Step2至Step4完成C++预测代码的编译。
-4. 编译成功后,可执行文件在`out\build\x64-Release`目录下,打开`cmd`,并切换到该目录:
+step 4. 编译成功后,可执行文件在`out\build\x64-Release`目录下,打开`cmd`,并切换到该目录:
```
cd PaddleX\deploy\cpp\out\build\x64-Release
@@ -139,8 +139,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
| use_gpu | 是否使用 GPU 预测, 支持值为0或1(默认值为0)|
| gpu_id | GPU 设备ID, 默认值为0 |
| save_dir | 保存可视化结果的路径, 默认值为"output"|
- | det_key | 检测模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的检测模型 |
- | seg_key | 分割模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的分割模型 |
| seg_batch_size | 分割的批量大小,默认为2 |
| thread_num | 分割预测的线程数,默认为cpu处理器个数 |
| use_camera | 是否使用摄像头采集图片,支持值为0或1(默认值为0) |
@@ -149,7 +147,7 @@ git clone https://github.com/PaddlePaddle/PaddleX
| erode_kernel | 图像腐蚀操作时的卷积核大小,默认值为4 |
| score_threshold | 检测模型输出结果中,预测得分低于该阈值的框将被滤除,默认值为0.5|
-5. 推理预测:
+step 5. 推理预测:
用于部署推理的模型应为inference格式,本案例提供的预训练模型均为inference格式,如若是重新训练的模型,需参考[部署模型导出](../deploy/export_model.md)将模型导出为inference格式。
@@ -160,6 +158,13 @@ git clone https://github.com/PaddlePaddle/PaddleX
```
* 使用未加密的模型对图像列表做预测
+  图像列表image_list.txt内容的格式如下。由于各用户存放图像的绝对路径不同,本案例未提供该文件,用户可根据实际情况自行生成:
+ ```
+ \path\to\images\1.jpg
+ \path\to\images\2.jpg
+ ...
+ \path\to\images\n.jpg
+ ```
```shell
.\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\det_inference_model --seg_model_dir=\path\to\seg_inference_model --image_list=\path\to\meter_test\image_list.txt --use_gpu=1 --use_erode=1 --save_dir=output
@@ -171,29 +176,21 @@ git clone https://github.com/PaddlePaddle/PaddleX
.\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\det_inference_model --seg_model_dir=\path\to\seg_inference_model --use_camera=1 --use_gpu=1 --use_erode=1 --save_dir=output
```
- * 使用加密后的模型对单张图片做预测
-
- 如果未对模型进行加密,请参考[加密PaddleX模型](../deploy/server/encryption.html#paddlex)对模型进行加密。例如加密后的检测模型所在目录为`\path\to\encrypted_det_inference_model`,密钥为`yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0=`;加密后的分割模型所在目录为`\path\to\encrypted_seg_inference_model`,密钥为`DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=`
-
- ```shell
- .\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\encrypted_det_inference_model --seg_model_dir=\path\to\encrypted_seg_inference_model --image=\path\to\test.jpg --use_gpu=1 --use_erode=1 --save_dir=output --det_key yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0= --seg_key DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=
- ```
-
### Linux系统的jetson嵌入式设备安全部署
#### c++部署
-1. 下载PaddleX源码:
+step 1. 下载PaddleX源码:
```
git clone https://github.com/PaddlePaddle/PaddleX
```
-2. 将`PaddleX/examples/meter_reader/deploy/cpp`下的`meter_reader`文件夹和`CMakeList.txt`拷贝至`PaddleX/deploy/cpp`目录下,拷贝之前可以将`PaddleX/deploy/cpp`下原本的`CMakeList.txt`做好备份。
+step 2. 将`PaddleX/examples/meter_reader/deploy/cpp`下的`meter_reader`文件夹和`CMakeList.txt`拷贝至`PaddleX/deploy/cpp`目录下,拷贝之前可以将`PaddleX/deploy/cpp`下原本的`CMakeList.txt`做好备份。
-3. 按照[Nvidia Jetson开发板部署](../deploy/nvidia-jetson.md)中的Step2至Step3完成C++预测代码的编译。
+step 3. 按照[Nvidia Jetson开发板部署](../deploy/nvidia-jetson.md)中的Step2至Step3完成C++预测代码的编译。
-4. 编译成功后,可执行程为`build/meter_reader/meter_reader`,其主要命令参数说明如下:
+step 4. 编译成功后,可执行程序为`build/meter_reader/meter_reader`,其主要命令参数说明如下:
| 参数 | 说明 |
| ---- | ---- |
@@ -204,8 +201,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
| use_gpu | 是否使用 GPU 预测, 支持值为0或1(默认值为0)|
| gpu_id | GPU 设备ID, 默认值为0 |
| save_dir | 保存可视化结果的路径, 默认值为"output"|
- | det_key | 检测模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的检测模型 |
- | seg_key | 分割模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的分割模型 |
| seg_batch_size | 分割的批量大小,默认为2 |
| thread_num | 分割预测的线程数,默认为cpu处理器个数 |
| use_camera | 是否使用摄像头采集图片,支持值为0或1(默认值为0) |
@@ -214,7 +209,7 @@ git clone https://github.com/PaddlePaddle/PaddleX
| erode_kernel | 图像腐蚀操作时的卷积核大小,默认值为4 |
| score_threshold | 检测模型输出结果中,预测得分低于该阈值的框将被滤除,默认值为0.5|
-5. 推理预测:
+step 5. 推理预测:
用于部署推理的模型应为inference格式,本案例提供的预训练模型均为inference格式,如若是重新训练的模型,需参考[部署模型导出](../deploy/export_model.md)将模型导出为inference格式。
@@ -225,7 +220,13 @@ git clone https://github.com/PaddlePaddle/PaddleX
```
* 使用未加密的模型对图像列表做预测
-
+  图像列表image_list.txt内容的格式如下。由于各用户存放图像的绝对路径不同,本案例未提供该文件,用户可根据实际情况自行生成:
+ ```
+  /path/to/images/1.jpg
+  /path/to/images/2.jpg
+  ...
+  /path/to/images/n.jpg
+ ```
```shell
./build/meter_reader/meter_reader --det_model_dir=/path/to/det_inference_model --seg_model_dir=/path/to/seg_inference_model --image_list=/path/to/image_list.txt --use_gpu=1 --use_erode=1 --save_dir=output
```
@@ -236,14 +237,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
./build/meter_reader/meter_reader --det_model_dir=/path/to/det_inference_model --seg_model_dir=/path/to/seg_inference_model --use_camera=1 --use_gpu=1 --use_erode=1 --save_dir=output
```
- * 使用加密后的模型对单张图片做预测
-
- 如果未对模型进行加密,请参考[加密PaddleX模型](../deploy/server/encryption.html#paddlex)对模型进行加密。例如加密后的检测模型所在目录为`/path/to/encrypted_det_inference_model`,密钥为`yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0=`;加密后的分割模型所在目录为`/path/to/encrypted_seg_inference_model`,密钥为`DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=`
-
- ```shell
- ./build/meter_reader/meter_reader --det_model_dir=/path/to/encrypted_det_inference_model --seg_model_dir=/path/to/encrypted_seg_inference_model --image=/path/to/test.jpg --use_gpu=1 --use_erode=1 --save_dir=output --det_key yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0= --seg_key DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=
- ```
-
## 模型训练
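上述新增说明要求用户自行生成image_list.txt。作为一个示意脚本(假设所有测试图片位于同一目录下,目录路径与扩展名仅为举例,并非本案例固定要求),可按如下方式批量生成:

```python
import os

def write_image_list(image_dir, list_path, exts=(".jpg", ".jpeg", ".png")):
    """遍历image_dir下的图片文件,将其绝对路径逐行写入list_path,返回文件名列表"""
    names = sorted(n for n in os.listdir(image_dir)
                   if n.lower().endswith(exts))
    with open(list_path, "w") as f:
        for name in names:
            f.write(os.path.join(os.path.abspath(image_dir), name) + "\n")
    return names

# 用法示意(路径为假设值):
# write_image_list("/path/to/meter_test", "/path/to/meter_test/image_list.txt")
```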
diff --git a/docs/images/vdl1.jpg b/docs/images/vdl1.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..5b0c90d28bc9bda583008fe2fb9729a7c3e06df6
Binary files /dev/null and b/docs/images/vdl1.jpg differ
diff --git a/docs/images/vdl2.jpg b/docs/images/vdl2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..502a5f861104e2b20869b06cf8eb215ec58f0435
Binary files /dev/null and b/docs/images/vdl2.jpg differ
diff --git a/docs/images/vdl3.jpg b/docs/images/vdl3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a16d6924d8867949ecae258ee588296845c6da86
Binary files /dev/null and b/docs/images/vdl3.jpg differ
diff --git a/docs/images/xiaoduxiong.jpeg b/docs/images/xiaoduxiong.jpeg
new file mode 100644
index 0000000000000000000000000000000000000000..d8e64639827da47e64033c00b82ef85be6c0b42f
Binary files /dev/null and b/docs/images/xiaoduxiong.jpeg differ
diff --git a/docs/train/index.rst b/docs/train/index.rst
index 54a8a1a7d39019a33a87d1c94ce04b76eb6fb8e8..b922c31268712c9a7c471491fa867746f0d93781 100755
--- a/docs/train/index.rst
+++ b/docs/train/index.rst
@@ -13,3 +13,4 @@ PaddleX集成了PaddleClas、PaddleDetection和PaddleSeg三大CV工具套件中
instance_segmentation.md
semantic_segmentation.md
prediction.md
+ visualdl.md
diff --git a/docs/train/visualdl.md b/docs/train/visualdl.md
new file mode 100755
index 0000000000000000000000000000000000000000..ac94d6d2c31e838924e7a393024ec4aac75c227a
--- /dev/null
+++ b/docs/train/visualdl.md
@@ -0,0 +1,26 @@
+# VisualDL可视化训练指标
+在使用PaddleX训练模型的过程中,各项训练指标和评估指标会直接输出到标准输出流;同时也可通过VisualDL对训练过程中的指标进行可视化,只需在调用`train`函数时,将`use_vdl`参数设为`True`即可,如下代码所示:
+```
+model = paddlex.cls.ResNet50(num_classes=1000)
+model.train(num_epochs=120, train_dataset=train_dataset,
+ train_batch_size=32, eval_dataset=eval_dataset,
+ log_interval_steps=10, save_interval_epochs=10,
+ save_dir='./output', use_vdl=True)
+```
+
+模型在训练过程中,会在`save_dir`下生成`vdl_log`目录,通过在命令行终端执行以下命令,启动VisualDL。
+```
+visualdl --logdir=output/vdl_log --port=8008
+```
+在浏览器打开`http://0.0.0.0:8008`便可直接查看随训练迭代动态变化的各个指标(其中0.0.0.0需替换为启动VisualDL所在服务器的IP,本机访问时直接使用0.0.0.0即可)。
+
+在训练分类模型过程中,使用VisualDL进行可视化的示例图如下所示。
+
+> 训练过程中每个Step的`Loss`和相应`Top1准确率`变化趋势:
+![](../images/vdl1.jpg)
+
+> 训练过程中每个Step的`学习率lr`和相应`Top5准确率`变化趋势:
+![](../images/vdl2.jpg)
+
+> 训练过程中,每次保存模型时,模型在验证数据集上的`Top1准确率`和`Top5准确率`:
+![](../images/vdl3.jpg)
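上述新增文档中VisualDL曲线所展示的`Top1准确率`与`Top5准确率`,其含义可用如下纯Python代码示意(仅为指标定义的示意,并非PaddleX内部实现):

```python
def top_k_accuracy(logits, labels, k=1):
    """统计真实类别落在预测得分最高的前k个类别中的样本比例"""
    correct = 0
    for scores, label in zip(logits, labels):
        # 按得分从高到低取前k个类别下标
        topk = sorted(range(len(scores)),
                      key=lambda i: scores[i], reverse=True)[:k]
        if label in topk:
            correct += 1
    return correct / float(len(labels))

# 两个样本、三个类别的小例子
logits = [[0.1, 0.7, 0.2],
          [0.5, 0.2, 0.3]]
labels = [1, 2]
print(top_k_accuracy(logits, labels, k=1))  # 0.5
print(top_k_accuracy(logits, labels, k=2))  # 1.0
```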
diff --git a/examples/meter_reader/README.md b/examples/meter_reader/README.md
index f8c8388f395bbf64e7111e873f4e269702b3c6eb..43d18eec2bbcac0195e8beef2a377141f1aa29b8 100644
--- a/examples/meter_reader/README.md
+++ b/examples/meter_reader/README.md
@@ -148,8 +148,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
| use_gpu | 是否使用 GPU 预测, 支持值为0或1(默认值为0)|
| gpu_id | GPU 设备ID, 默认值为0 |
| save_dir | 保存可视化结果的路径, 默认值为"output"|
- | det_key | 检测模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的检测模型 |
- | seg_key | 分割模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的分割模型 |
| seg_batch_size | 分割的批量大小,默认为2 |
| thread_num | 分割预测的线程数,默认为cpu处理器个数 |
| use_camera | 是否使用摄像头采集图片,支持值为0或1(默认值为0) |
@@ -163,13 +161,20 @@ git clone https://github.com/PaddlePaddle/PaddleX
用于部署推理的模型应为inference格式,本案例提供的预训练模型均为inference格式,如若是重新训练的模型,需参考[导出inference模型](https://paddlex.readthedocs.io/zh_CN/latest/tutorials/deploy/deploy_server/deploy_python.html#inference)将模型导出为inference格式。
* 使用未加密的模型对单张图片做预测
-
```shell
.\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\det_inference_model --seg_model_dir=\path\to\seg_inference_model --image=\path\to\meter_test\20190822_168.jpg --use_gpu=1 --use_erode=1 --save_dir=output
```
* 使用未加密的模型对图像列表做预测
+  图像列表image_list.txt内容的格式如下。由于各用户存放图像的绝对路径不同,本案例未提供该文件,用户可根据实际情况自行生成:
+ ```
+ \path\to\images\1.jpg
+ \path\to\images\2.jpg
+ ...
+ \path\to\images\n.jpg
+ ```
+
```shell
.\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\det_inference_model --seg_model_dir=\path\to\seg_inference_model --image_list=\path\to\meter_test\image_list.txt --use_gpu=1 --use_erode=1 --save_dir=output
```
@@ -180,14 +185,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
.\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\det_inference_model --seg_model_dir=\path\to\seg_inference_model --use_camera=1 --use_gpu=1 --use_erode=1 --save_dir=output
```
- * 使用加密后的模型对单张图片做预测
-
- 如果未对模型进行加密,请参考[加密PaddleX模型](../../docs/deploy/server/encryption.md#13-加密paddlex模型)对模型进行加密。例如加密后的检测模型所在目录为`\path\to\encrypted_det_inference_model`,密钥为`yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0=`;加密后的分割模型所在目录为`\path\to\encrypted_seg_inference_model`,密钥为`DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=`
-
- ```shell
- .\paddlex_inference\meter_reader.exe --det_model_dir=\path\to\encrypted_det_inference_model --seg_model_dir=\path\to\encrypted_seg_inference_model --image=\path\to\test.jpg --use_gpu=1 --use_erode=1 --save_dir=output --det_key yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0= --seg_key DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=
- ```
-
### Linux系统的jetson嵌入式设备安全部署
#### c++部署
@@ -213,8 +210,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
| use_gpu | 是否使用 GPU 预测, 支持值为0或1(默认值为0)|
| gpu_id | GPU 设备ID, 默认值为0 |
| save_dir | 保存可视化结果的路径, 默认值为"output"|
- | det_key | 检测模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的检测模型 |
- | seg_key | 分割模型加密过程中产生的密钥信息,默认值为""表示加载的是未加密的分割模型 |
| seg_batch_size | 分割的批量大小,默认为2 |
| thread_num | 分割预测的线程数,默认为cpu处理器个数 |
| use_camera | 是否使用摄像头采集图片,支持值为0或1(默认值为0) |
@@ -234,6 +229,13 @@ git clone https://github.com/PaddlePaddle/PaddleX
```
* 使用未加密的模型对图像列表做预测
+  图像列表image_list.txt内容的格式如下。由于各用户存放图像的绝对路径不同,本案例未提供该文件,用户可根据实际情况自行生成:
+ ```
+  /path/to/images/1.jpg
+  /path/to/images/2.jpg
+  ...
+  /path/to/images/n.jpg
+ ```
```shell
./build/meter_reader/meter_reader --det_model_dir=/path/to/det_inference_model --seg_model_dir=/path/to/seg_inference_model --image_list=/path/to/image_list.txt --use_gpu=1 --use_erode=1 --save_dir=output
@@ -245,15 +247,6 @@ git clone https://github.com/PaddlePaddle/PaddleX
./build/meter_reader/meter_reader --det_model_dir=/path/to/det_inference_model --seg_model_dir=/path/to/seg_inference_model --use_camera=1 --use_gpu=1 --use_erode=1 --save_dir=output
```
- * 使用加密后的模型对单张图片做预测
-
- 如果未对模型进行加密,请参考[加密PaddleX模型](../../docs/deploy/server/encryption.md#13-加密paddlex模型)对模型进行加密。例如加密后的检测模型所在目录为`/path/to/encrypted_det_inference_model`,密钥为`yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0=`;加密后的分割模型所在目录为`/path/to/encrypted_seg_inference_model`,密钥为`DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=`
-
- ```shell
- ./build/meter_reader/meter_reader --det_model_dir=/path/to/encrypted_det_inference_model --seg_model_dir=/path/to/encrypted_seg_inference_model --image=/path/to/test.jpg --use_gpu=1 --use_erode=1 --save_dir=output --det_key yEBLDiBOdlj+5EsNNrABhfDuQGkdcreYcHcncqwdbx0= --seg_key DbVS64I9pFRo5XmQ8MNV2kSGsfEr4FKA6OH9OUhRrsY=
- ```
-
-
##
模型训练
diff --git a/paddlex/cv/datasets/coco.py b/paddlex/cv/datasets/coco.py
index 264b2da1e6a6aa9e15bf8a2ae9b3fbdc3ee75f1b..3bafb057323b78dc27ca33835b93173a7e7e458b 100644
--- a/paddlex/cv/datasets/coco.py
+++ b/paddlex/cv/datasets/coco.py
@@ -15,6 +15,8 @@
from __future__ import absolute_import
import copy
import os.path as osp
+import six
+import sys
import random
import numpy as np
import paddlex.utils.logging as logging
@@ -48,6 +50,12 @@ class CocoDetection(VOCDetection):
shuffle=False):
from pycocotools.coco import COCO
+ try:
+ import shapely.ops
+ from shapely.geometry import Polygon, MultiPolygon, GeometryCollection
+        except ImportError:
+ six.reraise(*sys.exc_info())
+
super(VOCDetection, self).__init__(
transforms=transforms,
num_workers=num_workers,
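上面coco.py中新增的try/except用于在缺少shapely依赖时尽早报错。该模式的一个Python 3示意版本如下(`check_optional_dependency`为举例取的名字,并非PaddleX实际API),在重新抛出异常的同时附上安装提示:

```python
def check_optional_dependency(name, install_hint=None):
    """尝试导入可选依赖,缺失时抛出带安装提示的ImportError"""
    try:
        return __import__(name)
    except ImportError as e:
        hint = install_hint or "pip install {}".format(name)
        # 保留原始异常链,同时给出更明确的提示信息
        raise ImportError(
            "缺少可选依赖 {},请先安装: {}".format(name, hint)) from e

# 用法示意:CocoDetection初始化时可先检查shapely
# check_optional_dependency("shapely")
```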
diff --git a/paddlex/deploy.py b/paddlex/deploy.py
index c5d114a230f83241df743166ad51bb04ad71f499..dd4e1f2a34d354300aed45c75e2cd1279b819ee0 100644
--- a/paddlex/deploy.py
+++ b/paddlex/deploy.py
@@ -28,6 +28,7 @@ class Predictor:
use_gpu=True,
gpu_id=0,
use_mkl=False,
+ mkl_thread_num=4,
use_trt=False,
use_glog=False,
memory_optimize=True):
@@ -38,6 +39,7 @@ class Predictor:
use_gpu: 是否使用gpu,默认True
gpu_id: 使用gpu的id,默认0
use_mkl: 是否使用mkldnn计算库,CPU情况下使用,默认False
+ mkl_thread_num: mkldnn计算线程数,默认为4
use_trt: 是否使用TensorRT,默认False
use_glog: 是否启用glog日志, 默认False
memory_optimize: 是否启动内存优化,默认True
@@ -72,13 +74,15 @@ class Predictor:
to_rgb = False
self.transforms = build_transforms(self.model_type,
self.info['Transforms'], to_rgb)
- self.predictor = self.create_predictor(
- use_gpu, gpu_id, use_mkl, use_trt, use_glog, memory_optimize)
+ self.predictor = self.create_predictor(use_gpu, gpu_id, use_mkl,
+ mkl_thread_num, use_trt,
+ use_glog, memory_optimize)
def create_predictor(self,
use_gpu=True,
gpu_id=0,
use_mkl=False,
+ mkl_thread_num=4,
use_trt=False,
use_glog=False,
memory_optimize=True):
@@ -93,6 +97,7 @@ class Predictor:
config.disable_gpu()
if use_mkl:
config.enable_mkldnn()
+ config.set_cpu_math_library_num_threads(mkl_thread_num)
if use_glog:
config.enable_glog_info()
else:
diff --git a/tutorials/train/README.md b/tutorials/train/README.md
index 1900143bceb3435da8ffa04a7fed7b0205e04477..56e55001f28cbd7c12f59b8bae6a9527d8eb754d 100644
--- a/tutorials/train/README.md
+++ b/tutorials/train/README.md
@@ -4,15 +4,29 @@
|代码 | 模型任务 | 数据 |
|------|--------|---------|
-|classification/mobilenetv2.py | 图像分类MobileNetV2 | 蔬菜分类 |
-|classification/resnet50.py | 图像分类ResNet50 | 蔬菜分类 |
-|detection/faster_rcnn_r50_fpn.py | 目标检测FasterRCNN | 昆虫检测 |
-|detection/mask_rcnn_f50_fpn.py | 实例分割MaskRCNN | 垃圾分拣 |
-|segmentation/deeplabv3p.py | 语义分割DeepLabV3| 视盘分割 |
-|segmentation/unet.py | 语义分割UNet | 视盘分割 |
+|image_classification/alexnet.py | 图像分类AlexNet | 蔬菜分类 |
+|image_classification/mobilenetv2.py | 图像分类MobileNetV2 | 蔬菜分类 |
+|image_classification/mobilenetv3_small_ssld.py | 图像分类MobileNetV3_small_ssld | 蔬菜分类 |
+|image_classification/resnet50_vd_ssld.py | 图像分类ResNet50_vd_ssld | 蔬菜分类 |
+|image_classification/shufflenetv2.py | 图像分类ShuffleNetV2 | 蔬菜分类 |
+|object_detection/faster_rcnn_hrnet_fpn.py | 目标检测FasterRCNN | 昆虫检测 |
+|object_detection/faster_rcnn_r18_fpn.py | 目标检测FasterRCNN | 昆虫检测 |
+|object_detection/faster_rcnn_r50_fpn.py | 目标检测FasterRCNN | 昆虫检测 |
+|object_detection/yolov3_darknet53.py | 目标检测YOLOv3 | 昆虫检测 |
+|object_detection/yolov3_mobilenetv1.py | 目标检测YOLOv3 | 昆虫检测 |
+|object_detection/yolov3_mobilenetv3.py | 目标检测YOLOv3 | 昆虫检测 |
+|instance_segmentation/mask_rcnn_hrnet_fpn.py | 实例分割MaskRCNN | 小度熊分拣 |
+|instance_segmentation/mask_rcnn_r18_fpn.py | 实例分割MaskRCNN | 小度熊分拣 |
+|instance_segmentation/mask_rcnn_r50_fpn.py | 实例分割MaskRCNN | 小度熊分拣 |
+|semantic_segmentation/deeplabv3p_mobilenetv2.py | 语义分割DeepLabV3 | 视盘分割 |
+|semantic_segmentation/deeplabv3p_mobilenetv2_x0.25.py | 语义分割DeepLabV3 | 视盘分割 |
+|semantic_segmentation/deeplabv3p_xception65.py | 语义分割DeepLabV3 | 视盘分割 |
+|semantic_segmentation/fast_scnn.py | 语义分割FastSCNN | 视盘分割 |
+|semantic_segmentation/hrnet.py | 语义分割HRNet | 视盘分割 |
+|semantic_segmentation/unet.py | 语义分割UNet | 视盘分割 |
## 开始训练
在安装PaddleX后,使用如下命令开始训练
```
-python classification/mobilenetv2.py
+python image_classification/mobilenetv2.py
```
diff --git a/tutorials/train/image_classification/README.md b/tutorials/train/image_classification/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..4343d34cd93823b6b2c6d4a6b56446cf428f42f0
--- /dev/null
+++ b/tutorials/train/image_classification/README.md
@@ -0,0 +1,20 @@
+# 图像分类训练示例
+
+本目录下为图像分类示例代码,用户在安装完PaddlePaddle和PaddleX后,即可直接进行训练。
+
+- [PaddlePaddle安装](https://www.paddlepaddle.org.cn/install/quick)
+- [PaddleX安装](https://paddlex.readthedocs.io/zh_CN/develop/install.html)
+
+## 模型训练
+如下所示,直接下载代码后运行即可,代码会自动下载训练数据
+```
+python mobilenetv3_small_ssld.py
+```
+
+## VisualDL可视化训练指标
+在模型训练过程中,若在调用`train`函数时将`use_vdl`设为True,训练过程会自动将训练日志以VisualDL的格式记录在`save_dir`(用户自己指定的路径)下的`vdl_log`目录中,用户可以使用如下命令启动VisualDL服务,查看可视化指标
+```
+visualdl --logdir output/mobilenetv3_small_ssld/vdl_log --port 8001
+```
+
+服务启动后,使用浏览器打开 http://0.0.0.0:8001 或 http://localhost:8001
diff --git a/tutorials/train/image_classification/alexnet.py b/tutorials/train/image_classification/alexnet.py
index bec066962abd8955f6021c8d578e6543eefa0a70..7eb76b94697c7a19127abdc9362ff27abf48e36d 100644
--- a/tutorials/train/image_classification/alexnet.py
+++ b/tutorials/train/image_classification/alexnet.py
@@ -13,14 +13,12 @@ pdx.utils.download_and_decompress(veg_dataset, path='./')
# 定义训练和验证时的transforms
# API说明https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomCrop(crop_size=224),
- transforms.RandomHorizontalFlip(),
+ transforms.RandomCrop(crop_size=224), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
transforms.ResizeByShort(short_size=256),
- transforms.CenterCrop(crop_size=224),
- transforms.Normalize()
+ transforms.CenterCrop(crop_size=224), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.ImageNet(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mobilenetv2/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001或https://localhost:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
model = pdx.cls.AlexNet(num_classes=len(train_dataset.labels))
# AlexNet需要指定确定的input_shape
model.fixed_input_shape = [224, 224]
diff --git a/tutorials/train/image_classification/mobilenetv2.py b/tutorials/train/image_classification/mobilenetv2.py
index 7533aab7bc0fc2498d17fd1bd554f595253c05b8..940c3c499a58c3079d8542375cc14c23c46d70ab 100644
--- a/tutorials/train/image_classification/mobilenetv2.py
+++ b/tutorials/train/image_classification/mobilenetv2.py
@@ -13,14 +13,12 @@ pdx.utils.download_and_decompress(veg_dataset, path='./')
# 定义训练和验证时的transforms
# API说明https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomCrop(crop_size=224),
- transforms.RandomHorizontalFlip(),
+ transforms.RandomCrop(crop_size=224), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
transforms.ResizeByShort(short_size=256),
- transforms.CenterCrop(crop_size=224),
- transforms.Normalize()
+ transforms.CenterCrop(crop_size=224), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.ImageNet(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mobilenetv2/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
model = pdx.cls.MobileNetV2(num_classes=len(train_dataset.labels))
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
diff --git a/tutorials/train/image_classification/mobilenetv3_small_ssld.py b/tutorials/train/image_classification/mobilenetv3_small_ssld.py
index 8f13312d835b582ec673635f11b4c3fff1c95dda..7c3fb7ffcdc43517de6a7437529d5106c83fb435 100644
--- a/tutorials/train/image_classification/mobilenetv3_small_ssld.py
+++ b/tutorials/train/image_classification/mobilenetv3_small_ssld.py
@@ -13,14 +13,12 @@ pdx.utils.download_and_decompress(veg_dataset, path='./')
# 定义训练和验证时的transforms
# API说明https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomCrop(crop_size=224),
- transforms.RandomHorizontalFlip(),
+ transforms.RandomCrop(crop_size=224), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
transforms.ResizeByShort(short_size=256),
- transforms.CenterCrop(crop_size=224),
- transforms.Normalize()
+ transforms.CenterCrop(crop_size=224), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.ImageNet(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mobilenetv2/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
model = pdx.cls.MobileNetV3_small_ssld(num_classes=len(train_dataset.labels))
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/datasets.html#paddlex-datasets-imagenet
diff --git a/tutorials/train/image_classification/resnet50_vd_ssld.py b/tutorials/train/image_classification/resnet50_vd_ssld.py
index b72ebc52d74f6a0023b830c33f5afc31fb4b7196..547e65fcc922c8576243dffdd07f9bfa65364687 100644
--- a/tutorials/train/image_classification/resnet50_vd_ssld.py
+++ b/tutorials/train/image_classification/resnet50_vd_ssld.py
@@ -13,14 +13,12 @@ pdx.utils.download_and_decompress(veg_dataset, path='./')
# 定义训练和验证时的transforms
# API说明https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomCrop(crop_size=224),
- transforms.RandomHorizontalFlip(),
+ transforms.RandomCrop(crop_size=224), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
transforms.ResizeByShort(short_size=256),
- transforms.CenterCrop(crop_size=224),
- transforms.Normalize()
+ transforms.CenterCrop(crop_size=224), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.ImageNet(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mobilenetv2/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
model = pdx.cls.ResNet50_vd_ssld(num_classes=len(train_dataset.labels))
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
diff --git a/tutorials/train/image_classification/shufflenetv2.py b/tutorials/train/image_classification/shufflenetv2.py
index cdfa1889ba926f4728277929b76536ddaea75c04..23c338b071706ef3a139f4807b3e7d0500e8d1c4 100644
--- a/tutorials/train/image_classification/shufflenetv2.py
+++ b/tutorials/train/image_classification/shufflenetv2.py
@@ -13,14 +13,12 @@ pdx.utils.download_and_decompress(veg_dataset, path='./')
# 定义训练和验证时的transforms
# API说明https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/cls_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomCrop(crop_size=224),
- transforms.RandomHorizontalFlip(),
+ transforms.RandomCrop(crop_size=224), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
transforms.ResizeByShort(short_size=256),
- transforms.CenterCrop(crop_size=224),
- transforms.Normalize()
+ transforms.CenterCrop(crop_size=224), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.ImageNet(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mobilenetv2/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
model = pdx.cls.ShuffleNetV2(num_classes=len(train_dataset.labels))
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/classification.html#train
diff --git a/tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py b/tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py
index f78446546cd793f96cb074f0a1701a718f7d84b4..6450d6fd1efa4e71049ccf04e88e5a45b0e8a0b3 100644
--- a/tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py
+++ b/tutorials/train/instance_segmentation/mask_rcnn_hrnet_fpn.py
@@ -13,15 +13,15 @@ pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.RandomHorizontalFlip(), transforms.Normalize(),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333),
transforms.Padding(coarsest_stride=32),
])
@@ -38,10 +38,7 @@ eval_dataset = pdx.datasets.CocoDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mask_rcnn_r50_fpn/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
# num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
num_classes = len(train_dataset.labels) + 1
diff --git a/tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py b/tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py
index dc16b66b3941e0d639fd45dbaa691ec51bc5cfbd..d4f9bd640e50329457908a5be7d40529785be7e5 100644
--- a/tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py
+++ b/tutorials/train/instance_segmentation/mask_rcnn_r18_fpn.py
@@ -13,16 +13,14 @@ pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.RandomHorizontalFlip(), transforms.Normalize(),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.Normalize(), transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.CocoDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mask_rcnn_r50_fpn/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
# num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
num_classes = len(train_dataset.labels) + 1
diff --git a/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py b/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py
index e87c88e5d8feba36df1bd65430058a4f413ba73c..9a93ec35c0178693dbbde5dc564246e443f55fb3 100644
--- a/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py
+++ b/tutorials/train/instance_segmentation/mask_rcnn_r50_fpn.py
@@ -13,16 +13,14 @@ pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.RandomHorizontalFlip(), transforms.Normalize(),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.Normalize(), transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
# 定义训练和验证所用的数据集
@@ -38,10 +36,7 @@ eval_dataset = pdx.datasets.CocoDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/mask_rcnn_r50_fpn/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
# num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
num_classes = len(train_dataset.labels) + 1
diff --git a/tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py b/tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py
index e46d3ae56b57aa90cdcecdcce3ad3ee1ad67d098..c948d16b40d14ab723cd3b8fa0dce472c3f49118 100644
--- a/tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py
+++ b/tutorials/train/object_detection/faster_rcnn_hrnet_fpn.py
@@ -13,16 +13,14 @@ pdx.utils.download_and_decompress(insect_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.RandomHorizontalFlip(), transforms.Normalize(),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.Normalize(), transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
# 定义训练和验证所用的数据集
@@ -40,10 +38,7 @@ eval_dataset = pdx.datasets.VOCDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/faster_rcnn_r50_fpn/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
# num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
num_classes = len(train_dataset.labels) + 1
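As the comment above states, the R-CNN family counts the background as a class, so `num_classes` is the number of target categories plus one; the YOLOv3 scripts later in this patch use `len(train_dataset.labels)` without the `+ 1`. A small illustration (the `labels` list is a hypothetical stand-in for `train_dataset.labels`):

```python
# Hypothetical foreground class names, standing in for train_dataset.labels
labels = ["xiaoduxiong"]

# Two-stage detectors (Faster/Mask R-CNN) reserve an index for background:
num_classes_rcnn = len(labels) + 1
# YOLOv3 has no explicit background class:
num_classes_yolo = len(labels)

print(num_classes_rcnn, num_classes_yolo)  # 2 1
```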
diff --git a/tutorials/train/object_detection/faster_rcnn_r18_fpn.py b/tutorials/train/object_detection/faster_rcnn_r18_fpn.py
index 0ae82d3ec8166159649f09d33b3f2ad094c3c6ee..46679f22018b330b3e44eb668ee4c890a7af13fb 100644
--- a/tutorials/train/object_detection/faster_rcnn_r18_fpn.py
+++ b/tutorials/train/object_detection/faster_rcnn_r18_fpn.py
@@ -13,15 +13,15 @@ pdx.utils.download_and_decompress(insect_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.RandomHorizontalFlip(), transforms.Normalize(),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333),
transforms.Padding(coarsest_stride=32),
])
@@ -40,10 +40,7 @@ eval_dataset = pdx.datasets.VOCDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/faster_rcnn_r50_fpn/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
# num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
num_classes = len(train_dataset.labels) + 1
diff --git a/tutorials/train/object_detection/faster_rcnn_r50_fpn.py b/tutorials/train/object_detection/faster_rcnn_r50_fpn.py
index 0f26bfa9a5c571419c5b4b2f6e553f383d011399..fde705bfbb0b1732a4146222851b790098619fcf 100644
--- a/tutorials/train/object_detection/faster_rcnn_r50_fpn.py
+++ b/tutorials/train/object_detection/faster_rcnn_r50_fpn.py
@@ -13,15 +13,15 @@ pdx.utils.download_and_decompress(insect_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
- transforms.Padding(coarsest_stride=32)
+ transforms.RandomHorizontalFlip(), transforms.Normalize(),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333), transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
transforms.Normalize(),
- transforms.ResizeByShort(short_size=800, max_size=1333),
+ transforms.ResizeByShort(
+ short_size=800, max_size=1333),
transforms.Padding(coarsest_stride=32),
])
@@ -40,10 +40,7 @@ eval_dataset = pdx.datasets.VOCDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/faster_rcnn_r50_fpn/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
# num_classes 需要设置为包含背景类的类别数,即: 目标类别数量 + 1
num_classes = len(train_dataset.labels) + 1
diff --git a/tutorials/train/object_detection/yolov3_darknet53.py b/tutorials/train/object_detection/yolov3_darknet53.py
index 085be4bf7ffa3f9eca31f3b2807d83f00544b455..7e5b0b07dbdddf7859528556819700d785ad2845 100644
--- a/tutorials/train/object_detection/yolov3_darknet53.py
+++ b/tutorials/train/object_detection/yolov3_darknet53.py
@@ -13,18 +13,15 @@ pdx.utils.download_and_decompress(insect_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.MixupImage(mixup_epoch=250),
- transforms.RandomDistort(),
- transforms.RandomExpand(),
- transforms.RandomCrop(),
- transforms.Resize(target_size=608, interp='RANDOM'),
- transforms.RandomHorizontalFlip(),
+ transforms.MixupImage(mixup_epoch=250), transforms.RandomDistort(),
+ transforms.RandomExpand(), transforms.RandomCrop(), transforms.Resize(
+ target_size=608, interp='RANDOM'), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
- transforms.Resize(target_size=608, interp='CUBIC'),
- transforms.Normalize()
+ transforms.Resize(
+ target_size=608, interp='CUBIC'), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -42,10 +39,7 @@ eval_dataset = pdx.datasets.VOCDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/yolov3_darknet/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
# API说明: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-yolov3
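The YOLOv3 script above trains with `Resize(target_size=608, interp='RANDOM')` but evaluates with a fixed `interp='CUBIC'`. A simplified stand-in for what `'RANDOM'` plausibly does (the mode list and function are assumptions for illustration, not the PaddleX implementation): pick one interpolation method per image at train time as a light augmentation, while keeping evaluation deterministic.

```python
import random

# Common OpenCV-style interpolation modes; the exact set PaddleX samples
# from is an assumption here.
INTERP_MODES = ['NEAREST', 'LINEAR', 'CUBIC', 'AREA', 'LANCZOS4']

def choose_interp(interp='RANDOM', rng=random.Random(0)):
    """Return a concrete interpolation mode; 'RANDOM' samples one per call."""
    return rng.choice(INTERP_MODES) if interp == 'RANDOM' else interp

print(choose_interp())           # varies per call: training-time augmentation
print(choose_interp('CUBIC'))    # eval path: always 'CUBIC'
```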
diff --git a/tutorials/train/object_detection/yolov3_mobilenetv1.py b/tutorials/train/object_detection/yolov3_mobilenetv1.py
index bfc2bea0716c1bc0b7c27cb8014d6215eed8306c..e565ce0714b67669afcbeb827c45cee9d38370b4 100644
--- a/tutorials/train/object_detection/yolov3_mobilenetv1.py
+++ b/tutorials/train/object_detection/yolov3_mobilenetv1.py
@@ -17,13 +17,15 @@ train_transforms = transforms.Compose([
transforms.RandomDistort(),
transforms.RandomExpand(),
transforms.RandomCrop(),
- transforms.Resize(target_size=608, interp='RANDOM'),
+ transforms.Resize(
+ target_size=608, interp='RANDOM'),
transforms.RandomHorizontalFlip(),
transforms.Normalize(),
])
eval_transforms = transforms.Compose([
- transforms.Resize(target_size=608, interp='CUBIC'),
+ transforms.Resize(
+ target_size=608, interp='CUBIC'),
transforms.Normalize(),
])
@@ -42,10 +44,7 @@ eval_dataset = pdx.datasets.VOCDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/yolov3_darknet/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
# API说明: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-yolov3
diff --git a/tutorials/train/object_detection/yolov3_mobilenetv3.py b/tutorials/train/object_detection/yolov3_mobilenetv3.py
index 85570781851665a9ab28a718ecf85a0b078508a3..a80f34899ca1e8b6fb42a790b4782543880ae992 100644
--- a/tutorials/train/object_detection/yolov3_mobilenetv3.py
+++ b/tutorials/train/object_detection/yolov3_mobilenetv3.py
@@ -13,18 +13,15 @@ pdx.utils.download_and_decompress(insect_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/det_transforms.html
train_transforms = transforms.Compose([
- transforms.MixupImage(mixup_epoch=250),
- transforms.RandomDistort(),
- transforms.RandomExpand(),
- transforms.RandomCrop(),
- transforms.Resize(target_size=608, interp='RANDOM'),
- transforms.RandomHorizontalFlip(),
+ transforms.MixupImage(mixup_epoch=250), transforms.RandomDistort(),
+ transforms.RandomExpand(), transforms.RandomCrop(), transforms.Resize(
+ target_size=608, interp='RANDOM'), transforms.RandomHorizontalFlip(),
transforms.Normalize()
])
eval_transforms = transforms.Compose([
- transforms.Resize(target_size=608, interp='CUBIC'),
- transforms.Normalize()
+ transforms.Resize(
+ target_size=608, interp='CUBIC'), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -42,10 +39,7 @@ eval_dataset = pdx.datasets.VOCDetection(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/yolov3_darknet/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
# API说明: https://paddlex.readthedocs.io/zh_CN/develop/apis/models/detection.html#paddlex-det-yolov3
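`transforms.MixupImage(mixup_epoch=250)` in the YOLOv3 pipelines applies mixup augmentation for the first 250 epochs: each image is blended with another using a Beta-sampled factor, and the ground-truth annotations are mixed with the same weight. A simplified numpy sketch of the blending step (parameter defaults and the function itself are illustrative assumptions, not the PaddleX internals):

```python
import numpy as np

def mixup(img1, img2, alpha=1.5, rng=np.random.default_rng(0)):
    """Blend two images with a Beta(alpha, alpha)-sampled factor.

    A simplified stand-in for transforms.MixupImage; the real transform
    also merges the two images' boxes/labels with the same factor.
    """
    factor = rng.beta(alpha, alpha)
    h = min(img1.shape[0], img2.shape[0])
    w = min(img1.shape[1], img2.shape[1])
    out = img1[:h, :w] * factor + img2[:h, :w] * (1.0 - factor)
    return out, factor

a = np.full((4, 4, 3), 255.0)   # all-white image
b = np.zeros((4, 4, 3))         # all-black image
mixed, f = mixup(a, b)
assert np.allclose(mixed, 255.0 * f)   # blend is a convex combination
```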
diff --git a/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py b/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py
index fc5b738a0641604f28fd83a47b795313c13bcd39..ea7891ac8d607d3954cbf39614da13d17137dabe 100644
--- a/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py
+++ b/tutorials/train/semantic_segmentation/deeplabv3p_mobilenetv2.py
@@ -13,16 +13,13 @@ pdx.utils.download_and_decompress(optic_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.ResizeRangeScaling(),
- transforms.RandomPaddingCrop(crop_size=512),
- transforms.Normalize()
+ transforms.RandomHorizontalFlip(), transforms.ResizeRangeScaling(),
+ transforms.RandomPaddingCrop(crop_size=512), transforms.Normalize()
])
eval_transforms = transforms.Compose([
- transforms.ResizeByLong(long_size=512),
- transforms.Padding(target_size=512),
- transforms.Normalize()
+ transforms.ResizeByLong(long_size=512),
+ transforms.Padding(target_size=512), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -40,15 +37,12 @@ eval_dataset = pdx.datasets.SegDataset(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/deeplab/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-deeplabv3p
-model = pdx.seg.DeepLabv3p(num_classes=num_classes, backbone='MobileNetV2_x1.0')
-
+model = pdx.seg.DeepLabv3p(
+ num_classes=num_classes, backbone='MobileNetV2_x1.0')
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#train
# 各参数介绍与调整说明:https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
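The segmentation eval pipelines above combine `ResizeByLong(long_size=512)` with `Padding(target_size=512)`: the long side is scaled to 512 while preserving aspect ratio, and padding then fills the short side out to a fixed 512x512 input. The size arithmetic, as a pure-Python sketch (hypothetical helper names, not the PaddleX API):

```python
def resize_by_long(h, w, long_size=512):
    """Scale so the long side equals long_size, keeping aspect ratio."""
    scale = long_size / max(h, w)
    return round(h * scale), round(w * scale)

def pad_to(h, w, target=512):
    """Pad (bottom/right) up to target x target; the real op pads labels too."""
    return max(h, target), max(w, target)

h, w = resize_by_long(1024, 768)
print((h, w))        # (512, 384): long side 1024 -> 512, short side follows
print(pad_to(h, w))  # (512, 512): short side padded out to the target
```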
diff --git a/tutorials/train/semantic_segmentation/fast_scnn.py b/tutorials/train/semantic_segmentation/fast_scnn.py
index 38fa51a7ab6242795dfd16c322d004b733e62a74..bb1de91df483e7f13da1681f21b8c468c9a09244 100644
--- a/tutorials/train/semantic_segmentation/fast_scnn.py
+++ b/tutorials/train/semantic_segmentation/fast_scnn.py
@@ -13,16 +13,13 @@ pdx.utils.download_and_decompress(optic_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.ResizeRangeScaling(),
- transforms.RandomPaddingCrop(crop_size=512),
- transforms.Normalize()
+ transforms.RandomHorizontalFlip(), transforms.ResizeRangeScaling(),
+ transforms.RandomPaddingCrop(crop_size=512), transforms.Normalize()
])
eval_transforms = transforms.Compose([
- transforms.ResizeByLong(long_size=512),
- transforms.Padding(target_size=512),
- transforms.Normalize()
+ transforms.ResizeByLong(long_size=512),
+ transforms.Padding(target_size=512), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -40,13 +37,8 @@ eval_dataset = pdx.datasets.SegDataset(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/unet/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
-
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
-
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-fastscnn
model = pdx.seg.FastSCNN(num_classes=num_classes)
diff --git a/tutorials/train/semantic_segmentation/hrnet.py b/tutorials/train/semantic_segmentation/hrnet.py
index 9526e99b352eee73ca3ee4d308ec9fe36250f7d1..91514ea0218dfd7830bdce75ab2987509b62b0ce 100644
--- a/tutorials/train/semantic_segmentation/hrnet.py
+++ b/tutorials/train/semantic_segmentation/hrnet.py
@@ -13,16 +13,13 @@ pdx.utils.download_and_decompress(optic_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.ResizeRangeScaling(),
- transforms.RandomPaddingCrop(crop_size=512),
- transforms.Normalize()
+ transforms.RandomHorizontalFlip(), transforms.ResizeRangeScaling(),
+ transforms.RandomPaddingCrop(crop_size=512), transforms.Normalize()
])
eval_transforms = transforms.Compose([
- transforms.ResizeByLong(long_size=512),
- transforms.Padding(target_size=512),
- transforms.Normalize()
+ transforms.ResizeByLong(long_size=512),
+ transforms.Padding(target_size=512), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -40,10 +37,7 @@ eval_dataset = pdx.datasets.SegDataset(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/unet/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-hrnet
diff --git a/tutorials/train/semantic_segmentation/unet.py b/tutorials/train/semantic_segmentation/unet.py
index c0ba72666d4b386667cc747077916eaf251675a9..81d346988cf634c2e07e981f48d2b610bf44d81d 100644
--- a/tutorials/train/semantic_segmentation/unet.py
+++ b/tutorials/train/semantic_segmentation/unet.py
@@ -13,15 +13,13 @@ pdx.utils.download_and_decompress(optic_dataset, path='./')
# 定义训练和验证时的transforms
# API说明 https://paddlex.readthedocs.io/zh_CN/develop/apis/transforms/seg_transforms.html
train_transforms = transforms.Compose([
- transforms.RandomHorizontalFlip(),
- transforms.ResizeRangeScaling(),
- transforms.RandomPaddingCrop(crop_size=512),
- transforms.Normalize()
+ transforms.RandomHorizontalFlip(), transforms.ResizeRangeScaling(),
+ transforms.RandomPaddingCrop(crop_size=512), transforms.Normalize()
])
eval_transforms = transforms.Compose([
- transforms.ResizeByLong(long_size=512), transforms.Padding(target_size=512),
- transforms.Normalize()
+ transforms.ResizeByLong(long_size=512),
+ transforms.Padding(target_size=512), transforms.Normalize()
])
# 定义训练和验证所用的数据集
@@ -39,10 +37,7 @@ eval_dataset = pdx.datasets.SegDataset(
transforms=eval_transforms)
# 初始化模型,并进行训练
-# 可使用VisualDL查看训练指标
-# VisualDL启动方式: visualdl --logdir output/unet/vdl_log --port 8001
-# 浏览器打开 https://0.0.0.0:8001即可
-# 其中0.0.0.0为本机访问,如为远程服务, 改成相应机器IP
+# 可使用VisualDL查看训练指标,参考https://paddlex.readthedocs.io/zh_CN/develop/train/visualdl.html
num_classes = len(train_dataset.labels)
# API说明:https://paddlex.readthedocs.io/zh_CN/develop/apis/models/semantic_segmentation.html#paddlex-seg-deeplabv3p