Unverified commit 4a2d8927 authored by: J Jason, committed by: GitHub

Merge pull request #14 from FlyingQianMM/develop_qh

change ins_seg demo datasets
......@@ -16,21 +16,21 @@ paddlex.load_model(model_dir)
* **paddlex.cv.models**, the model class.
### Example
> 1. [Click to download](https://bj.bcebos.com/paddlex/models/garbage_epoch_12.tar.gz) the MaskRCNN model trained by PaddleX on the garbage sorting data
> 2. [Click to download](https://bj.bcebos.com/paddlex/datasets/garbage_ins_det.tar.gz) the garbage sorting dataset
> 1. [Click to download](https://bj.bcebos.com/paddlex/models/xiaoduxiong_epoch_12.tar.gz) the MaskRCNN model trained by PaddleX on the Xiaoduxiong sorting data
> 2. [Click to download](https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_det.tar.gz) the Xiaoduxiong sorting dataset
```
import paddlex as pdx
model_dir = './garbage_epoch_12'
data_dir = './garbage_ins_det/JPEGImages'
ann_file = './garbage_ins_det/val.json'
model_dir = './xiaoduxiong_epoch_12'
data_dir = './xiaoduxiong_ins_det/JPEGImages'
ann_file = './xiaoduxiong_ins_det/val.json'
# Load the garbage sorting model
model = pdx.load_model(model_dir)
# Run prediction
pred_result = model.predict('./garbage_ins_det/JPEGImages/000114.bmp')
pred_result = model.predict('./xiaoduxiong_ins_det/JPEGImages/WechatIMG114.jpeg')
# Evaluate on the validation set
eval_reader = pdx.cv.datasets.CocoDetection(data_dir=data_dir,
......
......@@ -14,13 +14,13 @@ paddlex.det.visualize(image, result, threshold=0.5, save_dir=None)
> * **save_dir** (str): path for saving the visualization result. If None, the result is not saved and the function returns the visualization as an np.ndarray; if set to a directory path, the visualization result is saved to that directory.
### Usage Example
> Click to download the [model](https://bj.bcebos.com/paddlex/models/garbage_epoch_12.tar.gz) and [test image](https://bj.bcebos.com/paddlex/datasets/garbage.bmp) used in the example below
> Click to download the [model](https://bj.bcebos.com/paddlex/models/xiaoduxiong_epoch_12.tar.gz) and [test image](https://bj.bcebos.com/paddlex/datasets/xiaoduxiong.jpeg) used in the example below
```
import paddlex as pdx
model = pdx.load_model('garbage_epoch_12')
result = model.predict('garbage.bmp')
pdx.det.visualize('garbage.bmp', result, save_dir='./')
# The prediction result is saved to ./visualize_garbage.bmp
model = pdx.load_model('xiaoduxiong_epoch_12')
result = model.predict('xiaoduxiong.jpeg')
pdx.det.visualize('xiaoduxiong.jpeg', result, save_dir='./')
# The prediction result is saved to ./visualize_xiaoduxiong.jpeg
```
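> As a complement, the save_dir=None behavior described above (the visualization is returned as an np.ndarray instead of being written to disk) can be exercised with a minimal sketch. The use of cv2 and the output filename below are assumptions for illustration, not part of the original example:
```python
import cv2  # assumed available (OpenCV) for writing the returned array to disk
import paddlex as pdx

model = pdx.load_model('xiaoduxiong_epoch_12')
result = model.predict('xiaoduxiong.jpeg')
# With save_dir=None, visualize() returns the rendered image as an np.ndarray
vis = pdx.det.visualize('xiaoduxiong.jpeg', result, threshold=0.5, save_dir=None)
cv2.imwrite('my_visualization.jpeg', vis)  # hypothetical output path
```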
## Visualizing Semantic Segmentation Prediction Results
......
......@@ -2,7 +2,7 @@
------
For more training code for detection models on the VOC or COCO dataset, refer to [tutorials/train/detection/faster_rcnn_r50_fpn.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/faster_rcnn_r50_fpn.py) and [tutorials/train/detection/yolov3_mobilenetv1.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/yolov3_mobilenetv1.py)
For more training code for detection models on the VOC or COCO dataset, refer to [tutorials/train/detection/faster_rcnn_r50_fpn.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/faster_rcnn_r50_fpn.py) and [tutorials/train/detection/yolov3_darknet53.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/yolov3_darknet53.py)
**1. Download and decompress the dataset required for training**
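> The body of this step is collapsed in this diff view; below is a minimal sketch of what it does. The dataset URL is an assumption inferred from the naming pattern of the other dataset archives on this page, not a line taken from the tutorial:
```python
import paddlex as pdx

# Assumed URL for the insect detection dataset used in this tutorial
insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
pdx.utils.download_and_decompress(insect_dataset, path='./')
```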
......@@ -116,4 +116,4 @@ predict_result = model.predict('./insect_det/JPEGImages/1968.jpg')
pdx.det.visualize('./insect_det/JPEGImages/1968.jpg', predict_result, threshold=0.5, save_dir='./output/faster_rcnn_r50_fpn')
```
![](../images/visualized_fasterrcnn.jpg)
![](../../images/visualized_fasterrcnn.jpg)
......@@ -14,11 +14,11 @@ os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import paddlex as pdx
```
> The garbage sorting dataset is used here; the training, validation, and test sets contain 283 samples in total, with 6 classes.
> The Xiaoduxiong sorting dataset is used here; the training, validation, and test sets contain 21 samples in total, with 1 class.
```python
garbage_dataset = 'https://bj.bcebos.com/paddlex/datasets/garbage_ins_det.tar.gz'
pdx.utils.download_and_decompress(garbage_dataset, path='./')
xiaoduxiong_dataset = 'https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_det.tar.gz'
pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
```
**2. Define the data processing and augmentation operations used during training and validation**
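> The transforms definitions themselves are collapsed in this diff view; the sketch below shows typical Mask RCNN transforms. The specific operators and parameter values are assumptions for illustration and may differ from the full tutorial script:
```python
from paddlex.det import transforms

# Assumed operators/parameters; see the full tutorial script for the actual values
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.Normalize(),
    transforms.ResizeByShort(short_size=800, max_size=1333),
    transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
    transforms.Normalize(),
    transforms.ResizeByShort(short_size=800, max_size=1333),
    transforms.Padding(coarsest_stride=32)
])
```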
......@@ -47,19 +47,19 @@ eval_transforms = transforms.Compose([
```python
train_dataset = pdx.datasets.CocoDetection(
data_dir='garbage_ins_det/JPEGImages',
ann_file='garbage_ins_det/train.json',
data_dir='xiaoduxiong_ins_det/JPEGImages',
ann_file='xiaoduxiong_ins_det/train.json',
transforms=train_transforms,
shuffle=True)
eval_dataset = pdx.datasets.CocoDetection(
data_dir='garbage_ins_det/JPEGImages',
ann_file='garbage_ins_det/val.json',
data_dir='xiaoduxiong_ins_det/JPEGImages',
ann_file='xiaoduxiong_ins_det/val.json',
transforms=eval_transforms)
```
**4. Create the Mask RCNN model and train it**
> Create a Mask RCNN model with an FPN structure. `num_classes` must be set to the number of classes including the background class, i.e., number of target classes (6) + 1.
> Create a Mask RCNN model with an FPN structure. `num_classes` must be set to the number of classes including the background class, i.e., number of target classes (1) + 1.
```python
num_classes = len(train_dataset.labels)
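# The model-creation line is collapsed in this diff view; the call below is a hedged
# sketch, not the exact line from the tutorial. PaddleX's MaskRCNN defaults to an
# FPN-style structure, matching the description above.
model = pdx.det.MaskRCNN(num_classes=num_classes)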
......@@ -75,6 +75,7 @@ model.train(
train_batch_size=1,
eval_dataset=eval_dataset,
learning_rate=0.00125,
warmup_steps=10,
lr_decay_epochs=[8, 11],
save_dir='output/mask_rcnn_r50_fpn',
use_vdl=True)
......@@ -98,19 +99,19 @@ print("eval_metrics:", eval_metrics)
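> The evaluation call itself is collapsed in this diff view; a minimal sketch of it follows, where batch_size=1 is an assumption rather than the tutorial's exact value:
```python
# Assumed call producing the eval_metrics printed below
eval_metrics = model.evaluate(eval_dataset, batch_size=1)
print("eval_metrics:", eval_metrics)
```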
> Output:
```python
eval_metrics: {'bbox_mmap': 0.858306, 'segm_mmap': 0.864278}
eval_metrics: OrderedDict([('bbox_mmap', 0.5038283828382838), ('segm_mmap', 0.7025202520252025)])
```
> After training, test the model on an image.
```python
predict_result = model.predict('./garbage_ins_det/JPEGImages/000114.bmp')
predict_result = model.predict('./xiaoduxiong_ins_det/JPEGImages/WechatIMG114.jpeg')
```
> Visualize the test result:
```python
pdx.det.visualize('./garbage_ins_det/JPEGImages/000114.bmp', predict_result, threshold=0.7, save_dir='./output/mask_rcnn_r50_fpn')
pdx.det.visualize('./xiaoduxiong_ins_det/JPEGImages/WechatIMG114.jpeg', predict_result, threshold=0.7, save_dir='./output/mask_rcnn_r50_fpn')
```
![](../../images/visualized_maskrcnn.bmp)
![](../../images/visualized_maskrcnn.jpeg)
......@@ -5,9 +5,9 @@ os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from paddlex.det import transforms
import paddlex as pdx
# Download and decompress the garbage sorting dataset
garbage_dataset = 'https://bj.bcebos.com/paddlex/datasets/garbage_ins_det.tar.gz'
pdx.utils.download_and_decompress(garbage_dataset, path='./')
# Download and decompress the Xiaoduxiong sorting dataset
xiaoduxiong_dataset = 'https://bj.bcebos.com/paddlex/datasets/xiaoduxiong_ins_det.tar.gz'
pdx.utils.download_and_decompress(xiaoduxiong_dataset, path='./')
# Define the transforms for training and validation
train_transforms = transforms.Compose([
......@@ -25,13 +25,13 @@ eval_transforms = transforms.Compose([
# Define the datasets used for training and validation
train_dataset = pdx.datasets.CocoDetection(
data_dir='garbage_ins_det/JPEGImages',
ann_file='garbage_ins_det/train.json',
data_dir='xiaoduxiong_ins_det/JPEGImages',
ann_file='xiaoduxiong_ins_det/train.json',
transforms=train_transforms,
shuffle=True)
eval_dataset = pdx.datasets.CocoDetection(
data_dir='garbage_ins_det/JPEGImages',
ann_file='garbage_ins_det/val.json',
data_dir='xiaoduxiong_ins_det/JPEGImages',
ann_file='xiaoduxiong_ins_det/val.json',
transforms=eval_transforms)
# Initialize the model and train it
......@@ -48,6 +48,7 @@ model.train(
train_batch_size=1,
eval_dataset=eval_dataset,
learning_rate=0.00125,
warmup_steps=10,
lr_decay_epochs=[8, 11],
save_dir='output/mask_rcnn_r50_fpn',
use_vdl=True)
......@@ -44,7 +44,7 @@ eval_dataset = pdx.datasets.VOCDetection(
# Open https://0.0.0.0:8001 in a browser to view it
# 0.0.0.0 means local access; for a remote service, replace it with the corresponding machine's IP
num_classes = len(train_dataset.labels)
model = pdx.det.YOLOv3(num_classes=num_classes)
model = pdx.det.YOLOv3(num_classes=num_classes, backbone='DarkNet53')
model.train(
num_epochs=270,
train_dataset=train_dataset,
......@@ -52,5 +52,5 @@ model.train(
eval_dataset=eval_dataset,
learning_rate=0.000125,
lr_decay_epochs=[210, 240],
save_dir='output/yolov3_mobilenetv1',
save_dir='output/yolov3_darknet53',
use_vdl=True)