Unverified commit 6a69dd8b, authored by zhengya01, committed by GitHub

Merge pull request #13 from PaddlePaddle/develop

update
paddle/operators/check_t.save
paddle/operators/check_tensor.ls
paddle/operators/tensor.save
python/paddle/v2/fluid/tests/book/image_classification_resnet.inference.model/
python/paddle/v2/fluid/tests/book/image_classification_vgg.inference.model/
python/paddle/v2/fluid/tests/book/label_semantic_roles.inference.model/
*.DS_Store
*.vs
build/
build_doc/
*.user
.vscode
.idea
.project
.cproject
.pydevproject
.settings/
*.pyc
CMakeSettings.json
Makefile
.test_env/
third_party/
*~
bazel-*
build_*
# clion workspace.
cmake-build-*
model_test
@@ -26,7 +26,8 @@ PaddlePaddle provides a rich set of computational units, so that users can build models in a modular way
[SE-ResNeXt](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/image_classification/models)|Image classification model|Adds SE blocks to ResNeXt, improving model accuracy|[Squeeze-and-excitation networks](https://arxiv.org/abs/1709.01507)
[SSD](https://github.com/PaddlePaddle/models/blob/develop/fluid/PaddleCV/object_detection/README_cn.md)|Single-stage object detector|Detects objects of matching scales on feature maps of different scales, and can be conveniently plugged into any standard convolutional network|[SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325)
[Face Detector: PyramidBox](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/face_detection/README_cn.md)|Single-stage SSD-based face detector|Exploits contextual information to detect hard faces, with strong representational power and robustness|[PyramidBox: A Context-assisted Single Shot Face Detector](https://arxiv.org/pdf/1803.07737.pdf)
[Faster RCNN](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/rcnn/README_cn.md)|Classic two-stage object detector|Creatively generates region proposals with a convolutional network that is shared with the detection network, reducing the number of proposals while raising their quality|[Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks](https://arxiv.org/abs/1506.01497)
[Mask RCNN](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/rcnn/README_cn.md)|Classic instance segmentation model based on Faster RCNN|Adds a segmentation branch on top of Faster RCNN to produce mask outputs, decoupling mask prediction from class prediction|[Mask R-CNN](https://arxiv.org/abs/1703.06870)
[ICNet](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/icnet)|Real-time semantic segmentation model|Considers both speed and accuracy, balancing accuracy on high-resolution images against the efficiency of a low-complexity network|[ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545)
[DCGAN](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/gan/c_gan)|Image generation model|Deep convolutional generative adversarial network, combining GANs with convolutional networks to mitigate unstable GAN training|[Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/pdf/1511.06434.pdf)
[ConditionalGAN](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/gan/c_gan)|Image generation model|Conditional generative adversarial network: a GAN with conditional constraints that injects extra information to steer the data generation process|[Conditional Generative Adversarial Nets](https://arxiv.org/abs/1411.1784)
# LRC: Local Rademacher Complexity Regularization
Regularizing deep neural networks (DNNs) to improve their generalization ability is important and challenging. This directory contains an image classification model regularized with a novel penalty rooted in Local Rademacher Complexity (LRC). We appreciate the contribution of [DARTS](https://arxiv.org/abs/1806.09055) to our research. In this model, LRC regularization is combined with DARTS and evaluated on the CIFAR-10 dataset. Code accompanying the paper
> [An Empirical Study on Regularization of Deep Neural Networks by Local Rademacher Complexity](https://arxiv.org/abs/1902.00873)\
> Yingzhen Yang, Xingjian Li, Jun Huan.\
> _arXiv:1902.00873_.
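
For orientation, here is a sketch of the penalty that `lrc_loss` in `model.py` computes, as we read the code (not a formula quoted from the paper). With batch size $B$, $K$ classes, logits $f_c(x_i)$, label $y_i$, and Rademacher variables $\sigma^{(t)}_{i,c} \in \{-1, +1\}$ redrawn over $T$ rounds:

$$L_{\mathrm{LRC}} = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{B(K-1)}\left|\sum_{i=1}^{B}\sum_{c\neq y_i}\sigma^{(t)}_{i,c}\,\bigl(f_c(x_i)-f_{y_i}(x_i)\bigr)\right|$$

The training objective is then `mixup_loss + lrc_loss_lambda * L_LRC`.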
---
# Table of Contents
- [Installation](#installation)
- [Data preparation](#data-preparation)
- [Training](#training)
## Installation
Running the sample code in this directory requires PaddlePaddle Fluid v1.2.0 or later. If the PaddlePaddle version on your device is lower than this, please follow the instructions in the [installation document](http://www.paddlepaddle.org/documentation/docs/zh/1.2/beginners_guide/install/index_cn.html#paddlepaddle) and update it.
## Data preparation
When using the CIFAR-10 dataset for the first time, you can download it with:
sh ./dataset/download.sh
Please make sure your environment has an internet connection.
The dataset will be downloaded to `dataset/cifar/cifar-10-batches-py` in the same directory as `train.py`. If the automatic download fails, you can download cifar-10-python.tar.gz from https://www.cs.toronto.edu/~kriz/cifar.html and decompress it to the location mentioned above.
## Training
After data preparation, you can start training with:
python -u train_mixup.py \
--batch_size=80 \
--auxiliary \
--weight_decay=0.0003 \
--learning_rate=0.025 \
--lrc_loss_lambda=0.7 \
--cutout
- Set ```export CUDA_VISIBLE_DEVICES=0``` to specify one GPU for training.
- For more help on arguments:
python train_mixup.py --help
**Data reader introduction:**
* The data reader is defined in `reader.py`.
* Input images are reshaped to 32 × 32.
* During training, images are padded to 40 × 40 and randomly cropped back to the original size.
* During training, images are randomly flipped horizontally.
* Pixel values are scaled to [0, 1] and standardized with the CIFAR mean and standard deviation.
* During training, cutout is applied to images at random positions.
* The order of the input images is shuffled during training.
**Model configuration:**
* Use auxiliary loss and auxiliary\_weight=0.4.
* Use dropout and drop\_path\_prob=0.2.
* Set lrc\_loss\_lambda=0.7.
**Training strategy:**
* Use momentum optimizer with momentum=0.9.
* Weight decay is 0.0003.
* Use cosine decay with init\_lr=0.025.
* Train for 600 epochs in total.
* Use the Xavier initializer for conv2d weights, the Constant initializer for batch norm weights, and the Normal initializer for fc weights.
* Initialize batch norm and fc biases to a zero constant, and do not add a bias to conv2d.
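
For concreteness, a minimal sketch (ours) of how the strategy above is wired into a Fluid optimizer; `cosine_decay` is the helper defined in `learning_rate.py`, and `steps_one_epoch` and `loss` are assumed to be defined as in `train_mixup.py`:

    import paddle.fluid as fluid
    from learning_rate import cosine_decay

    # Momentum + cosine learning rate decay + L2 weight decay, as listed above.
    optimizer = fluid.optimizer.Momentum(
        learning_rate=cosine_decay(0.025, 600, steps_one_epoch),
        momentum=0.9,
        regularization=fluid.regularizer.L2Decay(0.0003))
    optimizer.minimize(loss)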
## Reference
- DARTS: Differentiable Architecture Search [`paper`](https://arxiv.org/abs/1806.09055)
- Differentiable architecture search in PyTorch [`code`](https://github.com/quark0/darts)
# LRC: Local Rademacher Complexity Regularization
Choosing a regularizer that improves the generalization ability of deep neural networks is important and challenging. This directory contains an image classification model with a novel regularizer based on Local Rademacher Complexity (LRC). We are grateful to [DARTS](https://arxiv.org/abs/1806.09055) for its help with this research. The model combines the LRC regularizer with the DARTS network and achieves excellent results on the CIFAR-10 dataset. The code is released together with the paper
> [An Empirical Study on Regularization of Deep Neural Networks by Local Rademacher Complexity](https://arxiv.org/abs/1902.00873)\
> Yingzhen Yang, Xingjian Li, Jun Huan.\
> _arXiv:1902.00873_.
---
# Contents
- [Installation](#installation)
- [Data preparation](#data-preparation)
- [Training](#training)
## Installation
Running the sample code in this directory requires PaddlePaddle Fluid v1.2.0 or later. If the PaddlePaddle version in your environment is lower than this, please update it following the instructions in the [installation document](http://www.paddlepaddle.org/documentation/docs/zh/1.2/beginners_guide/install/index_cn.html#paddlepaddle).
## Data preparation
When using the CIFAR-10 dataset for the first time, you can download it with the following command:
sh ./dataset/download.sh
Please make sure your environment has an internet connection. The data will be downloaded to `dataset/cifar/cifar-10-batches-py` in the same directory as `train.py`. If the download fails, you can download cifar-10-python.tar.gz from https://www.cs.toronto.edu/~kriz/cifar.html yourself and decompress it to the location above.
## Training
Once the data is ready, you can start training with the following command:
python -u train_mixup.py \
--batch_size=80 \
--auxiliary \
--weight_decay=0.0003 \
--learning_rate=0.025 \
--lrc_loss_lambda=0.7 \
--cutout
- Set ```export CUDA_VISIBLE_DEVICES=0``` to train on a single GPU.
- For the available arguments, see:
python train_mixup.py --help
**Data reader notes:**
* The data reader is defined in `reader.py`.
* Input images are uniformly resized to 32 × 32.
* During training, images are padded to 40 × 40 and then randomly cropped back to the original input size.
* During training, images are randomly flipped horizontally.
* Every pixel of each image is normalized.
* During training, random cutout is applied to the images.
* The order of the input images is shuffled during training.
**Model configuration:**
* Use the auxiliary loss, with an auxiliary loss weight of 0.4.
* Use dropout with a drop rate of 0.2.
* Set lrc\_loss\_lambda to 0.7.
**Training strategy:**
* Train with the momentum optimizer, momentum=0.9.
* The weight decay coefficient is 0.0003.
* Use cosine learning rate decay with an initial learning rate of 0.025.
* Train for 600 epochs in total.
* Convolution weights use Xavier initialization, batch norm weights use constant initialization, and fully connected weights use Gaussian initialization.
* Batch norm and fully connected biases are initialized to a zero constant, and no bias is added to the convolutions.
## Reference
- DARTS: Differentiable Architecture Search [`paper`](https://arxiv.org/abs/1806.09055)
- Differentiable architecture search in PyTorch [`code`](https://github.com/quark0/darts)
DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd "$DIR"
mkdir cifar
cd cifar
# Download the data.
echo "Downloading..."
wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
# Extract the data.
echo "Extracting..."
tar zvxf cifar-10-python.tar.gz
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
from collections import namedtuple
Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')
PRIMITIVES = [
'none', 'max_pool_3x3', 'avg_pool_3x3', 'skip_connect', 'sep_conv_3x3',
'sep_conv_5x5', 'dil_conv_3x3', 'dil_conv_5x5'
]
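# Encoding note (ours): a Genotype lists (op_name, input_index) pairs, two per
# intermediate node of a cell. Indices 0 and 1 refer to the outputs of the two
# previous cells, and an index k >= 2 refers to intermediate node k - 2. The
# *_concat fields name the nodes whose outputs are concatenated to form the
# cell output (see Cell._compile and Cell.forward in model.py).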
NASNet = Genotype(
normal=[
('sep_conv_5x5', 1),
('sep_conv_3x3', 0),
('sep_conv_5x5', 0),
('sep_conv_3x3', 0),
('avg_pool_3x3', 1),
('skip_connect', 0),
('avg_pool_3x3', 0),
('avg_pool_3x3', 0),
('sep_conv_3x3', 1),
('skip_connect', 1),
],
normal_concat=[2, 3, 4, 5, 6],
reduce=[
('sep_conv_5x5', 1),
('sep_conv_7x7', 0),
('max_pool_3x3', 1),
('sep_conv_7x7', 0),
('avg_pool_3x3', 1),
('sep_conv_5x5', 0),
('skip_connect', 3),
('avg_pool_3x3', 2),
('sep_conv_3x3', 2),
('max_pool_3x3', 1),
],
reduce_concat=[4, 5, 6], )
AmoebaNet = Genotype(
normal=[
('avg_pool_3x3', 0),
('max_pool_3x3', 1),
('sep_conv_3x3', 0),
('sep_conv_5x5', 2),
('sep_conv_3x3', 0),
('avg_pool_3x3', 3),
('sep_conv_3x3', 1),
('skip_connect', 1),
('skip_connect', 0),
('avg_pool_3x3', 1),
],
normal_concat=[4, 5, 6],
reduce=[
('avg_pool_3x3', 0),
('sep_conv_3x3', 1),
('max_pool_3x3', 0),
('sep_conv_7x7', 2),
('sep_conv_7x7', 0),
('avg_pool_3x3', 1),
('max_pool_3x3', 0),
('max_pool_3x3', 1),
('conv_7x1_1x7', 0),
('sep_conv_3x3', 5),
],
reduce_concat=[3, 4, 6])
DARTS_V1 = Genotype(
normal=[('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 0),
('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1),
('sep_conv_3x3', 0), ('skip_connect', 2)],
normal_concat=[2, 3, 4, 5],
reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2),
('max_pool_3x3', 0), ('max_pool_3x3', 0), ('skip_connect', 2),
('skip_connect', 2), ('avg_pool_3x3', 0)],
reduce_concat=[2, 3, 4, 5])
DARTS_V2 = Genotype(
normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0),
('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('skip_connect', 0),
('skip_connect', 0), ('dil_conv_3x3', 2)],
normal_concat=[2, 3, 4, 5],
reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2),
('max_pool_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2),
('skip_connect', 2), ('max_pool_3x3', 1)],
reduce_concat=[2, 3, 4, 5])
MY_DARTS = Genotype(
normal=[('sep_conv_3x3', 0), ('skip_connect', 1), ('skip_connect', 0),
('dil_conv_5x5', 1), ('skip_connect', 0), ('sep_conv_3x3', 1),
('skip_connect', 0), ('sep_conv_3x3', 1)],
normal_concat=range(2, 6),
reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 0),
('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2),
('skip_connect', 2), ('skip_connect', 3)],
reduce_concat=range(2, 6))
DARTS = MY_DARTS
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers.ops as ops
from paddle.fluid.layers.learning_rate_scheduler import _decay_step_counter
import math
from paddle.fluid.initializer import init_on_cpu
def cosine_decay(learning_rate, num_epoch, steps_one_epoch):
"""Applies cosine decay to the learning rate.
lr = 0.5 * (math.cos(epoch * (math.pi / 120)) + 1)
"""
global_step = _decay_step_counter()
with init_on_cpu():
decayed_lr = learning_rate * \
(ops.cos((global_step / steps_one_epoch) \
* math.pi / num_epoch) + 1)/2
return decayed_lr
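# Illustrative schedule values (ours): with learning_rate=0.025 and
# num_epoch=600, the decayed rate is 0.025 at step 0 (cos(0) = 1), 0.0125
# after 300 epochs (cos(pi/2) = 0), and approaches 0.0 near epoch 600.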
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import numpy as np
import time
import functools
import paddle
import paddle.fluid as fluid
from operations import *
class Cell():
def __init__(self, genotype, C_prev_prev, C_prev, C, reduction,
reduction_prev):
print(C_prev_prev, C_prev, C)
if reduction_prev:
self.preprocess0 = functools.partial(FactorizedReduce, C_out=C)
else:
self.preprocess0 = functools.partial(
ReLUConvBN, C_out=C, kernel_size=1, stride=1, padding=0)
self.preprocess1 = functools.partial(
ReLUConvBN, C_out=C, kernel_size=1, stride=1, padding=0)
if reduction:
op_names, indices = zip(*genotype.reduce)
concat = genotype.reduce_concat
else:
op_names, indices = zip(*genotype.normal)
concat = genotype.normal_concat
print(op_names, indices, concat, reduction)
self._compile(C, op_names, indices, concat, reduction)
def _compile(self, C, op_names, indices, concat, reduction):
assert len(op_names) == len(indices)
self._steps = len(op_names) // 2
self._concat = concat
self.multiplier = len(concat)
self._ops = []
for name, index in zip(op_names, indices):
stride = 2 if reduction and index < 2 else 1
op = functools.partial(OPS[name], C=C, stride=stride, affine=True)
self._ops += [op]
self._indices = indices
def forward(self, s0, s1, drop_prob, is_train, name):
self.training = is_train
preprocess0_name = name + 'preprocess0.'
preprocess1_name = name + 'preprocess1.'
s0 = self.preprocess0(s0, name=preprocess0_name)
s1 = self.preprocess1(s1, name=preprocess1_name)
out = [s0, s1]
for i in range(self._steps):
h1 = out[self._indices[2 * i]]
h2 = out[self._indices[2 * i + 1]]
op1 = self._ops[2 * i]
op2 = self._ops[2 * i + 1]
h3 = op1(h1, name=name + '_ops.' + str(2 * i) + '.')
h4 = op2(h2, name=name + '_ops.' + str(2 * i + 1) + '.')
if self.training and drop_prob > 0.:
if h3 != h1:
h3 = fluid.layers.dropout(
h3,
drop_prob,
dropout_implementation='upscale_in_train')
if h4 != h2:
h4 = fluid.layers.dropout(
h4,
drop_prob,
dropout_implementation='upscale_in_train')
s = h3 + h4
out += [s]
return fluid.layers.concat([out[i] for i in self._concat], axis=1)
def AuxiliaryHeadCIFAR(input, num_classes, aux_name='auxiliary_head'):
relu_a = fluid.layers.relu(input)
pool_a = fluid.layers.pool2d(relu_a, 5, 'avg', 3)
conv2d_a = fluid.layers.conv2d(
pool_a,
128,
1,
name=aux_name + '.features.2',
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=aux_name + '.features.2.weight'),
bias_attr=False)
bn_a_name = aux_name + '.features.3'
bn_a = fluid.layers.batch_norm(
conv2d_a,
act='relu',
name=bn_a_name,
param_attr=ParamAttr(
initializer=Constant(1.), name=bn_a_name + '.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=bn_a_name + '.bias'),
moving_mean_name=bn_a_name + '.running_mean',
moving_variance_name=bn_a_name + '.running_var')
conv2d_b = fluid.layers.conv2d(
bn_a,
768,
2,
name=aux_name + '.features.5',
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=aux_name + '.features.5.weight'),
bias_attr=False)
bn_b_name = aux_name + '.features.6'
bn_b = fluid.layers.batch_norm(
conv2d_b,
act='relu',
name=bn_b_name,
param_attr=ParamAttr(
initializer=Constant(1.), name=bn_b_name + '.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=bn_b_name + '.bias'),
moving_mean_name=bn_b_name + '.running_mean',
moving_variance_name=bn_b_name + '.running_var')
fc_name = aux_name + '.classifier'
fc = fluid.layers.fc(bn_b,
num_classes,
name=fc_name,
param_attr=ParamAttr(
initializer=Normal(scale=1e-3),
name=fc_name + '.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=fc_name + '.bias'))
return fc
def StemConv(input, C_out, kernel_size, padding):
conv_a = fluid.layers.conv2d(
input,
C_out,
kernel_size,
padding=padding,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0), name='stem.0.weight'),
bias_attr=False)
bn_a = fluid.layers.batch_norm(
conv_a,
param_attr=ParamAttr(
initializer=Constant(1.), name='stem.1.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name='stem.1.bias'),
moving_mean_name='stem.1.running_mean',
moving_variance_name='stem.1.running_var')
return bn_a
class NetworkCIFAR(object):
def __init__(self, C, class_num, layers, auxiliary, genotype):
self.class_num = class_num
self._layers = layers
self._auxiliary = auxiliary
stem_multiplier = 3
self.drop_path_prob = 0
C_curr = stem_multiplier * C
C_prev_prev, C_prev, C_curr = C_curr, C_curr, C
self.cells = []
reduction_prev = False
for i in range(layers):
if i in [layers // 3, 2 * layers // 3]:
C_curr *= 2
reduction = True
else:
reduction = False
cell = Cell(genotype, C_prev_prev, C_prev, C_curr, reduction,
reduction_prev)
reduction_prev = reduction
self.cells += [cell]
C_prev_prev, C_prev = C_prev, cell.multiplier * C_curr
if i == 2 * layers // 3:
C_to_auxiliary = C_prev
def forward(self, init_channel, is_train):
self.training = is_train
self.logits_aux = None
num_channel = init_channel * 3
s0 = StemConv(self.image, num_channel, kernel_size=3, padding=1)
s1 = s0
for i, cell in enumerate(self.cells):
name = 'cells.' + str(i) + '.'
s0, s1 = s1, cell.forward(s0, s1, self.drop_path_prob, is_train,
name)
if i == int(2 * self._layers // 3):
if self._auxiliary and self.training:
self.logits_aux = AuxiliaryHeadCIFAR(s1, self.class_num)
out = fluid.layers.adaptive_pool2d(s1, (1, 1), "avg")
self.logits = fluid.layers.fc(out,
size=self.class_num,
param_attr=ParamAttr(
initializer=Normal(scale=1e-3),
name='classifier.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
name='classifier.bias'))
return self.logits, self.logits_aux
def build_input(self, image_shape, batch_size, is_train):
if is_train:
py_reader = fluid.layers.py_reader(
capacity=64,
shapes=[[-1] + image_shape, [-1, 1], [-1, 1], [-1, 1], [-1, 1],
[-1, 1], [-1, batch_size, self.class_num - 1]],
lod_levels=[0, 0, 0, 0, 0, 0, 0],
dtypes=[
"float32", "int64", "int64", "float32", "int32", "int32",
"float32"
],
use_double_buffer=True,
name='train_reader')
else:
py_reader = fluid.layers.py_reader(
capacity=64,
shapes=[[-1] + image_shape, [-1, 1]],
lod_levels=[0, 0],
dtypes=["float32", "int64"],
use_double_buffer=True,
name='test_reader')
return py_reader
def train_model(self, py_reader, init_channels, aux, aux_w, batch_size,
loss_lambda):
self.image, self.ya, self.yb, self.lam, self.label_reshape,\
self.non_label_reshape, self.rad_var = fluid.layers.read_file(py_reader)
self.logits, self.logits_aux = self.forward(init_channels, True)
self.mixup_loss = self.mixup_loss(aux, aux_w)
self.lrc_loss = self.lrc_loss(batch_size)
return self.mixup_loss + loss_lambda * self.lrc_loss
def test_model(self, py_reader, init_channels):
self.image, self.ya = fluid.layers.read_file(py_reader)
self.logits, _ = self.forward(init_channels, False)
prob = fluid.layers.softmax(self.logits, use_cudnn=False)
loss = fluid.layers.cross_entropy(prob, self.ya)
acc_1 = fluid.layers.accuracy(self.logits, self.ya, k=1)
acc_5 = fluid.layers.accuracy(self.logits, self.ya, k=5)
return loss, acc_1, acc_5
def mixup_loss(self, auxiliary, auxiliary_weight):
prob = fluid.layers.softmax(self.logits, use_cudnn=False)
loss_a = fluid.layers.cross_entropy(prob, self.ya)
loss_b = fluid.layers.cross_entropy(prob, self.yb)
loss_a_mean = fluid.layers.reduce_mean(loss_a)
loss_b_mean = fluid.layers.reduce_mean(loss_b)
loss = self.lam * loss_a_mean + (1 - self.lam) * loss_b_mean
if auxiliary:
prob_aux = fluid.layers.softmax(self.logits_aux, use_cudnn=False)
loss_a_aux = fluid.layers.cross_entropy(prob_aux, self.ya)
loss_b_aux = fluid.layers.cross_entropy(prob_aux, self.yb)
loss_a_aux_mean = fluid.layers.reduce_mean(loss_a_aux)
loss_b_aux_mean = fluid.layers.reduce_mean(loss_b_aux)
            loss_aux = self.lam * loss_a_aux_mean \
                + (1 - self.lam) * loss_b_aux_mean
            return loss + auxiliary_weight * loss_aux
        # without the auxiliary head, return the plain mixup loss
        return loss
def lrc_loss(self, batch_size):
y_diff_reshape = fluid.layers.reshape(self.logits, shape=(-1, 1))
label_reshape = fluid.layers.squeeze(self.label_reshape, axes=[1])
non_label_reshape = fluid.layers.squeeze(
self.non_label_reshape, axes=[1])
label_reshape.stop_gradient = True
non_label_reshape.stop_gradient = True
y_diff_label_reshape = fluid.layers.gather(y_diff_reshape,
label_reshape)
y_diff_non_label_reshape = fluid.layers.gather(y_diff_reshape,
non_label_reshape)
y_diff_label = fluid.layers.reshape(
y_diff_label_reshape, shape=(-1, batch_size, 1))
y_diff_non_label = fluid.layers.reshape(
y_diff_non_label_reshape,
shape=(-1, batch_size, self.class_num - 1))
y_diff_ = y_diff_non_label - y_diff_label
y_diff_ = fluid.layers.transpose(y_diff_, perm=[1, 2, 0])
rad_var_trans = fluid.layers.transpose(self.rad_var, perm=[1, 2, 0])
rad_y_diff_trans = rad_var_trans * y_diff_
lrc_loss_sum = fluid.layers.reduce_sum(rad_y_diff_trans, dim=[0, 1])
lrc_loss_ = fluid.layers.abs(lrc_loss_sum) / (batch_size *
(self.class_num - 1))
lrc_loss_mean = fluid.layers.reduce_mean(lrc_loss_)
return lrc_loss_mean
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import numpy as np
import time
import paddle
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddle.fluid.initializer import Xavier
from paddle.fluid.initializer import Normal
from paddle.fluid.initializer import Constant
OPS = {
'none' : lambda input, C, stride, name, affine: Zero(input, stride, name),
'avg_pool_3x3' : lambda input, C, stride, name, affine: fluid.layers.pool2d(input, 3, 'avg', pool_stride=stride, pool_padding=1, name=name),
'max_pool_3x3' : lambda input, C, stride, name, affine: fluid.layers.pool2d(input, 3, 'max', pool_stride=stride, pool_padding=1, name=name),
'skip_connect' : lambda input,C, stride, name, affine: Identity(input, name) if stride == 1 else FactorizedReduce(input, C, name=name, affine=affine),
'sep_conv_3x3' : lambda input,C, stride, name, affine: SepConv(input, C, C, 3, stride, 1, name=name, affine=affine),
'sep_conv_5x5' : lambda input,C, stride, name, affine: SepConv(input, C, C, 5, stride, 2, name=name, affine=affine),
'sep_conv_7x7' : lambda input,C, stride, name, affine: SepConv(input, C, C, 7, stride, 3, name=name, affine=affine),
'dil_conv_3x3' : lambda input,C, stride, name, affine: DilConv(input, C, C, 3, stride, 2, 2, name=name, affine=affine),
'dil_conv_5x5' : lambda input,C, stride, name, affine: DilConv(input, C, C, 5, stride, 4, 2, name=name, affine=affine),
'conv_7x1_1x7' : lambda input, C, stride, name, affine: SevenConv(input, C, stride, name=name, affine=affine)
}
def ReLUConvBN(input, C_out, kernel_size, stride, padding, name='',
affine=True):
relu_a = fluid.layers.relu(input)
conv2d_a = fluid.layers.conv2d(
relu_a,
C_out,
kernel_size,
stride,
padding,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.1.weight'),
bias_attr=False)
if affine:
reluconvbn_out = fluid.layers.batch_norm(
conv2d_a,
param_attr=ParamAttr(
initializer=Constant(1.), name=name + 'op.2.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=name + 'op.2.bias'),
moving_mean_name=name + 'op.2.running_mean',
moving_variance_name=name + 'op.2.running_var')
else:
reluconvbn_out = fluid.layers.batch_norm(
conv2d_a,
param_attr=ParamAttr(
initializer=Constant(1.),
learning_rate=0.,
name=name + 'op.2.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
learning_rate=0.,
name=name + 'op.2.bias'),
moving_mean_name=name + 'op.2.running_mean',
moving_variance_name=name + 'op.2.running_var')
return reluconvbn_out
def DilConv(input,
C_in,
C_out,
kernel_size,
stride,
padding,
dilation,
name='',
affine=True):
relu_a = fluid.layers.relu(input)
conv2d_a = fluid.layers.conv2d(
relu_a,
C_in,
kernel_size,
stride,
padding,
dilation,
groups=C_in,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.1.weight'),
bias_attr=False,
use_cudnn=False)
conv2d_b = fluid.layers.conv2d(
conv2d_a,
C_out,
1,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.2.weight'),
bias_attr=False)
if affine:
dilconv_out = fluid.layers.batch_norm(
conv2d_b,
param_attr=ParamAttr(
initializer=Constant(1.), name=name + 'op.3.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=name + 'op.3.bias'),
moving_mean_name=name + 'op.3.running_mean',
moving_variance_name=name + 'op.3.running_var')
else:
dilconv_out = fluid.layers.batch_norm(
conv2d_b,
param_attr=ParamAttr(
initializer=Constant(1.),
learning_rate=0.,
name=name + 'op.3.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
learning_rate=0.,
name=name + 'op.3.bias'),
moving_mean_name=name + 'op.3.running_mean',
moving_variance_name=name + 'op.3.running_var')
return dilconv_out
def SepConv(input,
C_in,
C_out,
kernel_size,
stride,
padding,
name='',
affine=True):
relu_a = fluid.layers.relu(input)
conv2d_a = fluid.layers.conv2d(
relu_a,
C_in,
kernel_size,
stride,
padding,
groups=C_in,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.1.weight'),
bias_attr=False,
use_cudnn=False)
conv2d_b = fluid.layers.conv2d(
conv2d_a,
C_in,
1,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.2.weight'),
bias_attr=False)
if affine:
bn_a = fluid.layers.batch_norm(
conv2d_b,
param_attr=ParamAttr(
initializer=Constant(1.), name=name + 'op.3.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=name + 'op.3.bias'),
moving_mean_name=name + 'op.3.running_mean',
moving_variance_name=name + 'op.3.running_var')
else:
bn_a = fluid.layers.batch_norm(
conv2d_b,
param_attr=ParamAttr(
initializer=Constant(1.),
learning_rate=0.,
name=name + 'op.3.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
learning_rate=0.,
name=name + 'op.3.bias'),
moving_mean_name=name + 'op.3.running_mean',
moving_variance_name=name + 'op.3.running_var')
relu_b = fluid.layers.relu(bn_a)
conv2d_d = fluid.layers.conv2d(
relu_b,
C_in,
kernel_size,
1,
padding,
groups=C_in,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.5.weight'),
bias_attr=False,
use_cudnn=False)
conv2d_e = fluid.layers.conv2d(
conv2d_d,
C_out,
1,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.6.weight'),
bias_attr=False)
if affine:
sepconv_out = fluid.layers.batch_norm(
conv2d_e,
param_attr=ParamAttr(
initializer=Constant(1.), name=name + 'op.7.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=name + 'op.7.bias'),
moving_mean_name=name + 'op.7.running_mean',
moving_variance_name=name + 'op.7.running_var')
else:
sepconv_out = fluid.layers.batch_norm(
conv2d_e,
param_attr=ParamAttr(
initializer=Constant(1.),
learning_rate=0.,
name=name + 'op.7.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
learning_rate=0.,
name=name + 'op.7.bias'),
moving_mean_name=name + 'op.7.running_mean',
moving_variance_name=name + 'op.7.running_var')
return sepconv_out
def SevenConv(input, C_out, stride, name='', affine=True):
relu_a = fluid.layers.relu(input)
conv2d_a = fluid.layers.conv2d(
relu_a,
C_out, (1, 7), (1, stride), (0, 3),
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.1.weight'),
bias_attr=False)
conv2d_b = fluid.layers.conv2d(
conv2d_a,
C_out, (7, 1), (stride, 1), (3, 0),
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'op.2.weight'),
bias_attr=False)
if affine:
out = fluid.layers.batch_norm(
conv2d_b,
param_attr=ParamAttr(
initializer=Constant(1.), name=name + 'op.3.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=name + 'op.3.bias'),
moving_mean_name=name + 'op.3.running_mean',
moving_variance_name=name + 'op.3.running_var')
else:
out = fluid.layers.batch_norm(
conv2d_b,
param_attr=ParamAttr(
initializer=Constant(1.),
learning_rate=0.,
name=name + 'op.3.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
learning_rate=0.,
name=name + 'op.3.bias'),
moving_mean_name=name + 'op.3.running_mean',
moving_variance_name=name + 'op.3.running_var')
return out
def Identity(input, name=''):
return input
def Zero(input, stride, name=''):
ones = np.ones(input.shape[-2:], dtype='float32')
ones[::stride, ::stride] = 0
ones = fluid.layers.assign(ones)
return input * ones
def FactorizedReduce(input, C_out, name='', affine=True):
relu_a = fluid.layers.relu(input)
conv2d_a = fluid.layers.conv2d(
relu_a,
C_out // 2,
1,
2,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'conv_1.weight'),
bias_attr=False)
h_end = relu_a.shape[2]
w_end = relu_a.shape[3]
slice_a = fluid.layers.slice(relu_a, [2, 3], [1, 1], [h_end, w_end])
conv2d_b = fluid.layers.conv2d(
slice_a,
C_out // 2,
1,
2,
param_attr=ParamAttr(
initializer=Xavier(
uniform=False, fan_in=0),
name=name + 'conv_2.weight'),
bias_attr=False)
out = fluid.layers.concat([conv2d_a, conv2d_b], axis=1)
if affine:
out = fluid.layers.batch_norm(
out,
param_attr=ParamAttr(
initializer=Constant(1.), name=name + 'bn.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.), name=name + 'bn.bias'),
moving_mean_name=name + 'bn.running_mean',
moving_variance_name=name + 'bn.running_var')
else:
out = fluid.layers.batch_norm(
out,
param_attr=ParamAttr(
initializer=Constant(1.),
learning_rate=0.,
name=name + 'bn.weight'),
bias_attr=ParamAttr(
initializer=Constant(0.),
learning_rate=0.,
name=name + 'bn.bias'),
moving_mean_name=name + 'bn.running_mean',
moving_variance_name=name + 'bn.running_var')
return out
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
"""
CIFAR-10 dataset.
This module will download dataset from
https://www.cs.toronto.edu/~kriz/cifar.html and parse train/test set into
paddle reader creators.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes,
with 6000 images per class. There are 50000 training images and 10000 test images.
"""
from PIL import Image
from PIL import ImageOps
import numpy as np
import cPickle
import random
import utils
import paddle.fluid as fluid
import time
import os
import functools
import paddle.reader
__all__ = ['train10', 'test10']
image_size = 32
image_depth = 3
half_length = 8
CIFAR_MEAN = [0.4914, 0.4822, 0.4465]
CIFAR_STD = [0.24703233, 0.24348505, 0.26158768]
def generate_reshape_label(label, batch_size, CIFAR_CLASSES=10):
reshape_label = np.zeros((batch_size, 1), dtype='int32')
reshape_non_label = np.zeros(
(batch_size * (CIFAR_CLASSES - 1), 1), dtype='int32')
num = 0
for i in range(batch_size):
label_i = label[i]
reshape_label[i] = label_i + i * CIFAR_CLASSES
for j in range(CIFAR_CLASSES):
if label_i != j:
reshape_non_label[num] = \
j + i * CIFAR_CLASSES
num += 1
return reshape_label, reshape_non_label
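# Illustrative sanity check (ours, not part of the original file): for
# batch_size=2 and labels [3, 0], flattening a (2, 10) logit matrix into a
# (20, 1) column puts the label logits at rows 3 and 0 + 1 * 10 = 10, and
# the remaining 18 rows are the non-label positions:
#   label = np.array([[3], [0]], dtype='int64')
#   reshape_label, reshape_non_label = generate_reshape_label(label, 2)
#   assert reshape_label.flatten().tolist() == [3, 10]
#   assert reshape_non_label.shape == (18, 1)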
def generate_bernoulli_number(batch_size, CIFAR_CLASSES=10):
rcc_iters = 50
rad_var = np.zeros((rcc_iters, batch_size, CIFAR_CLASSES - 1))
for i in range(rcc_iters):
bernoulli_num = np.random.binomial(size=batch_size, n=1, p=0.5)
bernoulli_map = np.array([])
ones = np.ones((CIFAR_CLASSES - 1, 1))
for batch_id in range(batch_size):
num = bernoulli_num[batch_id]
var_id = 2 * ones * num - 1
bernoulli_map = np.append(bernoulli_map, var_id)
rad_var[i] = bernoulli_map.reshape((batch_size, CIFAR_CLASSES - 1))
return rad_var.astype('float32')
def preprocess(sample, is_training, args):
image_array = sample.reshape(3, image_size, image_size)
rgb_array = np.transpose(image_array, (1, 2, 0))
img = Image.fromarray(rgb_array, 'RGB')
if is_training:
# pad and random crop
img = ImageOps.expand(img, (4, 4, 4, 4), fill=0) # pad to 40 * 40 * 3
left_top = np.random.randint(9, size=2) # rand 0 - 8
img = img.crop((left_top[0], left_top[1], left_top[0] + image_size,
left_top[1] + image_size))
if np.random.randint(2):
img = img.transpose(Image.FLIP_LEFT_RIGHT)
img = np.array(img).astype(np.float32)
# per_image_standardization
img_float = img / 255.0
img = (img_float - CIFAR_MEAN) / CIFAR_STD
if is_training and args.cutout:
center = np.random.randint(image_size, size=2)
offset_width = max(0, center[0] - half_length)
offset_height = max(0, center[1] - half_length)
target_width = min(center[0] + half_length, image_size)
target_height = min(center[1] + half_length, image_size)
for i in range(offset_height, target_height):
for j in range(offset_width, target_width):
img[i][j][:] = 0.0
img = np.transpose(img, (2, 0, 1))
return img
def reader_creator_filepath(filename, sub_name, is_training, args):
files = os.listdir(filename)
names = [each_item for each_item in files if sub_name in each_item]
names.sort()
datasets = []
for name in names:
print("Reading file " + name)
batch = cPickle.load(open(filename + name, 'rb'))
data = batch['data']
labels = batch.get('labels', batch.get('fine_labels', None))
assert labels is not None
dataset = zip(data, labels)
datasets.extend(dataset)
random.shuffle(datasets)
def read_batch(datasets, args):
for sample, label in datasets:
im = preprocess(sample, is_training, args)
yield im, [int(label)]
def reader():
batch_data = []
batch_label = []
for data, label in read_batch(datasets, args):
batch_data.append(data)
batch_label.append(label)
if len(batch_data) == args.batch_size:
batch_data = np.array(batch_data, dtype='float32')
batch_label = np.array(batch_label, dtype='int64')
if is_training:
flatten_label, flatten_non_label = \
generate_reshape_label(batch_label, args.batch_size)
rad_var = generate_bernoulli_number(args.batch_size)
mixed_x, y_a, y_b, lam = utils.mixup_data(
batch_data, batch_label, args.batch_size,
args.mix_alpha)
batch_out = [[mixed_x, y_a, y_b, lam, flatten_label, \
flatten_non_label, rad_var]]
yield batch_out
else:
batch_out = [[batch_data, batch_label]]
yield batch_out
batch_data = []
batch_label = []
return reader
def train10(args):
"""
CIFAR-10 training set creator.
It returns a reader creator, each sample in the reader is image pixels in
[0, 1] and label in [0, 9].
:return: Training reader creator
:rtype: callable
"""
return reader_creator_filepath(args.data, 'data_batch', True, args)
def test10(args):
"""
CIFAR-10 test set creator.
It returns a reader creator, each sample in the reader is image pixels in
[0, 1] and label in [0, 9].
:return: Test reader creator.
:rtype: callable
"""
return reader_creator_filepath(args.data, 'test_batch', False, args)
CUDA_VISIBLE_DEVICES=0 python -u train_mixup.py \
--batch_size=80 \
--auxiliary \
--weight_decay=0.0003 \
--learning_rate=0.025 \
--lrc_loss_lambda=0.7 \
--cutout
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from learning_rate import cosine_decay
import numpy as np
import argparse
from model import NetworkCIFAR as Network
import reader
import sys
import os
import time
import logging
import genotypes
import paddle.fluid as fluid
import shutil
import utils
import cPickle as cp
parser = argparse.ArgumentParser("cifar")
parser.add_argument(
'--data',
type=str,
default='./dataset/cifar/cifar-10-batches-py/',
help='location of the data corpus')
parser.add_argument('--batch_size', type=int, default=96, help='batch size')
parser.add_argument(
'--learning_rate', type=float, default=0.025, help='init learning rate')
parser.add_argument('--momentum', type=float, default=0.9, help='momentum')
parser.add_argument(
'--weight_decay', type=float, default=3e-4, help='weight decay')
parser.add_argument(
'--report_freq', type=float, default=50, help='report frequency')
parser.add_argument(
'--epochs', type=int, default=600, help='num of training epochs')
parser.add_argument(
'--init_channels', type=int, default=36, help='num of init channels')
parser.add_argument(
'--layers', type=int, default=20, help='total number of layers')
parser.add_argument(
'--model_path',
type=str,
default='saved_models',
help='path to save the model')
parser.add_argument(
'--auxiliary',
action='store_true',
default=False,
help='use auxiliary tower')
parser.add_argument(
'--auxiliary_weight',
type=float,
default=0.4,
help='weight for auxiliary loss')
parser.add_argument(
'--cutout', action='store_true', default=False, help='use cutout')
parser.add_argument(
'--cutout_length', type=int, default=16, help='cutout length')
parser.add_argument(
'--drop_path_prob', type=float, default=0.2, help='drop path probability')
parser.add_argument('--save', type=str, default='EXP', help='experiment name')
parser.add_argument(
'--arch', type=str, default='DARTS', help='which architecture to use')
parser.add_argument(
'--grad_clip', type=float, default=5, help='gradient clipping')
parser.add_argument(
'--lr_exp_decay',
action='store_true',
default=False,
help='use exponential_decay learning_rate')
parser.add_argument('--mix_alpha', type=float, default=0.5, help='mixup alpha')
parser.add_argument(
'--lrc_loss_lambda', default=0, type=float, help='lrc_loss_lambda')
parser.add_argument(
'--loss_type',
default=1,
type=float,
help='loss_type 0: cross entropy 1: multi margin loss 2: max margin loss')
args = parser.parse_args()
CIFAR_CLASSES = 10
dataset_train_size = 50000
image_size = 32
def main():
image_shape = [3, image_size, image_size]
devices = os.getenv("CUDA_VISIBLE_DEVICES") or ""
devices_num = len(devices.split(","))
logging.info("args = %s", args)
genotype = eval("genotypes.%s" % args.arch)
model = Network(args.init_channels, CIFAR_CLASSES, args.layers,
args.auxiliary, genotype)
steps_one_epoch = dataset_train_size / (devices_num * args.batch_size)
train(model, args, image_shape, steps_one_epoch)
def build_program(main_prog, startup_prog, args, is_train, model, im_shape,
steps_one_epoch):
out = []
with fluid.program_guard(main_prog, startup_prog):
py_reader = model.build_input(im_shape, args.batch_size, is_train)
if is_train:
with fluid.unique_name.guard():
loss = model.train_model(py_reader, args.init_channels,
args.auxiliary, args.auxiliary_weight,
args.batch_size, args.lrc_loss_lambda)
optimizer = fluid.optimizer.Momentum(
learning_rate=cosine_decay(args.learning_rate, \
args.epochs, steps_one_epoch),
regularization=fluid.regularizer.L2Decay(\
args.weight_decay),
momentum=args.momentum)
optimizer.minimize(loss)
out = [py_reader, loss]
else:
with fluid.unique_name.guard():
loss, acc_1, acc_5 = model.test_model(py_reader,
args.init_channels)
out = [py_reader, loss, acc_1, acc_5]
return out
def train(model, args, im_shape, steps_one_epoch):
train_startup_prog = fluid.Program()
test_startup_prog = fluid.Program()
train_prog = fluid.Program()
test_prog = fluid.Program()
train_py_reader, loss_train = build_program(train_prog, train_startup_prog,
args, True, model, im_shape,
steps_one_epoch)
test_py_reader, loss_test, acc_1, acc_5 = build_program(
test_prog, test_startup_prog, args, False, model, im_shape,
steps_one_epoch)
test_prog = test_prog.clone(for_test=True)
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(train_startup_prog)
exe.run(test_startup_prog)
exec_strategy = fluid.ExecutionStrategy()
exec_strategy.num_threads = 1
train_exe = fluid.ParallelExecutor(
main_program=train_prog,
use_cuda=True,
loss_name=loss_train.name,
exec_strategy=exec_strategy)
train_reader = reader.train10(args)
test_reader = reader.test10(args)
train_py_reader.decorate_paddle_reader(train_reader)
test_py_reader.decorate_paddle_reader(test_reader)
fluid.clip.set_gradient_clip(fluid.clip.GradientClipByNorm(args.grad_clip))
fluid.memory_optimize(fluid.default_main_program())
def save_model(postfix, main_prog):
model_path = os.path.join(args.model_path, postfix)
if os.path.isdir(model_path):
shutil.rmtree(model_path)
fluid.io.save_persistables(exe, model_path, main_program=main_prog)
def test(epoch_id):
test_fetch_list = [loss_test, acc_1, acc_5]
objs = utils.AvgrageMeter()
top1 = utils.AvgrageMeter()
top5 = utils.AvgrageMeter()
test_py_reader.start()
test_start_time = time.time()
step_id = 0
try:
while True:
prev_test_start_time = test_start_time
test_start_time = time.time()
loss_test_v, acc_1_v, acc_5_v = exe.run(
test_prog, fetch_list=test_fetch_list)
objs.update(np.array(loss_test_v), args.batch_size)
top1.update(np.array(acc_1_v), args.batch_size)
top5.update(np.array(acc_5_v), args.batch_size)
if step_id % args.report_freq == 0:
print("Epoch {}, Step {}, acc_1 {}, acc_5 {}, time {}".
format(epoch_id, step_id,
np.array(acc_1_v),
np.array(acc_5_v), test_start_time -
prev_test_start_time))
step_id += 1
except fluid.core.EOFException:
test_py_reader.reset()
print("Epoch {0}, top1 {1}, top5 {2}".format(epoch_id, top1.avg,
top5.avg))
train_fetch_list = [loss_train]
epoch_start_time = time.time()
for epoch_id in range(args.epochs):
model.drop_path_prob = args.drop_path_prob * epoch_id / args.epochs
train_py_reader.start()
epoch_end_time = time.time()
if epoch_id > 0:
print("Epoch {}, total time {}".format(epoch_id - 1, epoch_end_time
- epoch_start_time))
epoch_start_time = epoch_end_time
start_time = time.time()
step_id = 0
try:
while True:
prev_start_time = start_time
start_time = time.time()
loss_v, = train_exe.run(
fetch_list=[v.name for v in train_fetch_list])
print("Epoch {}, Step {}, loss {}, time {}".format(epoch_id, step_id, \
np.array(loss_v).mean(), start_time-prev_start_time))
step_id += 1
sys.stdout.flush()
except fluid.core.EOFException:
train_py_reader.reset()
if epoch_id % 50 == 0 or epoch_id == args.epochs - 1:
save_model(str(epoch_id), train_prog)
test(epoch_id)
if __name__ == '__main__':
main()
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# --------------------------------------------------------
# DARTS
# Copyright (c) 2018, Hanxiao Liu.
# Licensed under the Apache License, Version 2.0;
# --------------------------------------------------------
import os
import sys
import time
import math
import numpy as np
def mixup_data(x, y, batch_size, alpha=1.0):
'''Compute the mixup data. Return mixed inputs, pairs of targets, and lambda'''
if alpha > 0.:
lam = np.random.beta(alpha, alpha)
else:
lam = 1.
index = np.random.permutation(batch_size)
mixed_x = lam * x + (1 - lam) * x[index, :]
y_a, y_b = y, y[index]
return mixed_x.astype('float32'), y_a.astype('int64'),\
y_b.astype('int64'), np.array(lam, dtype='float32')
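# Usage sketch (ours): mix a batch of four CIFAR images,
#   mixed_x, y_a, y_b, lam = mixup_data(x, y, 4, alpha=0.5)
# then train on lam * CE(pred, y_a) + (1 - lam) * CE(pred, y_b), which is the
# combination NetworkCIFAR.mixup_loss builds in model.py.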
class AvgrageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.avg = 0
self.sum = 0
self.cnt = 0
def update(self, val, n=1):
self.sum += val * n
self.cnt += n
self.avg = self.sum / self.cnt
#-*- coding: utf-8 -*-
import math
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from tqdm import tqdm
class DQNModel(object):
@@ -39,34 +38,51 @@ class DQNModel(object):
name='isOver', shape=[], dtype='bool')
    def _build_net(self):
        self.predict_program = fluid.Program()
        self.train_program = fluid.Program()
        self._sync_program = fluid.Program()

        with fluid.program_guard(self.predict_program):
            state, action, reward, next_s, isOver = self._get_inputs()
            self.pred_value = self.get_DQN_prediction(state)

        with fluid.program_guard(self.train_program):
            state, action, reward, next_s, isOver = self._get_inputs()
            pred_value = self.get_DQN_prediction(state)

            reward = fluid.layers.clip(reward, min=-1.0, max=1.0)

            action_onehot = fluid.layers.one_hot(action, self.action_dim)
            action_onehot = fluid.layers.cast(action_onehot, dtype='float32')

            pred_action_value = fluid.layers.reduce_sum(
                fluid.layers.elementwise_mul(action_onehot, pred_value),
                dim=1)

            targetQ_predict_value = self.get_DQN_prediction(next_s, target=True)
            best_v = fluid.layers.reduce_max(targetQ_predict_value, dim=1)
            best_v.stop_gradient = True

            target = reward + (1.0 - fluid.layers.cast(
                isOver, dtype='float32')) * self.gamma * best_v
            cost = fluid.layers.square_error_cost(pred_action_value, target)
            cost = fluid.layers.reduce_mean(cost)

            optimizer = fluid.optimizer.Adam(1e-3 * 0.5, epsilon=1e-3)
            optimizer.minimize(cost)

        vars = list(self.train_program.list_vars())
        policy_vars = list(filter(
            lambda x: 'GRAD' not in x.name and 'policy' in x.name, vars))
        target_vars = list(filter(
            lambda x: 'GRAD' not in x.name and 'target' in x.name, vars))
        policy_vars.sort(key=lambda x: x.name)
        target_vars.sort(key=lambda x: x.name)

        with fluid.program_guard(self._sync_program):
            sync_ops = []
            for i, var in enumerate(policy_vars):
                sync_op = fluid.layers.assign(policy_vars[i], target_vars[i])
                sync_ops.append(sync_op)
# fluid exe
place = fluid.CUDAPlace(0) if self.use_cuda else fluid.CPUPlace()
@@ -81,50 +97,50 @@ class DQNModel(object):
        conv1 = fluid.layers.conv2d(
            input=image,
            num_filters=32,
            filter_size=5,
            stride=1,
            padding=2,
            act='relu',
            param_attr=ParamAttr(name='{}_conv1'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv1_b'.format(variable_field)))
        max_pool1 = fluid.layers.pool2d(
            input=conv1, pool_size=2, pool_stride=2, pool_type='max')

        conv2 = fluid.layers.conv2d(
            input=max_pool1,
            num_filters=32,
            filter_size=5,
            stride=1,
            padding=2,
            act='relu',
            param_attr=ParamAttr(name='{}_conv2'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv2_b'.format(variable_field)))
        max_pool2 = fluid.layers.pool2d(
            input=conv2, pool_size=2, pool_stride=2, pool_type='max')

        conv3 = fluid.layers.conv2d(
            input=max_pool2,
            num_filters=64,
            filter_size=4,
            stride=1,
            padding=1,
            act='relu',
            param_attr=ParamAttr(name='{}_conv3'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv3_b'.format(variable_field)))
        max_pool3 = fluid.layers.pool2d(
            input=conv3, pool_size=2, pool_stride=2, pool_type='max')

        conv4 = fluid.layers.conv2d(
            input=max_pool3,
            num_filters=64,
            filter_size=3,
            stride=1,
            padding=1,
            act='relu',
            param_attr=ParamAttr(name='{}_conv4'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv4_b'.format(variable_field)))
        flatten = fluid.layers.flatten(conv4, axis=1)
out = fluid.layers.fc(
input=flatten,
@@ -133,23 +149,6 @@ class DQNModel(object):
bias_attr=ParamAttr(name='{}_fc1_b'.format(variable_field)))
return out
def act(self, state, train_or_test):
sample = np.random.random()
#-*- coding: utf-8 -*-
import math
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from tqdm import tqdm
class DoubleDQNModel(object):
@@ -39,41 +38,59 @@ class DoubleDQNModel(object):
name='isOver', shape=[], dtype='bool')
    def _build_net(self):
        self.predict_program = fluid.Program()
        self.train_program = fluid.Program()
        self._sync_program = fluid.Program()

        with fluid.program_guard(self.predict_program):
            state, action, reward, next_s, isOver = self._get_inputs()
            self.pred_value = self.get_DQN_prediction(state)

        with fluid.program_guard(self.train_program):
            state, action, reward, next_s, isOver = self._get_inputs()
            pred_value = self.get_DQN_prediction(state)

            reward = fluid.layers.clip(reward, min=-1.0, max=1.0)

            action_onehot = fluid.layers.one_hot(action, self.action_dim)
            action_onehot = fluid.layers.cast(action_onehot, dtype='float32')

            pred_action_value = fluid.layers.reduce_sum(
                fluid.layers.elementwise_mul(action_onehot, pred_value),
                dim=1)

            targetQ_predict_value = self.get_DQN_prediction(next_s, target=True)

            next_s_predcit_value = self.get_DQN_prediction(next_s)
            greedy_action = fluid.layers.argmax(next_s_predcit_value, axis=1)
            greedy_action = fluid.layers.unsqueeze(greedy_action, axes=[1])

            predict_onehot = fluid.layers.one_hot(greedy_action,
                                                  self.action_dim)
            best_v = fluid.layers.reduce_sum(
                fluid.layers.elementwise_mul(predict_onehot,
                                             targetQ_predict_value),
                dim=1)
            best_v.stop_gradient = True

            target = reward + (1.0 - fluid.layers.cast(
                isOver, dtype='float32')) * self.gamma * best_v
            cost = fluid.layers.square_error_cost(pred_action_value, target)
            cost = fluid.layers.reduce_mean(cost)

            optimizer = fluid.optimizer.Adam(1e-3 * 0.5, epsilon=1e-3)
            optimizer.minimize(cost)

        vars = list(self.train_program.list_vars())
        policy_vars = list(filter(
            lambda x: 'GRAD' not in x.name and 'policy' in x.name, vars))
        target_vars = list(filter(
            lambda x: 'GRAD' not in x.name and 'target' in x.name, vars))
        policy_vars.sort(key=lambda x: x.name)
        target_vars.sort(key=lambda x: x.name)

        with fluid.program_guard(self._sync_program):
            sync_ops = []
            for i, var in enumerate(policy_vars):
                sync_op = fluid.layers.assign(policy_vars[i], target_vars[i])
                sync_ops.append(sync_op)
# fluid exe
place = fluid.CUDAPlace(0) if self.use_cuda else fluid.CPUPlace()
@@ -88,50 +105,50 @@ class DoubleDQNModel(object):
        conv1 = fluid.layers.conv2d(
            input=image,
            num_filters=32,
            filter_size=5,
            stride=1,
            padding=2,
            act='relu',
            param_attr=ParamAttr(name='{}_conv1'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv1_b'.format(variable_field)))
        max_pool1 = fluid.layers.pool2d(
            input=conv1, pool_size=2, pool_stride=2, pool_type='max')

        conv2 = fluid.layers.conv2d(
            input=max_pool1,
            num_filters=32,
            filter_size=5,
            stride=1,
            padding=2,
            act='relu',
            param_attr=ParamAttr(name='{}_conv2'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv2_b'.format(variable_field)))
        max_pool2 = fluid.layers.pool2d(
            input=conv2, pool_size=2, pool_stride=2, pool_type='max')

        conv3 = fluid.layers.conv2d(
            input=max_pool2,
            num_filters=64,
            filter_size=4,
            stride=1,
            padding=1,
            act='relu',
            param_attr=ParamAttr(name='{}_conv3'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv3_b'.format(variable_field)))
        max_pool3 = fluid.layers.pool2d(
            input=conv3, pool_size=2, pool_stride=2, pool_type='max')

        conv4 = fluid.layers.conv2d(
            input=max_pool3,
            num_filters=64,
            filter_size=3,
            stride=1,
            padding=1,
            act='relu',
            param_attr=ParamAttr(name='{}_conv4'.format(variable_field)),
            bias_attr=ParamAttr(name='{}_conv4_b'.format(variable_field)))
        flatten = fluid.layers.flatten(conv4, axis=1)
out = fluid.layers.fc(
input=flatten,
......@@ -140,23 +157,6 @@ class DoubleDQNModel(object):
bias_attr=ParamAttr(name='{}_fc1_b'.format(variable_field)))
return out
def _build_sync_target_network(self):
vars = list(fluid.default_main_program().list_vars())
policy_vars = list(filter(
lambda x: 'GRAD' not in x.name and 'policy' in x.name, vars))
target_vars = list(filter(
lambda x: 'GRAD' not in x.name and 'target' in x.name, vars))
policy_vars.sort(key=lambda x: x.name)
target_vars.sort(key=lambda x: x.name)
sync_program = fluid.default_main_program().clone()
with fluid.program_guard(sync_program):
sync_ops = []
for i, var in enumerate(policy_vars):
sync_op = fluid.layers.assign(policy_vars[i], target_vars[i])
sync_ops.append(sync_op)
sync_program = sync_program.prune(sync_ops)
return sync_program
def act(self, state, train_or_test):
sample = np.random.random()
......
#-*- coding: utf-8 -*-
import math
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
import numpy as np
from tqdm import tqdm
import math
from utils import fluid_flatten
class DuelingDQNModel(object):
......@@ -39,34 +38,51 @@ class DuelingDQNModel(object):
name='isOver', shape=[], dtype='bool')
def _build_net(self):
state, action, reward, next_s, isOver = self._get_inputs()
self.pred_value = self.get_DQN_prediction(state)
self.predict_program = fluid.default_main_program().clone()
self.predict_program = fluid.Program()
self.train_program = fluid.Program()
self._sync_program = fluid.Program()
reward = fluid.layers.clip(reward, min=-1.0, max=1.0)
with fluid.program_guard(self.predict_program):
state, action, reward, next_s, isOver = self._get_inputs()
self.pred_value = self.get_DQN_prediction(state)
action_onehot = fluid.layers.one_hot(action, self.action_dim)
action_onehot = fluid.layers.cast(action_onehot, dtype='float32')
with fluid.program_guard(self.train_program):
state, action, reward, next_s, isOver = self._get_inputs()
pred_value = self.get_DQN_prediction(state)
pred_action_value = fluid.layers.reduce_sum(
fluid.layers.elementwise_mul(action_onehot, self.pred_value), dim=1)
reward = fluid.layers.clip(reward, min=-1.0, max=1.0)
targetQ_predict_value = self.get_DQN_prediction(next_s, target=True)
best_v = fluid.layers.reduce_max(targetQ_predict_value, dim=1)
best_v.stop_gradient = True
action_onehot = fluid.layers.one_hot(action, self.action_dim)
action_onehot = fluid.layers.cast(action_onehot, dtype='float32')
target = reward + (1.0 - fluid.layers.cast(
isOver, dtype='float32')) * self.gamma * best_v
cost = fluid.layers.square_error_cost(pred_action_value, target)
cost = fluid.layers.reduce_mean(cost)
pred_action_value = fluid.layers.reduce_sum(
fluid.layers.elementwise_mul(action_onehot, pred_value), dim=1)
self._sync_program = self._build_sync_target_network()
targetQ_predict_value = self.get_DQN_prediction(next_s, target=True)
best_v = fluid.layers.reduce_max(targetQ_predict_value, dim=1)
best_v.stop_gradient = True
optimizer = fluid.optimizer.Adam(1e-3 * 0.5, epsilon=1e-3)
optimizer.minimize(cost)
target = reward + (1.0 - fluid.layers.cast(
isOver, dtype='float32')) * self.gamma * best_v
cost = fluid.layers.square_error_cost(pred_action_value, target)
cost = fluid.layers.reduce_mean(cost)
# define program
self.train_program = fluid.default_main_program()
optimizer = fluid.optimizer.Adam(1e-3 * 0.5, epsilon=1e-3)
optimizer.minimize(cost)
vars = list(self.train_program.list_vars())
policy_vars = list(filter(
lambda x: 'GRAD' not in x.name and 'policy' in x.name, vars))
target_vars = list(filter(
lambda x: 'GRAD' not in x.name and 'target' in x.name, vars))
policy_vars.sort(key=lambda x: x.name)
target_vars.sort(key=lambda x: x.name)
with fluid.program_guard(self._sync_program):
sync_ops = []
for i, var in enumerate(policy_vars):
sync_op = fluid.layers.assign(policy_vars[i], target_vars[i])
sync_ops.append(sync_op)
# fluid exe
place = fluid.CUDAPlace(0) if self.use_cuda else fluid.CPUPlace()
......@@ -81,50 +97,50 @@ class DuelingDQNModel(object):
conv1 = fluid.layers.conv2d(
input=image,
num_filters=32,
filter_size=[5, 5],
stride=[1, 1],
padding=[2, 2],
filter_size=5,
stride=1,
padding=2,
act='relu',
param_attr=ParamAttr(name='{}_conv1'.format(variable_field)),
bias_attr=ParamAttr(name='{}_conv1_b'.format(variable_field)))
max_pool1 = fluid.layers.pool2d(
input=conv1, pool_size=[2, 2], pool_stride=[2, 2], pool_type='max')
input=conv1, pool_size=2, pool_stride=2, pool_type='max')
conv2 = fluid.layers.conv2d(
input=max_pool1,
num_filters=32,
filter_size=[5, 5],
stride=[1, 1],
padding=[2, 2],
filter_size=5,
stride=1,
padding=2,
act='relu',
param_attr=ParamAttr(name='{}_conv2'.format(variable_field)),
bias_attr=ParamAttr(name='{}_conv2_b'.format(variable_field)))
max_pool2 = fluid.layers.pool2d(
input=conv2, pool_size=[2, 2], pool_stride=[2, 2], pool_type='max')
input=conv2, pool_size=2, pool_stride=2, pool_type='max')
conv3 = fluid.layers.conv2d(
input=max_pool2,
num_filters=64,
filter_size=[4, 4],
stride=[1, 1],
padding=[1, 1],
filter_size=4,
stride=1,
padding=1,
act='relu',
param_attr=ParamAttr(name='{}_conv3'.format(variable_field)),
bias_attr=ParamAttr(name='{}_conv3_b'.format(variable_field)))
max_pool3 = fluid.layers.pool2d(
input=conv3, pool_size=[2, 2], pool_stride=[2, 2], pool_type='max')
input=conv3, pool_size=2, pool_stride=2, pool_type='max')
conv4 = fluid.layers.conv2d(
input=max_pool3,
num_filters=64,
filter_size=[3, 3],
stride=[1, 1],
padding=[1, 1],
filter_size=3,
stride=1,
padding=1,
act='relu',
param_attr=ParamAttr(name='{}_conv4'.format(variable_field)),
bias_attr=ParamAttr(name='{}_conv4_b'.format(variable_field)))
flatten = fluid_flatten(conv4)
flatten = fluid.layers.flatten(conv4, axis=1)
value = fluid.layers.fc(
input=flatten,
......@@ -143,24 +159,6 @@ class DuelingDQNModel(object):
advantage, dim=1, keep_dim=True))
return Q
def _build_sync_target_network(self):
vars = list(fluid.default_main_program().list_vars())
policy_vars = list(filter(
lambda x: 'GRAD' not in x.name and 'policy' in x.name, vars))
target_vars = list(filter(
lambda x: 'GRAD' not in x.name and 'target' in x.name, vars))
policy_vars.sort(key=lambda x: x.name)
target_vars.sort(key=lambda x: x.name)
sync_program = fluid.default_main_program().clone()
with fluid.program_guard(sync_program):
sync_ops = []
for i, var in enumerate(policy_vars):
sync_op = fluid.layers.assign(policy_vars[i], target_vars[i])
sync_ops.append(sync_op)
# The prune API is deprecated, please don't use it any more.
sync_program = sync_program._prune(sync_ops)
return sync_program
def act(self, state, train_or_test):
sample = np.random.random()
......@@ -186,12 +184,14 @@ class DuelingDQNModel(object):
self.global_step += 1
action = np.expand_dims(action, -1)
self.exe.run(self.train_program, \
feed={'state': state.astype('float32'), \
'action': action.astype('int32'), \
'reward': reward, \
'next_s': next_state.astype('float32'), \
'isOver': isOver})
self.exe.run(self.train_program,
feed={
'state': state.astype('float32'),
'action': action.astype('int32'),
'reward': reward,
'next_s': next_state.astype('float32'),
'isOver': isOver
})
def sync_target_network(self):
self.exe.run(self._sync_program)
......@@ -29,7 +29,7 @@ The average game rewards that can be obtained for the three models as the number
+ gym
+ tqdm
+ opencv-python
+ paddlepaddle-gpu>=0.12.0
+ paddlepaddle-gpu>=1.0.0
+ ale_python_interface
### Install Dependencies:
......
......@@ -28,7 +28,7 @@
+ gym
+ tqdm
+ opencv-python
+ paddlepaddle-gpu>=0.12.0
+ paddlepaddle-gpu>=1.0.0
+ ale_python_interface
### Install Dependencies:
......
#-*- coding: utf-8 -*-
#File: utils.py
import paddle.fluid as fluid
import numpy as np
def fluid_argmax(x):
"""
Get index of max value for the last dimension
"""
_, max_index = fluid.layers.topk(x, k=1)
return max_index
def fluid_flatten(x):
"""
Flatten fluid variable along the first dimension
"""
return fluid.layers.reshape(x, shape=[-1, np.prod(x.shape[1:])])
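# Note: later hunks in this change replace both helpers with native operators.
# A minimal sketch of the equivalents (assuming a Fluid version that exposes
# argmax/unsqueeze/flatten as layers, as the DQN code above already does):
q_values = fluid.layers.data(name='q_values', shape=[4], dtype='float32')
# fluid_argmax(q_values): argmax along the last axis, unsqueezed back to
# shape [-1, 1] so it can feed fluid.layers.one_hot.
max_index_native = fluid.layers.unsqueeze(
    fluid.layers.argmax(q_values, axis=1), axes=[1])
feat = fluid.layers.data(name='feat', shape=[64, 5, 5], dtype='float32')
# fluid_flatten(feat): keep dimension 0, flatten everything after it.
flat_native = fluid.layers.flatten(feat, axis=1)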
......@@ -121,7 +121,7 @@ def detect_face(image, shrink):
return_numpy=False)
detection = np.array(detection)
# layout: xmin, ymin, xmax. ymax, score
if detection.shape == (1, ):
if np.prod(detection.shape) == 1:
print("No face detected")
return np.array([[0, 0, 0, 0, 0]])
det_conf = detection[:, 1]
......
......@@ -103,7 +103,7 @@ python infer.py \
## Additional Information
|Dataset | pretrained model |
|---|---|
|CityScape | [Model]()[md: ] |
|CityScape | [pretrained_model](https://paddle-icnet-models.bj.bcebos.com/model_1000.tar.gz) |
## References
......
......@@ -209,6 +209,7 @@ Models are trained by starting with learning rate ```0.1``` and decaying it by `
|[VGG16](https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_pretrained.zip) | 72.08%/90.63% | 71.65%/90.57% |
|[VGG19](https://paddle-imagenet-models-name.bj.bcebos.com/VGG19_pretrained.zip) | 72.56%/90.83% | 72.32%/90.98% |
|[MobileNetV1](http://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_pretrained.zip) | 70.91%/89.54% | 70.51%/89.35% |
|[MobileNetV2](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_pretrained.zip) | 71.90%/90.55% | 71.53%/90.41% |
|[ResNet50](http://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_pretrained.zip) | 76.35%/92.80% | 76.22%/92.92% |
|[ResNet101](http://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.zip) | 77.49%/93.57% | 77.56%/93.64% |
|[ResNet152](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet152_pretrained.zip) | 78.12%/93.93% | 77.92%/93.87% |
......
......@@ -204,6 +204,7 @@ Models包括两种模型:带有参数名字的模型,和不带有参数名
|[VGG16](https://paddle-imagenet-models-name.bj.bcebos.com/VGG16_pretrained.zip) | 72.08%/90.63% | 71.65%/90.57% |
|[VGG19](https://paddle-imagenet-models-name.bj.bcebos.com/VGG19_pretrained.zip) | 72.56%/90.83% | 72.32%/90.98% |
|[MobileNetV1](http://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV1_pretrained.zip) | 70.91%/89.54% | 70.51%/89.35% |
|[MobileNetV2](https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV2_pretrained.zip) | 71.90%/90.55% | 71.53%/90.41% |
|[ResNet50](http://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_pretrained.zip) | 76.35%/92.80% | 76.22%/92.92% |
|[ResNet101](http://paddle-imagenet-models-name.bj.bcebos.com/ResNet101_pretrained.zip) | 77.49%/93.57% | 77.56%/93.64% |
|[ResNet152](https://paddle-imagenet-models-name.bj.bcebos.com/ResNet152_pretrained.zip) | 78.12%/93.93% | 77.92%/93.87% |
......
......@@ -6,7 +6,7 @@ python train.py \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=False \
--with_mem_opt=True \
--lr_strategy=piecewise_decay \
--lr=0.1
# >log_SE_ResNeXt50_32x4d.txt 2>&1 &
......@@ -19,7 +19,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=False \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.01
......@@ -32,7 +32,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=False \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1
......@@ -46,12 +46,22 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=False \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1
#python train.py \
# --model=MobileNetV2 \
# --batch_size=500 \
# --total_images=1281167 \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=200 \
# --lr=0.1
#ResNet50:
#python train.py \
# --model=ResNet50 \
......@@ -60,7 +70,7 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=False \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1
......@@ -87,7 +97,7 @@ python train.py \
# --lr_strategy=piecewise_decay \
# --lr=0.1 \
# --num_epochs=120 \
# --l2_decay=1e-4 \(TODO)
# --l2_decay=1e-4
#SE_ResNeXt50:
......@@ -99,7 +109,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --l2_decay=12e-5 \(TODO)
# --l2_decay=12e-5
#SE_ResNeXt101:
#python train.py \
......@@ -110,7 +120,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=200 \
# --l2_decay=15e-5 \(TODO)
# --l2_decay=15e-5
#VGG11:
#python train.py \
......@@ -121,7 +131,7 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.1 \
# --num_epochs=90 \
# --l2_decay=2e-4 \(TODO)
# --l2_decay=2e-4
#VGG13:
#python train.py
......@@ -132,4 +142,4 @@ python train.py \
# --lr_strategy=cosine_decay \
# --lr=0.01 \
# --num_epochs=90 \
# --l2_decay=3e-4 \(TODO)
# --l2_decay=3e-4
......@@ -10,7 +10,6 @@ import math
import paddle
import paddle.fluid as fluid
import paddle.dataset.flowers as flowers
import models
import reader
import argparse
import functools
......@@ -19,8 +18,8 @@ import utils
from utils.learning_rate import cosine_decay
from utils.fp16_utils import create_master_params_grads, master_param_to_train_param
from utility import add_arguments, print_arguments
import models
import models_name
IMAGENET1000 = 1281167
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
......@@ -40,25 +39,32 @@ add_arg('lr_strategy', str, "piecewise_decay", "Set the learning rate
add_arg('model', str, "SE_ResNeXt50_32x4d", "Set the network to use.")
add_arg('enable_ce', bool, False, "If set True, enable continuous evaluation job.")
add_arg('data_dir', str, "./data/ILSVRC2012", "The ImageNet dataset root dir.")
add_arg('model_category', str, "models", "Whether to use models_name or not, valid value:'models','models_name'" )
add_arg('model_category', str, "models", "Whether to use models_name or not, valid value:'models','models_name'." )
add_arg('fp16', bool, False, "Enable half precision training with fp16." )
add_arg('scale_loss', float, 1.0, "Scale loss for fp16." )
add_arg('l2_decay', float, 1e-4, "L2_decay parameter.")
add_arg('momentum_rate', float, 0.9, "momentum_rate.")
# yapf: enable
def set_models(model):
def set_models(model_category):
global models
if model == "models":
models = models
assert model_category in ["models", "models_name"], \
"{} is not in list: {}".format(model_category, ["models", "models_name"])
if model_category == "models_name":
import models_name as models
else:
models = models_name
import models as models
def optimizer_setting(params):
ls = params["learning_strategy"]
l2_decay = params["l2_decay"]
momentum_rate = params["momentum_rate"]
if ls["name"] == "piecewise_decay":
if "total_images" not in params:
total_images = 1281167
total_images = IMAGENET1000
else:
total_images = params["total_images"]
batch_size = ls["batch_size"]
......@@ -71,16 +77,17 @@ def optimizer_setting(params):
optimizer = fluid.optimizer.Momentum(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr),
momentum=0.9,
regularization=fluid.regularizer.L2Decay(1e-4))
momentum=momentum_rate,
regularization=fluid.regularizer.L2Decay(l2_decay))
elif ls["name"] == "cosine_decay":
if "total_images" not in params:
total_images = 1281167
total_images = IMAGENET1000
else:
total_images = params["total_images"]
batch_size = ls["batch_size"]
l2_decay = params["l2_decay"]
momentum_rate = params["momentum_rate"]
step = int(total_images / batch_size + 1)
lr = params["lr"]
......@@ -89,43 +96,42 @@ def optimizer_setting(params):
optimizer = fluid.optimizer.Momentum(
learning_rate=cosine_decay(
learning_rate=lr, step_each_epoch=step, epochs=num_epochs),
momentum=0.9,
regularization=fluid.regularizer.L2Decay(4e-5))
elif ls["name"] == "exponential_decay":
momentum=momentum_rate,
regularization=fluid.regularizer.L2Decay(l2_decay))
elif ls["name"] == "linear_decay":
if "total_images" not in params:
total_images = 1281167
total_images = IMAGENET1000
else:
total_images = params["total_images"]
batch_size = ls["batch_size"]
step = int(total_images / batch_size +1)
lr = params["lr"]
num_epochs = params["num_epochs"]
learning_decay_rate_factor=ls["learning_decay_rate_factor"]
num_epochs_per_decay = ls["num_epochs_per_decay"]
NUM_GPUS = 1
start_lr = params["lr"]
l2_decay = params["l2_decay"]
momentum_rate = params["momentum_rate"]
end_lr = 0
total_step = int((total_images / batch_size) * num_epochs)
lr = fluid.layers.polynomial_decay(
start_lr, total_step, end_lr, power=1)
optimizer = fluid.optimizer.Momentum(
learning_rate=fluid.layers.exponential_decay(
learning_rate = lr * NUM_GPUS,
decay_steps = step * num_epochs_per_decay / NUM_GPUS,
decay_rate = learning_decay_rate_factor),
momentum=0.9,
regularization = fluid.regularizer.L2Decay(4e-5))
learning_rate=lr,
momentum=momentum_rate,
regularization=fluid.regularizer.L2Decay(l2_decay))
else:
lr = params["lr"]
l2_decay = params["l2_decay"]
momentum_rate = params["momentum_rate"]
optimizer = fluid.optimizer.Momentum(
learning_rate=lr,
momentum=0.9,
regularization=fluid.regularizer.L2Decay(1e-4))
momentum=momentum_rate,
regularization=fluid.regularizer.L2Decay(l2_decay))
return optimizer
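# Illustrative only (hedged): a params dict for the piecewise strategy, using
# the same keys that build_program below assembles from the command line.
example_params = {
    "lr": 0.1,
    "total_images": IMAGENET1000,
    "num_epochs": 120,
    "l2_decay": 1e-4,
    "momentum_rate": 0.9,
    "learning_strategy": {"name": "piecewise_decay", "batch_size": 256},
}
# optimizer = optimizer_setting(example_params)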
def net_config(image, label, model, args):
model_list = [m for m in dir(models) if "__" not in m]
assert args.model in model_list,"{} is not lists: {}".format(
args.model, model_list)
assert args.model in model_list, "{} is not in list: {}".format(args.model,
model_list)
class_dim = args.class_dim
model_name = args.model
......@@ -148,8 +154,9 @@ def net_config(image, label, model, args):
acc_top1 = fluid.layers.accuracy(input=out0, label=label, k=1)
acc_top5 = fluid.layers.accuracy(input=out0, label=label, k=5)
else:
out = model.net(input=image, class_dim=class_dim)
cost, pred = fluid.layers.softmax_with_cross_entropy(out, label, return_softmax=True)
out = model.net(input=image, class_dim=class_dim)
cost, pred = fluid.layers.softmax_with_cross_entropy(
out, label, return_softmax=True)
if args.scale_loss > 1:
avg_cost = fluid.layers.mean(x=cost) * float(args.scale_loss)
else:
......@@ -190,19 +197,25 @@ def build_program(is_train, main_prog, startup_prog, args):
params["num_epochs"] = args.num_epochs
params["learning_strategy"]["batch_size"] = args.batch_size
params["learning_strategy"]["name"] = args.lr_strategy
params["l2_decay"] = args.l2_decay
params["momentum_rate"] = args.momentum_rate
optimizer = optimizer_setting(params)
if args.fp16:
params_grads = optimizer.backward(avg_cost)
master_params_grads = create_master_params_grads(
params_grads, main_prog, startup_prog, args.scale_loss)
optimizer.apply_gradients(master_params_grads)
master_param_to_train_param(master_params_grads, params_grads, main_prog)
master_param_to_train_param(master_params_grads,
params_grads, main_prog)
else:
optimizer.minimize(avg_cost)
global_lr = optimizer._global_learning_rate()
return py_reader, avg_cost, acc_top1, acc_top5
if is_train:
return py_reader, avg_cost, acc_top1, acc_top5, global_lr
else:
return py_reader, avg_cost, acc_top1, acc_top5
def train(args):
......@@ -220,7 +233,7 @@ def train(args):
startup_prog.random_seed = 1000
train_prog.random_seed = 1000
train_py_reader, train_cost, train_acc1, train_acc5 = build_program(
train_py_reader, train_cost, train_acc1, train_acc5, global_lr = build_program(
is_train=True,
main_prog=train_prog,
startup_prog=startup_prog,
......@@ -255,7 +268,8 @@ def train(args):
if visible_device:
device_num = len(visible_device.split(','))
else:
device_num = subprocess.check_output(['nvidia-smi', '-L']).decode().count('\n')
device_num = subprocess.check_output(
['nvidia-smi', '-L']).decode().count('\n')
train_batch_size = args.batch_size / device_num
test_batch_size = 16
......@@ -283,11 +297,12 @@ def train(args):
use_cuda=bool(args.use_gpu),
loss_name=train_cost.name)
train_fetch_list = [train_cost.name, train_acc1.name, train_acc5.name]
train_fetch_list = [
train_cost.name, train_acc1.name, train_acc5.name, global_lr.name
]
test_fetch_list = [test_cost.name, test_acc1.name, test_acc5.name]
params = models.__dict__[args.model]().params
for pass_id in range(params["num_epochs"]):
train_py_reader.start()
......@@ -299,7 +314,9 @@ def train(args):
try:
while True:
t1 = time.time()
loss, acc1, acc5 = train_exe.run(fetch_list=train_fetch_list)
loss, acc1, acc5, lr = train_exe.run(
fetch_list=train_fetch_list)
t2 = time.time()
period = t2 - t1
loss = np.mean(np.array(loss))
......@@ -308,12 +325,14 @@ def train(args):
train_info[0].append(loss)
train_info[1].append(acc1)
train_info[2].append(acc5)
lr = np.mean(np.array(lr))
train_time.append(period)
if batch_id % 10 == 0:
print("Pass {0}, trainbatch {1}, loss {2}, \
acc1 {3}, acc5 {4} time {5}"
.format(pass_id, batch_id, loss, acc1, acc5,
"%2.2f sec" % period))
acc1 {3}, acc5 {4}, lr{5}, time {6}"
.format(pass_id, batch_id, loss, acc1, acc5, "%.5f" %
lr, "%2.2f sec" % period))
sys.stdout.flush()
batch_id += 1
except fluid.core.EOFException:
......@@ -322,7 +341,8 @@ def train(args):
train_loss = np.array(train_info[0]).mean()
train_acc1 = np.array(train_info[1]).mean()
train_acc5 = np.array(train_info[2]).mean()
train_speed = np.array(train_time).mean() / (train_batch_size * device_num)
train_speed = np.array(train_time).mean() / (train_batch_size *
device_num)
test_py_reader.start()
......@@ -394,10 +414,7 @@ def train(args):
def main():
args = parser.parse_args()
models_now = args.model_category
assert models_now in ["models", "models_name"], "{} is not in lists: {}".format(
models_now, ["models", "models_name"])
set_models(models_now)
set_models(args.model_category)
print_arguments(args)
train(args)
......
......@@ -202,5 +202,5 @@ env CUDA_VISIBLE_DEVICE=0 python infer.py \
|Model| Error rate|
|- |:-: |
|[ocr_ctc_params](https://drive.google.com/open?id=1gsg2ODO2_F2pswXwW5MXpf8RY8-BMRyZ) | 22.3% |
|[ocr_attention_params](https://drive.google.com/open?id=1Bx7-94mngyTaMA5kVjzYHDPAdXxOYbRm) | 15.8%|
|[ocr_ctc_params](https://paddle-ocr-models.bj.bcebos.com/ocr_ctc.zip) | 22.3% |
|[ocr_attention_params](https://paddle-ocr-models.bj.bcebos.com/ocr_attention.zip) | 15.8%|
# Faster RCNN Object Detection
# RCNN Object Detection
---
## Table of Contents
......@@ -9,7 +9,6 @@
- [Training](#training)
- [Evaluation](#evaluation)
- [Inference and Visualization](#inference-and-visualization)
- [Appendix](#appendix)
## Installation
......@@ -17,17 +16,20 @@ Running sample code in this directory requires PaddelPaddle Fluid v.1.0.0 and la
## Introduction
[Faster RCNN](https://arxiv.org/abs/1506.01497) is a typical two-stage detector. The overall network can be divided into four parts, as shown below:
<p align="center">
<img src="image/Faster_RCNN.jpg" height=400 width=400 hspace='10'/> <br />
Faster RCNN model
</p>
Region-based Convolutional Neural Network (RCNN) models are two-stage detectors: they first generate region proposals, then extract features from them to obtain classes and more precise boxes.
The RCNN series currently contains two typical models: Faster RCNN and Mask RCNN.

The overall network of [Faster RCNN](https://arxiv.org/abs/1506.01497) can be divided into four parts:
1. Base conv layers. As a CNN-based object detector, Faster RCNN extracts feature maps with a base convolutional network. The feature maps are then shared by the RPN and the fc layers. This sample uses [ResNet-50](https://arxiv.org/abs/1512.03385) as the base conv layers.
2. Region Proposal Network (RPN). The RPN generates proposals for detection. This block derives anchors from a set of sizes and ratios, classifies each anchor into foreground or background with softmax, and then refines the anchors with box regression to obtain more precise proposals.
3. RoI Align. This layer takes the feature maps and the proposals as input, maps the proposals onto the feature maps, and pools them to a uniform size. The outputs are sent to fc layers for classification and regression. Either RoIPool or RoIAlign can be used here, selected via roi\_func in config.py.
4. Detection layer. Two fc layers compute the class of each proposal from its region features and refine the box location once more with box regression.

[Mask RCNN](https://arxiv.org/abs/1703.06870) is a classic instance segmentation model and an extension of Faster RCNN.
Mask RCNN is a two-stage model as well. The first stage generates proposals from the input image; the second stage produces the class result and bbox, plus a mask from the segmentation branch added on top of the original Faster RCNN model, decoupling mask and class prediction.
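For orientation, a minimal sketch of how this repo composes the model (it mirrors the eval code later in this change; the import paths are assumed from the repo layout, and ```MASK_ON``` in config.py toggles the mask branch):

    import models.model_builder as model_builder
    import models.resnet as resnet
    from config import cfg

    model = model_builder.RCNN(
        add_conv_body_func=resnet.add_ResNet50_conv4_body,       # base conv layers
        add_roi_box_head_func=resnet.add_ResNet_roi_conv5_head,  # detection head
        use_pyreader=False,
        is_train=False)
    model.build_model([3, cfg.TEST.max_size, cfg.TEST.max_size])
    pred_boxes = model.eval_bbox_out()    # decoded, NMS-ed detections
    if cfg.MASK_ON:
        masks = model.eval_mask_out()     # mask logits from the mask branch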
## Data preparation
Train the model on the [MS-COCO dataset](http://cocodataset.org/#download); download the dataset as below:
......@@ -62,12 +64,24 @@ To train the model, [cocoapi](https://github.com/cocodataset/cocoapi) is needed.
After data preparation, one can start the training step by:
- Faster RCNN
python train.py \
--model_save_dir=output/ \
--pretrained_model=${path_to_pretrain_model}
--data_dir=${path_to_data}
--pretrained_model=${path_to_pretrain_model} \
--data_dir=${path_to_data} \
--MASK_ON=False
- Mask RCNN
python train.py \
--model_save_dir=output/ \
--pretrained_model=${path_to_pretrain_model} \
--data_dir=${path_to_data} \
--MASK_ON=True
- Set ```export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7``` to specify 8 GPUs for training.
- Set ```MASK_ON``` to choose between the Faster RCNN and Mask RCNN models.
- For more help on arguments:
python train.py --help
......@@ -93,7 +107,6 @@ After data preparation, one can start the training step by:
* In the first 500 iterations, the learning rate increases linearly from 0.00333 to 0.01; it is then decayed at iterations 120000 and 160000 with multipliers 0.1 and 0.01, and the maximum iteration is 180000 (a sketch follows this list). We also released a 2x model trained for 360000 iterations with lr decayed at 240000 and 320000. These configurations can be set by max_iter and lr_steps in config.py.
* In non-base convolutional layers, the learning rate of the bias is set to twice the global lr.
* In the base convolutional layers, the parameters of the affine layers and the res body are not updated.
* Using 8 Nvidia Tesla V100 GPUs, the total training time is about 40 hours.
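A hedged sketch of this warmup-plus-piecewise schedule in plain Python (values are the config.py defaults; the real schedule is built inside the Fluid program):

    def lr_at(it, base_lr=0.01, warmup_start=0.00333, warmup_iter=500,
              lr_steps=(120000, 160000), gamma=0.1):
        # linear warmup from warmup_start to base_lr, then step decay
        if it < warmup_iter:
            return warmup_start + (base_lr - warmup_start) * it / warmup_iter
        lr = base_lr
        for step in lr_steps:
            if it >= step:
                lr *= gamma
        return lr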
## Evaluation
......@@ -101,14 +114,27 @@ Evaluation is to evaluate the performance of a trained model. This sample provid
`eval_coco_map.py` is the main executor for evalution, one can start evalution step by:
- Faster RCNN
python eval_coco_map.py \
--dataset=coco2017 \
--pretrained_model=${path_to_pretrain_model} \
--MASK_ON=False
- Mask RCNN
python eval_coco_map.py \
--dataset=coco2017 \
--pretrained_model=${path_to_pretrain_model} \
--MASK_ON=True
- Set ```export CUDA_VISIBLE_DEVICES=0``` to specify one GPU for evaluation.
- Set ```MASK_ON``` to choose between the Faster RCNN and Mask RCNN models.

Evaluation results are shown below:
Faster RCNN:
| Model | RoI function | Batch size | Max iteration | mAP |
| :--------------- | :--------: | :------------: | :------------------: |------: |
| [Fluid RoIPool minibatch padding](http://paddlemodels.bj.bcebos.com/faster_rcnn/model_pool_minibatch_padding.tar.gz) | RoIPool | 8 | 180000 | 0.316 |
......@@ -121,6 +147,14 @@ Evalutaion result is shown as below:
* Fluid RoIAlign no padding: Images without padding.
* Fluid RoIAlign no padding 2x: Images without padding, train for 360000 iterations, learning rate is decayed at 240000, 320000.
Mask RCNN:
| Model | Batch size | Max iteration | box mAP | mask mAP |
| :--------------- | :--------: | :------------: | :--------: |------: |
| [Fluid mask no padding](https://paddlemodels.bj.bcebos.com/faster_rcnn/Fluid_mask_no_padding.tar.gz) | 8 | 180000 | 0.359 | 0.314 |
* Fluid mask no padding: Use RoIAlign. Images without padding.
## Inference and Visualization
Inference is used to get prediction scores or image features from trained models. `infer.py` is the main executor for inference; one can start the inference step by:
......@@ -135,8 +169,12 @@ Inference is used to get prediction score or image features based on trained mod
Visualization of infer result is shown as below:
<p align="center">
<img src="image/000000000139.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000127517.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000203864.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000515077.jpg" height=300 width=400 hspace='10'/> <br />
<img src="image/000000127517.jpg" height=300 width=400 hspace='10'/> <br />
Faster RCNN Visualization Examples
</p>
<p align="center">
<img src="image/000000000139_mask.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000127517_mask.jpg" height=300 width=400 hspace='10'/> <br />
Mask RCNN Visualization Examples
</p>
# Faster RCNN Object Detection
# RCNN Series Object Detection
---
## Table of Contents
......@@ -9,25 +9,27 @@
- [Training](#training)
- [Evaluation](#evaluation)
- [Inference and Visualization](#inference-and-visualization)
- [Appendix](#appendix)

## Installation

Running the sample code in this directory requires PaddlePaddle Fluid v1.0.0 or above. If the PaddlePaddle in your runtime environment is below this version, please update it following the instructions in the [installation document](http://www.paddlepaddle.org/documentation/docs/zh/0.15.0/beginners_guide/install/install_doc.html#paddlepaddle).

## Introduction

Region-based Convolutional Neural Network (RCNN) models are two-stage object detectors: they generate candidate regions from the image, extract features, classify the features, and refine the candidate-box locations.
The RCNN series currently contains two representative models: Faster RCNN and Mask RCNN.

[Faster RCNN](https://arxiv.org/abs/1506.01497) is a typical two-stage object detector. As shown below, its overall network can be divided into four main parts:
<p align="center">
<img src="image/Faster_RCNN.jpg" height=400 width=400 hspace='10'/> <br />
Faster RCNN object detection model
</p>
The overall [Faster RCNN](https://arxiv.org/abs/1506.01497) network can be divided into four main parts:
1. Base conv layers. As a CNN-based object detection method, Faster RCNN first extracts feature maps of the image with a set of base conv layers. The feature maps are shared by the subsequent RPN and fully connected layers. This example uses [ResNet-50](https://arxiv.org/abs/1512.03385) as the base conv layers.
2. Region Proposal Network (RPN). The RPN generates candidate regions (proposals). It derives a set of anchors from fixed sizes and ratios, classifies each anchor as foreground or background with softmax, and then refines the anchors with box regression to obtain precise candidate regions.
3. RoI Align. This layer takes the feature maps and the candidate regions as input, maps the candidate regions onto the feature maps, and pools them into region feature maps of a uniform size, which are sent to fully connected layers to determine the object class. Either RoIPool or RoIAlign can be used here, set via roi\_func in config.py.
4. Detection layer. It computes the class of each candidate region from its region feature map and refines the final detection-box position once more with box regression.

[Mask RCNN](https://arxiv.org/abs/1703.06870) extends Faster RCNN and is a classic instance segmentation model.
Mask RCNN is likewise a two-stage framework: the first stage scans the image and generates proposals, and the second stage classifies the proposals and produces bounding boxes, while the segmentation branch added to the original Faster RCNN model outputs masks, decoupling mask and class prediction.

## Data preparation

Train on the [MS-COCO dataset](http://cocodataset.org/#download); download the dataset as follows:
......@@ -61,12 +63,24 @@ Faster RCNN 目标检测模型
After the data is prepared, training can be started as follows:
- Faster RCNN
python train.py \
--model_save_dir=output/ \
--pretrained_model=${path_to_pretrain_model}
--data_dir=${path_to_data}
--pretrained_model=${path_to_pretrain_model} \
--data_dir=${path_to_data} \
--MASK_ON=False
- Mask RCNN
python train.py \
--model_save_dir=output/ \
--pretrained_model=${path_to_pretrain_model} \
--data_dir=${path_to_data} \
--MASK_ON=True
- Set export CUDA\_VISIBLE\_DEVICES=0,1,2,3,4,5,6,7 to train with 8 GPUs.
- Set ```MASK_ON``` to choose between the Faster RCNN and Mask RCNN models.
- For available arguments, see:
python train.py --help
......@@ -83,11 +97,10 @@ Faster RCNN 目标检测模型
**Training strategy:**

* Train Faster RCNN with the momentum optimizer, momentum=0.9.
* Train with the momentum optimizer, momentum=0.9.
* The weight decay coefficient is 0.0001. In the first 500 iterations the learning rate increases linearly from 0.00333 to 0.01; it is then decayed at iterations 120000 and 160000 with multipliers 0.1 and 0.01, and training runs for at most 180000 iterations. We also provide a 2x model trained for 360000 iterations with the learning rate decayed at 240000 and 320000 and other parameters unchanged; the maximum iteration and the learning-rate schedule can be set via max_iter and lr_steps in config.py.
* In non-base conv layers, the learning rate of the conv bias is twice the global learning rate.
* In the base conv layers, the affine_layers parameters and the res2 parameters are not updated.
* Using 8 Nvidia Tesla V100 GPUs in parallel, the total training time is about 40 hours.
## Evaluation
......@@ -95,14 +108,27 @@ Faster RCNN 目标检测模型
`eval_coco_map.py` is the main executable of the evaluation module; example invocation:
- Faster RCNN
python eval_coco_map.py \
--dataset=coco2017 \
--pretrained_model=${path_to_pretrain_model} \
--MASK_ON=False
- Mask RCNN
python eval_coco_map.py \
--dataset=coco2017 \
--pretrained_model=${path_to_pretrain_model} \
--MASK_ON=True
- Set export CUDA\_VISIBLE\_DEVICES=0 to evaluate with a single GPU.
- Set ```MASK_ON``` to choose between the Faster RCNN and Mask RCNN models.

The table below shows the evaluation results:

Faster RCNN:

| Model | RoI function | Batch size | Max iteration | mAP |
| :--------------- | :--------: | :------------: | :------------------: |------: |
| [Fluid RoIPool minibatch padding](http://paddlemodels.bj.bcebos.com/faster_rcnn/model_pool_minibatch_padding.tar.gz) | RoIPool | 8 | 180000 | 0.316 |
......@@ -117,6 +143,14 @@ Faster RCNN 目标检测模型
* Fluid RoIAlign no padding: use RoIAlign; images are not padded.
* Fluid RoIAlign no padding 2x: use RoIAlign; images are not padded; train for 360000 iterations with lr decayed at 240000 and 320000.

Mask RCNN:

| Model | Batch size | Max iteration | box mAP | mask mAP |
| :--------------- | :--------: | :------------: | :--------: |------: |
| [Fluid mask no padding](https://paddlemodels.bj.bcebos.com/faster_rcnn/Fluid_mask_no_padding.tar.gz) | 8 | 180000 | 0.359 | 0.314 |
* Fluid mask no padding: use RoIAlign; images are not padded.
## Inference and Visualization

Inference obtains the objects in an image and their corresponding classes. `infer.py` is the main executable; example invocation:
......@@ -131,8 +165,12 @@ Faster RCNN 目标检测模型
The figures below show visualized prediction results:
<p align="center">
<img src="image/000000000139.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000127517.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000203864.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000515077.jpg" height=300 width=400 hspace='10'/> <br />
<img src="image/000000127517.jpg" height=300 width=400 hspace='10'/> <br />
Faster RCNN prediction visualization
</p>
<p align="center">
<img src="image/000000000139_mask.jpg" height=300 width=400 hspace='10'/>
<img src="image/000000127517_mask.jpg" height=300 width=400 hspace='10'/> <br />
Mask RCNN prediction visualization
</p>
......@@ -6,18 +6,19 @@ sys.path.append(os.environ['ceroot'])
from kpi import CostKpi
from kpi import DurationKpi
each_pass_duration_card1_kpi = DurationKpi('each_pass_duration_card1', 0.08, 0, actived=True)
each_pass_duration_card1_kpi = DurationKpi(
'each_pass_duration_card1', 0.08, 0, actived=True)
train_loss_card1_kpi = CostKpi('train_loss_card1', 0.08, 0)
each_pass_duration_card4_kpi = DurationKpi('each_pass_duration_card4', 0.08, 0, actived=True)
each_pass_duration_card4_kpi = DurationKpi(
'each_pass_duration_card4', 0.08, 0, actived=True)
train_loss_card4_kpi = CostKpi('train_loss_card4', 0.08, 0)
tracking_kpis = [
each_pass_duration_card1_kpi,
train_loss_card1_kpi,
each_pass_duration_card4_kpi,
train_loss_card4_kpi,
]
each_pass_duration_card1_kpi,
train_loss_card1_kpi,
each_pass_duration_card4_kpi,
train_loss_card4_kpi,
]
def parse_log(log):
......
......@@ -69,6 +69,7 @@ def clip_xyxy_to_image(x1, y1, x2, y2, height, width):
y2 = np.minimum(height - 1., np.maximum(0., y2))
return x1, y1, x2, y2
def nms(dets, thresh):
"""Apply classic DPM-style greedy NMS."""
if dets.shape[0] == 0:
......@@ -123,3 +124,21 @@ def nms(dets, thresh):
return np.where(suppressed == 0)[0]
def expand_boxes(boxes, scale):
"""Expand an array of boxes by a given scale."""
w_half = (boxes[:, 2] - boxes[:, 0]) * .5
h_half = (boxes[:, 3] - boxes[:, 1]) * .5
x_c = (boxes[:, 2] + boxes[:, 0]) * .5
y_c = (boxes[:, 3] + boxes[:, 1]) * .5
w_half *= scale
h_half *= scale
boxes_exp = np.zeros(boxes.shape)
boxes_exp[:, 0] = x_c - w_half
boxes_exp[:, 2] = x_c + w_half
boxes_exp[:, 1] = y_c - h_half
boxes_exp[:, 3] = y_c + h_half
return boxes_exp
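# Quick sanity check of the scaling (hypothetical values):
#   expand_boxes(np.array([[10., 10., 20., 20.]]), 1.2)
#   -> [[ 9.  9. 21. 21.]]  (each half-extent grows from 5 to 6 around center 15)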
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserve.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
#
# Based on:
# --------------------------------------------------------
# Detectron
# Copyright (c) 2017-present, Facebook, Inc.
# Licensed under the Apache License, Version 2.0;
# Written by Ross Girshick
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
def colormap(rgb=False):
color_list = np.array([
0.000, 0.447, 0.741, 0.850, 0.325, 0.098, 0.929, 0.694, 0.125, 0.494,
0.184, 0.556, 0.466, 0.674, 0.188, 0.301, 0.745, 0.933, 0.635, 0.078,
0.184, 0.300, 0.300, 0.300, 0.600, 0.600, 0.600, 1.000, 0.000, 0.000,
1.000, 0.500, 0.000, 0.749, 0.749, 0.000, 0.000, 1.000, 0.000, 0.000,
0.000, 1.000, 0.667, 0.000, 1.000, 0.333, 0.333, 0.000, 0.333, 0.667,
0.000, 0.333, 1.000, 0.000, 0.667, 0.333, 0.000, 0.667, 0.667, 0.000,
0.667, 1.000, 0.000, 1.000, 0.333, 0.000, 1.000, 0.667, 0.000, 1.000,
1.000, 0.000, 0.000, 0.333, 0.500, 0.000, 0.667, 0.500, 0.000, 1.000,
0.500, 0.333, 0.000, 0.500, 0.333, 0.333, 0.500, 0.333, 0.667, 0.500,
0.333, 1.000, 0.500, 0.667, 0.000, 0.500, 0.667, 0.333, 0.500, 0.667,
0.667, 0.500, 0.667, 1.000, 0.500, 1.000, 0.000, 0.500, 1.000, 0.333,
0.500, 1.000, 0.667, 0.500, 1.000, 1.000, 0.500, 0.000, 0.333, 1.000,
0.000, 0.667, 1.000, 0.000, 1.000, 1.000, 0.333, 0.000, 1.000, 0.333,
0.333, 1.000, 0.333, 0.667, 1.000, 0.333, 1.000, 1.000, 0.667, 0.000,
1.000, 0.667, 0.333, 1.000, 0.667, 0.667, 1.000, 0.667, 1.000, 1.000,
1.000, 0.000, 1.000, 1.000, 0.333, 1.000, 1.000, 0.667, 1.000, 0.167,
0.000, 0.000, 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000,
0.000, 0.833, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.167, 0.000,
0.000, 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000, 0.000,
0.833, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.167, 0.000, 0.000,
0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000, 0.000, 0.833,
0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.143, 0.143, 0.143, 0.286,
0.286, 0.286, 0.429, 0.429, 0.429, 0.571, 0.571, 0.571, 0.714, 0.714,
0.714, 0.857, 0.857, 0.857, 1.000, 1.000, 1.000
]).astype(np.float32)
color_list = color_list.reshape((-1, 3)) * 255
if not rgb:
color_list = color_list[:, ::-1]
return color_list
......@@ -90,6 +90,9 @@ _C.TRAIN.freeze_at = 2
# min area of ground truth box
_C.TRAIN.gt_min_area = -1
# Use horizontally-flipped images during training?
_C.TRAIN.use_flipped = True
#
# Inference options
#
......@@ -120,7 +123,7 @@ _C.TEST.rpn_post_nms_top_n = 1000
_C.TEST.rpn_min_size = 0.0
# max number of detections
_C.TEST.detectiions_per_im = 100
_C.TEST.detections_per_im = 100
# NMS threshold used on RPN proposals
_C.TEST.rpn_nms_thresh = 0.7
......@@ -129,6 +132,9 @@ _C.TEST.rpn_nms_thresh = 0.7
# Model options
#
# Whether use mask rcnn head
_C.MASK_ON = True
# weight for bbox regression targets
_C.bbox_reg_weights = [0.1, 0.1, 0.2, 0.2]
......@@ -156,6 +162,15 @@ _C.roi_resolution = 14
# spatial scale
_C.spatial_scale = 1. / 16.
# resolution to represent mask labels
_C.resolution = 14
# Number of channels in the mask head
_C.dim_reduced = 256
# Threshold for converting soft masks to hard masks
_C.mrcnn_thresh_binarize = 0.5
#
# SOLVER options
#
......@@ -204,12 +219,6 @@ _C.pixel_means = [102.9801, 115.9465, 122.7717]
# clip box to prevent overflowing
_C.bbox_clip = np.log(1000. / 16.)
# dataset path
_C.train_file_list = 'annotations/instances_train2017.json'
_C.train_data_dir = 'train2017'
_C.val_file_list = 'annotations/instances_val2017.json'
_C.val_data_dir = 'val2017'
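# Hedged sketch of the roidbs.DatasetPath helper that replaces the constants
# above elsewhere in this change; only its observed interface
# (DatasetPath('val').get_file_list()) appears in the diff, so the body is assumed:
#
#     class DatasetPath(object):
#         def __init__(self, mode):  # mode: 'train' or 'val'
#             self.mode = mode
#         def get_file_list(self):
#             fname = 'annotations/instances_{}2017.json'.format(self.mode)
#             return os.path.join(cfg.data_dir, fname)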
def merge_cfg_from_args(args, mode):
"""Merge config keys, values in args into the global config."""
......
......@@ -18,8 +18,7 @@ from __future__ import print_function
import os
import time
import numpy as np
from eval_helper import get_nmsed_box
from eval_helper import get_dt_res
from eval_helper import *
import paddle
import paddle.fluid as fluid
import reader
......@@ -30,21 +29,21 @@ import json
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval, Params
from config import cfg
from roidbs import DatasetPath
def eval():
if '2014' in cfg.dataset:
test_list = 'annotations/instances_val2014.json'
elif '2017' in cfg.dataset:
test_list = 'annotations/instances_val2017.json'
data_path = DatasetPath('val')
test_list = data_path.get_file_list()
image_shape = [3, cfg.TEST.max_size, cfg.TEST.max_size]
class_nums = cfg.class_num
devices = os.getenv("CUDA_VISIBLE_DEVICES") or ""
devices_num = len(devices.split(","))
total_batch_size = devices_num * cfg.TRAIN.im_per_batch
cocoGt = COCO(os.path.join(cfg.data_dir, test_list))
numId_to_catId_map = {i + 1: v for i, v in enumerate(cocoGt.getCatIds())}
cocoGt = COCO(test_list)
num_id_to_cat_id_map = {i + 1: v for i, v in enumerate(cocoGt.getCatIds())}
category_ids = cocoGt.getCatIds()
label_list = {
item['id']: item['name']
......@@ -52,51 +51,82 @@ def eval():
}
label_list[0] = ['background']
model = model_builder.FasterRCNN(
model = model_builder.RCNN(
add_conv_body_func=resnet.add_ResNet50_conv4_body,
add_roi_box_head_func=resnet.add_ResNet_roi_conv5_head,
use_pyreader=False,
is_train=False)
model.build_model(image_shape)
rpn_rois, confs, locs = model.eval_out()
pred_boxes = model.eval_bbox_out()
if cfg.MASK_ON:
masks = model.eval_mask_out()
place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
# yapf: disable
if cfg.pretrained_model:
def if_exist(var):
return os.path.exists(os.path.join(cfg.pretrained_model, var.name))
fluid.io.load_vars(exe, cfg.pretrained_model, predicate=if_exist)
# yapf: enable
test_reader = reader.test(total_batch_size)
feeder = fluid.DataFeeder(place=place, feed_list=model.feeds())
dts_res = []
fetch_list = [rpn_rois, confs, locs]
segms_res = []
if cfg.MASK_ON:
fetch_list = [pred_boxes, masks]
else:
fetch_list = [pred_boxes]
eval_start = time.time()
for batch_id, batch_data in enumerate(test_reader()):
start = time.time()
im_info = []
for data in batch_data:
im_info.append(data[1])
rpn_rois_v, confs_v, locs_v = exe.run(
fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(batch_data),
return_numpy=False)
new_lod, nmsed_out = get_nmsed_box(rpn_rois_v, confs_v, locs_v,
class_nums, im_info,
numId_to_catId_map)
result = exe.run(fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(batch_data),
return_numpy=False)
pred_boxes_v = result[0]
if cfg.MASK_ON:
masks_v = result[1]
dts_res += get_dt_res(total_batch_size, new_lod, nmsed_out, batch_data)
new_lod = pred_boxes_v.lod()
nmsed_out = pred_boxes_v
dts_res += get_dt_res(total_batch_size, new_lod[0], nmsed_out,
batch_data, num_id_to_cat_id_map)
if cfg.MASK_ON and np.array(masks_v).shape != (1, 1):
segms_out = segm_results(nmsed_out, masks_v, im_info)
segms_res += get_segms_res(total_batch_size, new_lod[0], segms_out,
batch_data, num_id_to_cat_id_map)
end = time.time()
print('batch id: {}, time: {}'.format(batch_id, end - start))
with open("detection_result.json", 'w') as outfile:
eval_end = time.time()
total_time = eval_end - eval_start
print('average time of eval is: {}'.format(total_time / (batch_id + 1)))
with open("detection_bbox_result.json", 'w') as outfile:
json.dump(dts_res, outfile)
print("start evaluate using coco api")
cocoDt = cocoGt.loadRes("detection_result.json")
print("start evaluate bbox using coco api")
cocoDt = cocoGt.loadRes("detection_bbox_result.json")
cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
if cfg.MASK_ON:
with open("detection_segms_result.json", 'w') as outfile:
json.dump(segms_res, outfile)
print("start evaluate mask using coco api")
cocoDt = cocoGt.loadRes("detection_segms_result.json")
cocoEval = COCOeval(cocoGt, cocoDt, 'segm')
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
if __name__ == '__main__':
args = parse_args()
......
......@@ -21,6 +21,10 @@ from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
from config import cfg
import pycocotools.mask as mask_util
import six
from colormap import colormap
import cv2
def box_decoder(deltas, boxes, weights):
......@@ -80,8 +84,7 @@ def clip_tiled_boxes(boxes, im_shape):
return boxes
def get_nmsed_box(rpn_rois, confs, locs, class_nums, im_info,
numId_to_catId_map):
def get_nmsed_box(rpn_rois, confs, locs, class_nums, im_info):
lod = rpn_rois.lod()[0]
rpn_rois_v = np.array(rpn_rois)
variance_v = np.array(cfg.bbox_reg_weights)
......@@ -106,38 +109,41 @@ def get_nmsed_box(rpn_rois, confs, locs, class_nums, im_info,
inds = np.where(scores_n[:, j] > cfg.TEST.score_thresh)[0]
scores_j = scores_n[inds, j]
rois_j = rois_n[inds, j * 4:(j + 1) * 4]
dets_j = np.hstack((rois_j, scores_j[:, np.newaxis])).astype(
dets_j = np.hstack((scores_j[:, np.newaxis], rois_j)).astype(
np.float32, copy=False)
keep = box_utils.nms(dets_j, cfg.TEST.nms_thresh)
nms_dets = dets_j[keep, :]
# add labels
cat_id = numId_to_catId_map[j]
label = np.array([cat_id for _ in range(len(keep))])
label = np.array([j for _ in range(len(keep))])
nms_dets = np.hstack((nms_dets, label[:, np.newaxis])).astype(
np.float32, copy=False)
cls_boxes[j] = nms_dets
# Limit to max_per_image detections **over all classes**
image_scores = np.hstack(
[cls_boxes[j][:, -2] for j in range(1, class_nums)])
if len(image_scores) > cfg.TEST.detectiions_per_im:
image_thresh = np.sort(image_scores)[-cfg.TEST.detectiions_per_im]
[cls_boxes[j][:, 1] for j in range(1, class_nums)])
if len(image_scores) > cfg.TEST.detections_per_im:
image_thresh = np.sort(image_scores)[-cfg.TEST.detections_per_im]
for j in range(1, class_nums):
keep = np.where(cls_boxes[j][:, -2] >= image_thresh)[0]
keep = np.where(cls_boxes[j][:, 1] >= image_thresh)[0]
cls_boxes[j] = cls_boxes[j][keep, :]
im_results_n = np.vstack([cls_boxes[j] for j in range(1, class_nums)])
im_results[i] = im_results_n
new_lod.append(len(im_results_n) + new_lod[-1])
boxes = im_results_n[:, :-2]
scores = im_results_n[:, -2]
labels = im_results_n[:, -1]
boxes = im_results_n[:, 2:]
scores = im_results_n[:, 1]
labels = im_results_n[:, 0]
im_results = np.vstack([im_results[k] for k in range(len(lod) - 1)])
return new_lod, im_results
def get_dt_res(batch_size, lod, nmsed_out, data):
def get_dt_res(batch_size, lod, nmsed_out, data, num_id_to_cat_id_map):
dts_res = []
nmsed_out_v = np.array(nmsed_out)
if nmsed_out_v.shape == (
1,
1, ):
return dts_res
assert (len(lod) == batch_size + 1), \
"Error Lod Tensor offset dimension. Lod({}) vs. batch_size({})"\
.format(len(lod), batch_size)
......@@ -150,7 +156,8 @@ def get_dt_res(batch_size, lod, nmsed_out, data):
for j in range(dt_num_this_img):
dt = nmsed_out_v[k]
k = k + 1
xmin, ymin, xmax, ymax, score, category_id = dt.tolist()
num_id, score, xmin, ymin, xmax, ymax = dt.tolist()
category_id = num_id_to_cat_id_map[num_id]
w = xmax - xmin + 1
h = ymax - ymin + 1
bbox = [xmin, ymin, w, h]
......@@ -164,24 +171,131 @@ def get_dt_res(batch_size, lod, nmsed_out, data):
return dts_res
def draw_bounding_box_on_image(image_path, nms_out, draw_threshold, label_list):
image = Image.open(image_path)
def get_segms_res(batch_size, lod, segms_out, data, num_id_to_cat_id_map):
segms_res = []
segms_out_v = np.array(segms_out)
k = 0
for i in range(batch_size):
dt_num_this_img = lod[i + 1] - lod[i]
image_id = int(data[i][-1])
for j in range(dt_num_this_img):
dt = segms_out_v[k]
k = k + 1
segm, num_id, score = dt.tolist()
cat_id = num_id_to_cat_id_map[num_id]
if six.PY3:
if 'counts' in segm:
segm['counts'] = segm['counts'].decode("utf8")
segm_res = {
'image_id': image_id,
'category_id': cat_id,
'segmentation': segm,
'score': score
}
segms_res.append(segm_res)
return segms_res
def draw_bounding_box_on_image(image_path,
nms_out,
draw_threshold,
label_list,
num_id_to_cat_id_map,
image=None):
if image is None:
image = Image.open(image_path)
draw = ImageDraw.Draw(image)
im_width, im_height = image.size
for dt in nms_out:
xmin, ymin, xmax, ymax, score, category_id = dt.tolist()
for dt in np.array(nms_out):
num_id, score, xmin, ymin, xmax, ymax = dt.tolist()
category_id = num_id_to_cat_id_map[num_id]
if score < draw_threshold:
continue
bbox = dt[:4]
xmin, ymin, xmax, ymax = bbox
draw.line(
[(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
(xmin, ymin)],
width=4,
width=2,
fill='red')
if image.mode == 'RGB':
draw.text((xmin, ymin), label_list[int(category_id)], (255, 255, 0))
image_name = image_path.split('/')[-1]
print("image with bbox drawed saved as {}".format(image_name))
image.save(image_name)
def draw_mask_on_image(image_path, segms_out, draw_threshold, alpha=0.7):
image = Image.open(image_path)
draw = ImageDraw.Draw(image)
im_width, im_height = image.size
mask_color_id = 0
w_ratio = .4
image = np.array(image).astype('float32')
for dt in np.array(segms_out):
segm, num_id, score = dt.tolist()
if score < draw_threshold:
continue
mask = mask_util.decode(segm) * 255
color_list = colormap(rgb=True)
color_mask = color_list[mask_color_id % len(color_list), 0:3]
mask_color_id += 1
for c in range(3):
color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255
idx = np.nonzero(mask)
image[idx[0], idx[1], :] *= 1.0 - alpha
image[idx[0], idx[1], :] += alpha * color_mask
image = Image.fromarray(image.astype('uint8'))
return image
def segm_results(im_results, masks, im_info):
im_results = np.array(im_results)
class_num = cfg.class_num
M = cfg.resolution
scale = (M + 2.0) / M
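# The (M + 2) / M factor matches the zero-padded (M + 2) x (M + 2) canvas used
# below: the one-pixel border lets the bilinear resize decay to zero at the box
# boundary instead of clipping the mask hard at the edge.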
lod = masks.lod()[0]
masks_v = np.array(masks)
boxes = im_results[:, 2:]
labels = im_results[:, 0]
segms_results = [[] for _ in range(len(lod) - 1)]
sum = 0
for i in range(len(lod) - 1):
im_results_n = im_results[lod[i]:lod[i + 1]]
cls_segms = []
masks_n = masks_v[lod[i]:lod[i + 1]]
boxes_n = boxes[lod[i]:lod[i + 1]]
labels_n = labels[lod[i]:lod[i + 1]]
im_h = int(round(im_info[i][0] / im_info[i][2]))
im_w = int(round(im_info[i][1] / im_info[i][2]))
boxes_n = box_utils.expand_boxes(boxes_n, scale)
boxes_n = boxes_n.astype(np.int32)
padded_mask = np.zeros((M + 2, M + 2), dtype=np.float32)
for j in range(len(im_results_n)):
class_id = int(labels_n[j])
padded_mask[1:-1, 1:-1] = masks_n[j, class_id, :, :]
ref_box = boxes_n[j, :]
w = ref_box[2] - ref_box[0] + 1
h = ref_box[3] - ref_box[1] + 1
w = np.maximum(w, 1)
h = np.maximum(h, 1)
mask = cv2.resize(padded_mask, (w, h))
mask = np.array(mask > cfg.mrcnn_thresh_binarize, dtype=np.uint8)
im_mask = np.zeros((im_h, im_w), dtype=np.uint8)
x_0 = max(ref_box[0], 0)
x_1 = min(ref_box[2] + 1, im_w)
y_0 = max(ref_box[1], 0)
y_1 = min(ref_box[3] + 1, im_h)
im_mask[y_0:y_1, x_0:x_1] = mask[(y_0 - ref_box[1]):(y_1 - ref_box[
1]), (x_0 - ref_box[0]):(x_1 - ref_box[0])]
sum += im_mask.sum()
rle = mask_util.encode(
np.array(
im_mask[:, :, np.newaxis], order='F'))[0]
cls_segms.append(rle)
segms_results[i] = np.array(cls_segms)[:, np.newaxis]
segms_results = np.vstack([segms_results[k] for k in range(len(lod) - 1)])
im_results = np.hstack([segms_results, im_results])
return im_results[:, :3]
import os
import time
import numpy as np
from eval_helper import get_nmsed_box
from eval_helper import get_dt_res
from eval_helper import draw_bounding_box_on_image
from eval_helper import *
import paddle
import paddle.fluid as fluid
import reader
......@@ -14,17 +12,16 @@ import json
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval, Params
from config import cfg
from roidbs import DatasetPath
def infer():
if '2014' in cfg.dataset:
test_list = 'annotations/instances_val2014.json'
elif '2017' in cfg.dataset:
test_list = 'annotations/instances_val2017.json'
data_path = DatasetPath('val')
test_list = data_path.get_file_list()
cocoGt = COCO(os.path.join(cfg.data_dir, test_list))
numId_to_catId_map = {i + 1: v for i, v in enumerate(cocoGt.getCatIds())}
cocoGt = COCO(test_list)
num_id_to_cat_id_map = {i + 1: v for i, v in enumerate(cocoGt.getCatIds())}
category_ids = cocoGt.getCatIds()
label_list = {
item['id']: item['name']
......@@ -34,13 +31,15 @@ def infer():
image_shape = [3, cfg.TEST.max_size, cfg.TEST.max_size]
class_nums = cfg.class_num
model = model_builder.FasterRCNN(
model = model_builder.RCNN(
add_conv_body_func=resnet.add_ResNet50_conv4_body,
add_roi_box_head_func=resnet.add_ResNet_roi_conv5_head,
use_pyreader=False,
is_train=False)
model.build_model(image_shape)
rpn_rois, confs, locs = model.eval_out()
pred_boxes = model.eval_bbox_out()
if cfg.MASK_ON:
masks = model.eval_mask_out()
place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
# yapf: disable
......@@ -53,17 +52,29 @@ def infer():
feeder = fluid.DataFeeder(place=place, feed_list=model.feeds())
dts_res = []
fetch_list = [rpn_rois, confs, locs]
segms_res = []
if cfg.MASK_ON:
fetch_list = [pred_boxes, masks]
else:
fetch_list = [pred_boxes]
data = next(infer_reader())
im_info = [data[0][1]]
rpn_rois_v, confs_v, locs_v = exe.run(
fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data),
return_numpy=False)
new_lod, nmsed_out = get_nmsed_box(rpn_rois_v, confs_v, locs_v, class_nums,
im_info, numId_to_catId_map)
result = exe.run(fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data),
return_numpy=False)
pred_boxes_v = result[0]
if cfg.MASK_ON:
masks_v = result[1]
new_lod = pred_boxes_v.lod()
nmsed_out = pred_boxes_v
path = os.path.join(cfg.image_path, cfg.image_name)
draw_bounding_box_on_image(path, nmsed_out, cfg.draw_threshold, label_list)
image = None
if cfg.MASK_ON:
segms_out = segm_results(nmsed_out, masks_v, im_info)
image = draw_mask_on_image(path, segms_out, cfg.draw_threshold)
draw_bounding_box_on_image(path, nmsed_out, cfg.draw_threshold, label_list,
num_id_to_cat_id_map, image)
if __name__ == '__main__':
......
......@@ -16,11 +16,12 @@ import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddle.fluid.initializer import Constant
from paddle.fluid.initializer import Normal
from paddle.fluid.initializer import MSRA
from paddle.fluid.regularizer import L2Decay
from config import cfg
class FasterRCNN(object):
class RCNN(object):
def __init__(self,
add_conv_body_func=None,
add_roi_box_head_func=None,
......@@ -32,7 +33,6 @@ class FasterRCNN(object):
self.is_train = is_train
self.use_pyreader = use_pyreader
self.use_random = use_random
#self.py_reader = None
def build_model(self, image_shape):
self.build_input(image_shape)
......@@ -41,31 +41,62 @@ class FasterRCNN(object):
self.rpn_heads(body_conv)
# Fast RCNN
self.fast_rcnn_heads(body_conv)
if not self.is_train:
self.eval_bbox()
# Mask RCNN
if cfg.MASK_ON:
self.mask_rcnn_heads(body_conv)
def loss(self):
losses = []
# Fast RCNN loss
loss_cls, loss_bbox = self.fast_rcnn_loss()
# RPN loss
rpn_cls_loss, rpn_reg_loss = self.rpn_loss()
return loss_cls, loss_bbox, rpn_cls_loss, rpn_reg_loss,
losses = [loss_cls, loss_bbox, rpn_cls_loss, rpn_reg_loss]
rkeys = ['loss', 'loss_cls', 'loss_bbox', \
'loss_rpn_cls', 'loss_rpn_bbox',]
if cfg.MASK_ON:
loss_mask = self.mask_rcnn_loss()
losses = losses + [loss_mask]
rkeys = rkeys + ["loss_mask"]
loss = fluid.layers.sum(losses)
rloss = [loss] + losses
return rloss, rkeys
def eval_out(self):
cls_prob = fluid.layers.softmax(self.cls_score, use_cudnn=False)
return [self.rpn_rois, cls_prob, self.bbox_pred]
def eval_mask_out(self):
return self.mask_fcn_logits
def eval_bbox_out(self):
return self.pred_result
def build_input(self, image_shape):
if self.use_pyreader:
in_shapes = [[-1] + image_shape, [-1, 4], [-1, 1], [-1, 1],
[-1, 3], [-1, 1]]
lod_levels = [0, 1, 1, 1, 0, 0]
dtypes = [
'float32', 'float32', 'int32', 'int32', 'float32', 'int32'
]
if cfg.MASK_ON:
in_shapes.append([-1, 2])
lod_levels.append(3)
dtypes.append('float32')
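# gt_masks is a 3-level LoDTensor of polygon vertices (image -> instance ->
# polygon), each row holding one (x, y) point -- hence the [-1, 2] shape.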
self.py_reader = fluid.layers.py_reader(
capacity=64,
shapes=[[-1] + image_shape, [-1, 4], [-1, 1], [-1, 1], [-1, 3],
[-1, 1]],
lod_levels=[0, 1, 1, 1, 0, 0],
dtypes=[
"float32", "float32", "int32", "int32", "float32", "int32"
],
shapes=in_shapes,
lod_levels=lod_levels,
dtypes=dtypes,
use_double_buffer=True)
self.image, self.gt_box, self.gt_label, self.is_crowd, \
self.im_info, self.im_id = fluid.layers.read_file(self.py_reader)
ins = fluid.layers.read_file(self.py_reader)
self.image = ins[0]
self.gt_box = ins[1]
self.gt_label = ins[2]
self.is_crowd = ins[3]
self.im_info = ins[4]
self.im_id = ins[5]
if cfg.MASK_ON:
self.gt_masks = ins[6]
else:
self.image = fluid.layers.data(
name='image', shape=image_shape, dtype='float32')
......@@ -74,24 +105,55 @@ class FasterRCNN(object):
self.gt_label = fluid.layers.data(
name='gt_label', shape=[1], dtype='int32', lod_level=1)
self.is_crowd = fluid.layers.data(
name='is_crowd',
shape=[-1],
dtype='int32',
lod_level=1,
append_batch_size=False)
name='is_crowd', shape=[1], dtype='int32', lod_level=1)
self.im_info = fluid.layers.data(
name='im_info', shape=[3], dtype='float32')
self.im_id = fluid.layers.data(
name='im_id', shape=[1], dtype='int32')
if cfg.MASK_ON:
self.gt_masks = fluid.layers.data(
name='gt_masks', shape=[2], dtype='float32', lod_level=3)
def feeds(self):
if not self.is_train:
return [self.image, self.im_info, self.im_id]
if not cfg.MASK_ON:
return [
self.image, self.gt_box, self.gt_label, self.is_crowd,
self.im_info, self.im_id
]
return [
self.image, self.gt_box, self.gt_label, self.is_crowd, self.im_info,
self.im_id
self.im_id, self.gt_masks
]
def eval_bbox(self):
self.im_scale = fluid.layers.slice(
self.im_info, [1], starts=[2], ends=[3])
im_scale_lod = fluid.layers.sequence_expand(self.im_scale,
self.rpn_rois)
boxes = self.rpn_rois / im_scale_lod
cls_prob = fluid.layers.softmax(self.cls_score, use_cudnn=False)
bbox_pred_reshape = fluid.layers.reshape(self.bbox_pred,
(-1, cfg.class_num, 4))
decoded_box = fluid.layers.box_coder(
prior_box=boxes,
prior_box_var=cfg.bbox_reg_weights,
target_box=bbox_pred_reshape,
code_type='decode_center_size',
box_normalized=False,
axis=1)
        clipped_box = fluid.layers.box_clip(
            input=decoded_box, im_info=self.im_info)
self.pred_result = fluid.layers.multiclass_nms(
            bboxes=clipped_box,
scores=cls_prob,
score_threshold=cfg.TEST.score_thresh,
nms_top_k=-1,
nms_threshold=cfg.TEST.nms_thresh,
keep_top_k=cfg.TEST.detections_per_im,
normalized=False)
def rpn_heads(self, rpn_input):
# RPN hidden representation
dim_out = rpn_input.shape[1]
......@@ -157,7 +219,7 @@ class FasterRCNN(object):
nms_thresh = param_obj.rpn_nms_thresh
min_size = param_obj.rpn_min_size
eta = param_obj.rpn_eta
rpn_rois, rpn_roi_probs = fluid.layers.generate_proposals(
self.rpn_rois, self.rpn_roi_probs = fluid.layers.generate_proposals(
scores=rpn_cls_score_prob,
bbox_deltas=self.rpn_bbox_pred,
im_info=self.im_info,
......@@ -168,10 +230,9 @@ class FasterRCNN(object):
nms_thresh=nms_thresh,
min_size=min_size,
eta=eta)
self.rpn_rois = rpn_rois
if self.is_train:
outs = fluid.layers.generate_proposal_labels(
rpn_rois=rpn_rois,
rpn_rois=self.rpn_rois,
gt_classes=self.gt_label,
is_crowd=self.is_crowd,
gt_boxes=self.gt_box,
......@@ -191,27 +252,28 @@ class FasterRCNN(object):
self.bbox_inside_weights = outs[3]
self.bbox_outside_weights = outs[4]
if cfg.MASK_ON:
mask_out = fluid.layers.generate_mask_labels(
im_info=self.im_info,
gt_classes=self.gt_label,
is_crowd=self.is_crowd,
gt_segms=self.gt_masks,
rois=self.rois,
labels_int32=self.labels_int32,
num_classes=cfg.class_num,
resolution=cfg.resolution)
self.mask_rois = mask_out[0]
self.roi_has_mask_int32 = mask_out[1]
self.mask_int32 = mask_out[2]
def fast_rcnn_heads(self, roi_input):
if self.is_train:
pool_rois = self.rois
else:
pool_rois = self.rpn_rois
if cfg.roi_func == 'RoIPool':
pool = fluid.layers.roi_pool(
input=roi_input,
rois=pool_rois,
pooled_height=cfg.roi_resolution,
pooled_width=cfg.roi_resolution,
spatial_scale=cfg.spatial_scale)
elif cfg.roi_func == 'RoIAlign':
pool = fluid.layers.roi_align(
input=roi_input,
rois=pool_rois,
pooled_height=cfg.roi_resolution,
pooled_width=cfg.roi_resolution,
spatial_scale=cfg.spatial_scale,
sampling_ratio=cfg.sampling_ratio)
rcnn_out = self.add_roi_box_head_func(pool)
self.res5_2_sum = self.add_roi_box_head_func(roi_input, pool_rois)
rcnn_out = fluid.layers.pool2d(
self.res5_2_sum, pool_type='avg', pool_size=7, name='res5_pool')
self.cls_score = fluid.layers.fc(input=rcnn_out,
size=cfg.class_num,
act=None,
......@@ -237,15 +299,87 @@ class FasterRCNN(object):
learning_rate=2.,
regularizer=L2Decay(0.)))
def SuffixNet(self, conv5):
mask_out = fluid.layers.conv2d_transpose(
input=conv5,
num_filters=cfg.dim_reduced,
filter_size=2,
stride=2,
act='relu',
param_attr=ParamAttr(
name='conv5_mask_w', initializer=MSRA(uniform=False)),
bias_attr=ParamAttr(
name='conv5_mask_b', learning_rate=2., regularizer=L2Decay(0.)))
act_func = None
if not self.is_train:
act_func = 'sigmoid'
mask_fcn_logits = fluid.layers.conv2d(
input=mask_out,
num_filters=cfg.class_num,
filter_size=1,
act=act_func,
param_attr=ParamAttr(
name='mask_fcn_logits_w', initializer=MSRA(uniform=False)),
bias_attr=ParamAttr(
name="mask_fcn_logits_b",
learning_rate=2.,
regularizer=L2Decay(0.)))
if not self.is_train:
mask_fcn_logits = fluid.layers.lod_reset(mask_fcn_logits,
self.pred_result)
return mask_fcn_logits
def mask_rcnn_heads(self, mask_input):
if self.is_train:
conv5 = fluid.layers.gather(self.res5_2_sum,
self.roi_has_mask_int32)
self.mask_fcn_logits = self.SuffixNet(conv5)
else:
self.eval_bbox()
pred_res_shape = fluid.layers.shape(self.pred_result)
shape = fluid.layers.reduce_prod(pred_res_shape)
shape = fluid.layers.reshape(shape, [1, 1])
ones = fluid.layers.fill_constant([1, 1], value=1, dtype='int32')
cond = fluid.layers.equal(x=shape, y=ones)
ie = fluid.layers.IfElse(cond)
with ie.true_block():
pred_res_null = ie.input(self.pred_result)
ie.output(pred_res_null)
with ie.false_block():
pred_res = ie.input(self.pred_result)
pred_boxes = fluid.layers.slice(
pred_res, [1], starts=[2], ends=[6])
im_scale_lod = fluid.layers.sequence_expand(self.im_scale,
pred_boxes)
mask_rois = pred_boxes * im_scale_lod
conv5 = self.add_roi_box_head_func(mask_input, mask_rois)
mask_fcn = self.SuffixNet(conv5)
ie.output(mask_fcn)
self.mask_fcn_logits = ie()[0]
def mask_rcnn_loss(self):
mask_label = fluid.layers.cast(x=self.mask_int32, dtype='float32')
reshape_dim = cfg.class_num * cfg.resolution * cfg.resolution
mask_fcn_logits_reshape = fluid.layers.reshape(self.mask_fcn_logits,
(-1, reshape_dim))
loss_mask = fluid.layers.sigmoid_cross_entropy_with_logits(
x=mask_fcn_logits_reshape,
label=mask_label,
ignore_index=-1,
normalize=True)
loss_mask = fluid.layers.reduce_sum(loss_mask, name='loss_mask')
return loss_mask
def fast_rcnn_loss(self):
labels_int64 = fluid.layers.cast(x=self.labels_int32, dtype='int64')
labels_int64.stop_gradient = True
#loss_cls = fluid.layers.softmax_with_cross_entropy(
# logits=cls_score,
# label=labels_int64
# )
cls_prob = fluid.layers.softmax(self.cls_score, use_cudnn=False)
loss_cls = fluid.layers.cross_entropy(cls_prob, labels_int64)
loss_cls = fluid.layers.softmax_with_cross_entropy(
logits=self.cls_score,
label=labels_int64,
numeric_stable_mode=True, )
loss_cls = fluid.layers.reduce_mean(loss_cls)
loss_bbox = fluid.layers.smooth_l1(
x=self.bbox_pred,
......@@ -303,5 +437,4 @@ class FasterRCNN(object):
norm = fluid.layers.reduce_prod(score_shape)
norm.stop_gradient = True
rpn_reg_loss = rpn_reg_loss / norm
return rpn_cls_loss, rpn_reg_loss
......@@ -160,8 +160,22 @@ def add_ResNet50_conv4_body(body_input):
return res4
def add_ResNet_roi_conv5_head(head_input):
res5 = layer_warp(bottleneck, head_input, 512, 3, 2, name="res5")
res5_pool = fluid.layers.pool2d(
res5, pool_type='avg', pool_size=7, name='res5_pool')
return res5_pool
def add_ResNet_roi_conv5_head(head_input, rois):
if cfg.roi_func == 'RoIPool':
pool = fluid.layers.roi_pool(
input=head_input,
rois=rois,
pooled_height=cfg.roi_resolution,
pooled_width=cfg.roi_resolution,
spatial_scale=cfg.spatial_scale)
elif cfg.roi_func == 'RoIAlign':
pool = fluid.layers.roi_align(
input=head_input,
rois=rois,
pooled_height=cfg.roi_resolution,
pooled_width=cfg.roi_resolution,
spatial_scale=cfg.spatial_scale,
sampling_ratio=cfg.sampling_ratio)
res5 = layer_warp(bottleneck, pool, 512, 3, 2, name="res5")
return res5
......@@ -37,18 +37,15 @@ def train():
devices = os.getenv("CUDA_VISIBLE_DEVICES") or ""
devices_num = len(devices.split(","))
total_batch_size = devices_num * cfg.TRAIN.im_per_batch
model = model_builder.FasterRCNN(
model = model_builder.RCNN(
add_conv_body_func=resnet.add_ResNet50_conv4_body,
add_roi_box_head_func=resnet.add_ResNet_roi_conv5_head,
use_pyreader=cfg.use_pyreader,
use_random=False)
model.build_model(image_shape)
loss_cls, loss_bbox, rpn_cls_loss, rpn_reg_loss = model.loss()
loss_cls.persistable = True
loss_bbox.persistable = True
rpn_cls_loss.persistable = True
rpn_reg_loss.persistable = True
loss = loss_cls + loss_bbox + rpn_cls_loss + rpn_reg_loss
losses, keys = model.loss()
loss = losses[0]
fetch_list = [loss]
boundaries = cfg.lr_steps
gamma = cfg.lr_gamma
......@@ -95,8 +92,6 @@ def train():
train_reader = reader.train(batch_size=total_batch_size, shuffle=False)
feeder = fluid.DataFeeder(place=place, feed_list=model.feeds())
fetch_list = [loss, loss_cls, loss_bbox, rpn_cls_loss, rpn_reg_loss]
def run(iterations):
reader_time = []
run_time = []
......@@ -109,20 +104,16 @@ def train():
reader_time.append(end_time - start_time)
start_time = time.time()
if cfg.parallel:
losses = train_exe.run(fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data))
outs = train_exe.run(fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data))
else:
losses = exe.run(fluid.default_main_program(),
fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data))
outs = exe.run(fluid.default_main_program(),
fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data))
end_time = time.time()
run_time.append(end_time - start_time)
total_images += len(data)
lr = np.array(fluid.global_scope().find_var('learning_rate')
.get_tensor())
print("Batch {:d}, lr {:.6f}, loss {:.6f} ".format(batch_id, lr[0],
losses[0][0]))
print("Batch {:d}, loss {:.6f} ".format(batch_id, np.mean(outs[0])))
return reader_time, run_time, total_images
def run_pyreader(iterations):
......@@ -135,18 +126,16 @@ def train():
for batch_id in range(iterations):
start_time = time.time()
if cfg.parallel:
losses = train_exe.run(
outs = train_exe.run(
fetch_list=[v.name for v in fetch_list])
else:
losses = exe.run(fluid.default_main_program(),
fetch_list=[v.name for v in fetch_list])
outs = exe.run(fluid.default_main_program(),
fetch_list=[v.name for v in fetch_list])
end_time = time.time()
run_time.append(end_time - start_time)
total_images += devices_num
lr = np.array(fluid.global_scope().find_var('learning_rate')
.get_tensor())
print("Batch {:d}, lr {:.6f}, loss {:.6f} ".format(batch_id, lr[
0], losses[0][0]))
print("Batch {:d}, loss {:.6f} ".format(batch_id,
np.mean(outs[0])))
except fluid.core.EOFException:
py_reader.reset()
......
......@@ -27,6 +27,46 @@ from collections import deque
from roidbs import JsonDataset
import data_utils
from config import cfg
import segm_utils
def roidb_reader(roidb, mode):
im, im_scales = data_utils.get_image_blob(roidb, mode)
im_id = roidb['id']
im_height = np.round(roidb['height'] * im_scales)
im_width = np.round(roidb['width'] * im_scales)
im_info = np.array([im_height, im_width, im_scales], dtype=np.float32)
if mode == 'val' or mode == 'infer':
return im, im_info, im_id
gt_boxes = roidb['gt_boxes'].astype('float32')
gt_classes = roidb['gt_classes'].astype('int32')
is_crowd = roidb['is_crowd'].astype('int32')
segms = roidb['segms']
outs = (im, gt_boxes, gt_classes, is_crowd, im_info, im_id)
if cfg.MASK_ON:
gt_masks = []
valid = True
segms = roidb['segms']
assert len(segms) == is_crowd.shape[0]
for i in range(len(roidb['segms'])):
segm, iscrowd = segms[i], is_crowd[i]
gt_segm = []
if iscrowd:
gt_segm.append([[0, 0]])
else:
for poly in segm:
if len(poly) == 0:
valid = False
break
gt_segm.append(np.array(poly).reshape(-1, 2))
if (not valid) or len(gt_segm) == 0:
break
gt_masks.append(gt_segm)
outs = outs + (gt_masks, )
return outs
def coco(mode,
......@@ -34,48 +74,16 @@ def coco(mode,
total_batch_size=None,
padding_total=False,
shuffle=False):
if 'coco2014' in cfg.dataset:
cfg.train_file_list = 'annotations/instances_train2014.json'
cfg.train_data_dir = 'train2014'
cfg.val_file_list = 'annotations/instances_val2014.json'
cfg.val_data_dir = 'val2014'
elif 'coco2017' in cfg.dataset:
cfg.train_file_list = 'annotations/instances_train2017.json'
cfg.train_data_dir = 'train2017'
cfg.val_file_list = 'annotations/instances_val2017.json'
cfg.val_data_dir = 'val2017'
else:
raise NotImplementedError('Dataset {} not supported'.format(
cfg.dataset))
cfg.mean_value = np.array(cfg.pixel_means)[np.newaxis,
np.newaxis, :].astype('float32')
total_batch_size = total_batch_size if total_batch_size else batch_size
if mode != 'infer':
assert total_batch_size % batch_size == 0
if mode == 'train':
cfg.train_file_list = os.path.join(cfg.data_dir, cfg.train_file_list)
cfg.train_data_dir = os.path.join(cfg.data_dir, cfg.train_data_dir)
elif mode == 'test' or mode == 'infer':
cfg.val_file_list = os.path.join(cfg.data_dir, cfg.val_file_list)
cfg.val_data_dir = os.path.join(cfg.data_dir, cfg.val_data_dir)
json_dataset = JsonDataset(train=(mode == 'train'))
json_dataset = JsonDataset(mode)
roidbs = json_dataset.get_roidb()
print("{} on {} with {} roidbs".format(mode, cfg.dataset, len(roidbs)))
def roidb_reader(roidb, mode):
im, im_scales = data_utils.get_image_blob(roidb, mode)
im_id = roidb['id']
im_height = np.round(roidb['height'] * im_scales)
im_width = np.round(roidb['width'] * im_scales)
im_info = np.array([im_height, im_width, im_scales], dtype=np.float32)
if mode == 'test' or mode == 'infer':
return im, im_info, im_id
gt_boxes = roidb['gt_boxes'].astype('float32')
gt_classes = roidb['gt_classes'].astype('int32')
is_crowd = roidb['is_crowd'].astype('int32')
return im, gt_boxes, gt_classes, is_crowd, im_info, im_id
def padding_minibatch(batch_data):
if len(batch_data) == 1:
return batch_data
......@@ -93,39 +101,53 @@ def coco(mode,
def reader():
if mode == "train":
roidb_perm = deque(np.random.permutation(roidbs))
if shuffle:
roidb_perm = deque(np.random.permutation(roidbs))
else:
roidb_perm = deque(roidbs)
roidb_cur = 0
count = 0
batch_out = []
        device_num = total_batch_size // batch_size
while True:
roidb = roidb_perm[0]
roidb_cur += 1
roidb_perm.rotate(-1)
if roidb_cur >= len(roidbs):
roidb_perm = deque(np.random.permutation(roidbs))
if shuffle:
roidb_perm = deque(np.random.permutation(roidbs))
else:
roidb_perm = deque(roidbs)
roidb_cur = 0
im, gt_boxes, gt_classes, is_crowd, im_info, im_id = roidb_reader(
roidb, mode)
if gt_boxes.shape[0] == 0:
# im, gt_boxes, gt_classes, is_crowd, im_info, im_id, gt_masks
datas = roidb_reader(roidb, mode)
if datas[1].shape[0] == 0:
continue
batch_out.append(
(im, gt_boxes, gt_classes, is_crowd, im_info, im_id))
if cfg.MASK_ON:
if len(datas[-1]) != datas[1].shape[0]:
continue
batch_out.append(datas)
if not padding_total:
if len(batch_out) == batch_size:
yield padding_minibatch(batch_out)
count += 1
batch_out = []
else:
if len(batch_out) == total_batch_size:
batch_out = padding_minibatch(batch_out)
for i in range(total_batch_size / batch_size):
for i in range(device_num):
sub_batch_out = []
for j in range(batch_size):
sub_batch_out.append(batch_out[i * batch_size +
j])
yield sub_batch_out
count += 1
sub_batch_out = []
batch_out = []
elif mode == "test":
iter_id = count // device_num
if iter_id >= cfg.max_iter:
return
elif mode == "val":
batch_out = []
for roidb in roidbs:
im, im_info, im_id = roidb_reader(roidb, mode)
......@@ -153,7 +175,7 @@ def train(batch_size, total_batch_size=None, padding_total=False, shuffle=True):
def test(batch_size, total_batch_size=None, padding_total=False):
return coco('test', batch_size, total_batch_size, shuffle=False)
return coco('val', batch_size, total_batch_size, shuffle=False)
def infer():
......
......@@ -36,24 +36,39 @@ import matplotlib
matplotlib.use('Agg')
from pycocotools.coco import COCO
import box_utils
import segm_utils
from config import cfg
logger = logging.getLogger(__name__)
class DatasetPath(object):
def __init__(self, mode):
self.mode = mode
mode_name = 'train' if mode == 'train' else 'val'
if cfg.dataset != 'coco2014' and cfg.dataset != 'coco2017':
raise NotImplementedError('Dataset {} not supported'.format(
cfg.dataset))
self.sub_name = mode_name + cfg.dataset[-4:]
def get_data_dir(self):
return os.path.join(cfg.data_dir, self.sub_name)
def get_file_list(self):
sfile_list = 'annotations/instances_' + self.sub_name + '.json'
return os.path.join(cfg.data_dir, sfile_list)
class JsonDataset(object):
"""A class representing a COCO json dataset."""
def __init__(self, train=False):
def __init__(self, mode):
print('Creating: {}'.format(cfg.dataset))
self.name = cfg.dataset
self.is_train = train
if self.is_train:
data_dir = cfg.train_data_dir
file_list = cfg.train_file_list
else:
data_dir = cfg.val_data_dir
file_list = cfg.val_file_list
self.is_train = mode == 'train'
data_path = DatasetPath(mode)
data_dir = data_path.get_data_dir()
file_list = data_path.get_file_list()
self.image_directory = data_dir
self.COCO = COCO(file_list)
# Set up dataset classes
......@@ -91,8 +106,9 @@ class JsonDataset(object):
end_time = time.time()
print('_add_gt_annotations took {:.3f}s'.format(end_time -
start_time))
print('Appending horizontally-flipped training examples...')
self._extend_with_flipped_entries(roidb)
if cfg.TRAIN.use_flipped:
print('Appending horizontally-flipped training examples...')
self._extend_with_flipped_entries(roidb)
print('Loaded dataset: {:s}'.format(self.name))
print('{:d} roidb entries'.format(len(roidb)))
if self.is_train:
......@@ -111,6 +127,7 @@ class JsonDataset(object):
entry['gt_classes'] = np.empty((0), dtype=np.int32)
entry['gt_id'] = np.empty((0), dtype=np.int32)
entry['is_crowd'] = np.empty((0), dtype=np.bool)
entry['segms'] = []
# Remove unwanted fields that come from the json file (if they exist)
for k in ['date_captured', 'url', 'license', 'file_name']:
if k in entry:
......@@ -126,9 +143,15 @@ class JsonDataset(object):
objs = self.COCO.loadAnns(ann_ids)
# Sanitize bboxes -- some are invalid
valid_objs = []
valid_segms = []
width = entry['width']
height = entry['height']
for obj in objs:
if isinstance(obj['segmentation'], list):
# Valid polygons have >= 3 points, so require >= 6 coordinates
obj['segmentation'] = [
p for p in obj['segmentation'] if len(p) >= 6
]
if obj['area'] < cfg.TRAIN.gt_min_area:
continue
if 'ignore' in obj and obj['ignore'] == 1:
......@@ -141,6 +164,8 @@ class JsonDataset(object):
if obj['area'] > 0 and x2 > x1 and y2 > y1:
obj['clean_bbox'] = [x1, y1, x2, y2]
valid_objs.append(obj)
valid_segms.append(obj['segmentation'])
num_valid_objs = len(valid_objs)
gt_boxes = np.zeros((num_valid_objs, 4), dtype=entry['gt_boxes'].dtype)
......@@ -158,6 +183,7 @@ class JsonDataset(object):
entry['gt_classes'] = np.append(entry['gt_classes'], gt_classes)
entry['gt_id'] = np.append(entry['gt_id'], gt_id)
entry['is_crowd'] = np.append(entry['is_crowd'], is_crowd)
entry['segms'].extend(valid_segms)
def _extend_with_flipped_entries(self, roidb):
"""Flip each entry in the given roidb and return a new roidb that is the
......@@ -175,11 +201,13 @@ class JsonDataset(object):
gt_boxes[:, 2] = width - oldx1 - 1
assert (gt_boxes[:, 2] >= gt_boxes[:, 0]).all()
flipped_entry = {}
dont_copy = ('gt_boxes', 'flipped')
dont_copy = ('gt_boxes', 'flipped', 'segms')
for k, v in entry.items():
if k not in dont_copy:
flipped_entry[k] = v
flipped_entry['gt_boxes'] = gt_boxes
flipped_entry['segms'] = segm_utils.flip_segms(
entry['segms'], entry['height'], entry['width'])
flipped_entry['flipped'] = True
flipped_roidb.append(flipped_entry)
roidb.extend(flipped_roidb)
......
#!/bin/bash
export CUDA_VISIBLE_DEVICES=0
model=$1 # faster_rcnn, mask_rcnn
if [ "$model" = "faster_rcnn" ]; then
mask_on="--MASK_ON False"
elif [ "$model" = "mask_rcnn" ]; then
mask_on="--MASK_ON True"
else
echo "Invalid model provided. Please use one of {faster_rcnn, mask_rcnn}"
exit 1
fi
python -u ../eval_coco_map.py \
$mask_on \
--pretrained_model=../output/model_iter179999 \
--data_dir=../dataset/coco/ \
#!/bin/bash
export CUDA_VISIBLE_DEVICES=0
model=$1 # faster_rcnn, mask_rcnn
if [ "$model" = "faster_rcnn" ]; then
mask_on="--MASK_ON False"
elif [ "$model" = "mask_rcnn" ]; then
mask_on="--MASK_ON True"
else
echo "Invalid model provided. Please use one of {faster_rcnn, mask_rcnn}"
exit 1
fi
python -u ../infer.py \
$mask_on \
--pretrained_model=../output/model_iter179999 \
--image_path=../dataset/coco/val2017/ \
--image_name=000000000139.jpg \
--draw_threshold=0.6
#!/bin/bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
model=$1 # faster_rcnn, mask_rcnn
if [ "$model" = "faster_rcnn" ]; then
mask_on="--MASK_ON False"
elif [ "$model" = "mask_rcnn" ]; then
mask_on="--MASK_ON True"
else
echo "Invalid model provided. Please use one of {faster_rcnn, mask_rcnn}"
exit 1
fi
python -u ../train.py \
$mask_on \
--model_save_dir=../output/ \
--pretrained_model=../imagenet_resnet50_fusebn/ \
--data_dir=../dataset/coco/ \
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# --------------------------------------------------------
# Detectron
# Copyright (c) 2017-present, Facebook, Inc.
# Licensed under the Apache License, Version 2.0;
# Written by Ross Girshick
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import pycocotools.mask as mask_util
import cv2
def is_poly(segm):
"""Determine if segm is a polygon. Valid segm expected (polygon or RLE)."""
assert isinstance(segm, (list, dict)), \
'Invalid segm type: {}'.format(type(segm))
return isinstance(segm, list)
def segms_to_rle(segms, height, width):
rle = segms
if isinstance(segms, list):
# polygon -- a single object might consist of multiple parts
# we merge all parts into one mask rle code
rles = mask_util.frPyObjects(segms, height, width)
rle = mask_util.merge(rles)
elif isinstance(segms['counts'], list):
# uncompressed RLE
rle = mask_util.frPyObjects(segms, height, width)
return rle
def segms_to_mask(segms, iscrowd, height, width):
if iscrowd:
return [[0 for i in range(width)] for j in range(height)]
rle = segms_to_rle(segms, height, width)
mask = mask_util.decode(rle)
return mask
def flip_segms(segms, height, width):
"""Left/right flip each mask in a list of masks."""
def _flip_poly(poly, width):
flipped_poly = np.array(poly)
flipped_poly[0::2] = width - np.array(poly[0::2]) - 1
return flipped_poly.tolist()
def _flip_rle(rle, height, width):
if 'counts' in rle and type(rle['counts']) == list:
# Magic RLE format handling painfully discovered by looking at the
# COCO API showAnns function.
rle = mask_util.frPyObjects([rle], height, width)
mask = mask_util.decode(rle)
mask = mask[:, ::-1, :]
rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8))
return rle
flipped_segms = []
for segm in segms:
if is_poly(segm):
# Polygon format
flipped_segms.append([_flip_poly(poly, width) for poly in segm])
else:
# RLE format
flipped_segms.append(_flip_rle(segm, height, width))
return flipped_segms
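To make the polygon branch of flip_segms above concrete, here is a minimal, self-contained sketch with a toy triangle (the values are illustrative only, not from any dataset; x coordinates are mirrored as width - x - 1):
import numpy as np
def flip_poly_example(poly, width):
    # same logic as the _flip_poly helper above
    flipped = np.array(poly)
    flipped[0::2] = width - np.array(poly[0::2]) - 1
    return flipped.tolist()
segm = [[0., 0., 10., 0., 10., 10.]]  # one object made of a single 3-point polygon
print([flip_poly_example(p, width=20) for p in segm])
# [[19.0, 0.0, 9.0, 0.0, 9.0, 10.0]]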
......@@ -20,7 +20,8 @@ import sys
import numpy as np
import time
import shutil
from utility import parse_args, print_arguments, SmoothedValue
from utility import parse_args, print_arguments, SmoothedValue, TrainingStats, now_time
import collections
import paddle
import paddle.fluid as fluid
......@@ -35,7 +36,7 @@ def train():
learning_rate = cfg.learning_rate
image_shape = [3, cfg.TRAIN.max_size, cfg.TRAIN.max_size]
if cfg.debug or cfg.enable_ce:
if cfg.enable_ce:
fluid.default_startup_program().random_seed = 1000
fluid.default_main_program().random_seed = 1000
import random
......@@ -49,36 +50,36 @@ def train():
use_random = True
if cfg.enable_ce:
use_random = False
model = model_builder.FasterRCNN(
model = model_builder.RCNN(
add_conv_body_func=resnet.add_ResNet50_conv4_body,
add_roi_box_head_func=resnet.add_ResNet_roi_conv5_head,
use_pyreader=cfg.use_pyreader,
use_random=use_random)
model.build_model(image_shape)
loss_cls, loss_bbox, rpn_cls_loss, rpn_reg_loss = model.loss()
loss_cls.persistable = True
loss_bbox.persistable = True
rpn_cls_loss.persistable = True
rpn_reg_loss.persistable = True
loss = loss_cls + loss_bbox + rpn_cls_loss + rpn_reg_loss
losses, keys = model.loss()
loss = losses[0]
fetch_list = losses
boundaries = cfg.lr_steps
gamma = cfg.lr_gamma
step_num = len(cfg.lr_steps)
values = [learning_rate * (gamma**i) for i in range(step_num + 1)]
lr = exponential_with_warmup_decay(
learning_rate=learning_rate,
boundaries=boundaries,
values=values,
warmup_iter=cfg.warm_up_iter,
warmup_factor=cfg.warm_up_factor)
optimizer = fluid.optimizer.Momentum(
learning_rate=exponential_with_warmup_decay(
learning_rate=learning_rate,
boundaries=boundaries,
values=values,
warmup_iter=cfg.warm_up_iter,
warmup_factor=cfg.warm_up_factor),
learning_rate=lr,
regularization=fluid.regularizer.L2Decay(cfg.weight_decay),
momentum=cfg.momentum)
optimizer.minimize(loss)
fetch_list = fetch_list + [lr]
fluid.memory_optimize(fluid.default_main_program())
fluid.memory_optimize(
fluid.default_main_program(), skip_opt_set=set(fetch_list))
place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
......@@ -107,7 +108,8 @@ def train():
py_reader = model.py_reader
py_reader.decorate_paddle_reader(train_reader)
else:
train_reader = reader.train(batch_size=total_batch_size, shuffle=shuffle)
train_reader = reader.train(
batch_size=total_batch_size, shuffle=shuffle)
feeder = fluid.DataFeeder(place=place, feed_list=model.feeds())
def save_model(postfix):
......@@ -116,88 +118,72 @@ def train():
shutil.rmtree(model_path)
fluid.io.save_persistables(exe, model_path)
fetch_list = [loss, rpn_cls_loss, rpn_reg_loss, loss_cls, loss_bbox]
def train_loop_pyreader():
py_reader.start()
smoothed_loss = SmoothedValue(cfg.log_window)
train_stats = TrainingStats(cfg.log_window, keys)
try:
start_time = time.time()
prev_start_time = start_time
total_time = 0
last_loss = 0
every_pass_loss = []
for iter_id in range(cfg.max_iter):
prev_start_time = start_time
start_time = time.time()
losses = train_exe.run(fetch_list=[v.name for v in fetch_list])
every_pass_loss.append(np.mean(np.array(losses[0])))
smoothed_loss.add_value(np.mean(np.array(losses[0])))
lr = np.array(fluid.global_scope().find_var('learning_rate')
.get_tensor())
print("Iter {:d}, lr {:.6f}, loss {:.6f}, time {:.5f}".format(
iter_id, lr[0],
smoothed_loss.get_median_value(
), start_time - prev_start_time))
end_time = time.time()
total_time += end_time - start_time
last_loss = np.mean(np.array(losses[0]))
outs = train_exe.run(fetch_list=[v.name for v in fetch_list])
stats = {k: np.array(v).mean() for k, v in zip(keys, outs[:-1])}
train_stats.update(stats)
logs = train_stats.log()
strs = '{}, iter: {}, lr: {:.5f}, {}, time: {:.3f}'.format(
now_time(), iter_id,
np.mean(outs[-1]), logs, start_time - prev_start_time)
print(strs)
sys.stdout.flush()
if (iter_id + 1) % cfg.TRAIN.snapshot_iter == 0:
save_model("model_iter{}".format(iter_id))
# only for ce
end_time = time.time()
total_time = end_time - start_time
last_loss = np.array(outs[0]).mean()
if cfg.enable_ce:
gpu_num = devices_num
epoch_idx = iter_id + 1
loss = last_loss
print("kpis\teach_pass_duration_card%s\t%s" %
(gpu_num, total_time / epoch_idx))
print("kpis\ttrain_loss_card%s\t%s" %
(gpu_num, loss))
except fluid.core.EOFException:
(gpu_num, total_time / epoch_idx))
print("kpis\ttrain_loss_card%s\t%s" % (gpu_num, loss))
except (StopIteration, fluid.core.EOFException):
py_reader.reset()
return np.mean(every_pass_loss)
def train_loop():
start_time = time.time()
prev_start_time = start_time
start = start_time
total_time = 0
last_loss = 0
every_pass_loss = []
smoothed_loss = SmoothedValue(cfg.log_window)
train_stats = TrainingStats(cfg.log_window, keys)
for iter_id, data in enumerate(train_reader()):
prev_start_time = start_time
start_time = time.time()
losses = train_exe.run(fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data))
loss_v = np.mean(np.array(losses[0]))
every_pass_loss.append(loss_v)
smoothed_loss.add_value(loss_v)
lr = np.array(fluid.global_scope().find_var('learning_rate')
.get_tensor())
end_time = time.time()
total_time += end_time - start_time
last_loss = loss_v
print("Iter {:d}, lr {:.6f}, loss {:.6f}, time {:.5f}".format(
iter_id, lr[0],
smoothed_loss.get_median_value(), start_time - prev_start_time))
outs = train_exe.run(fetch_list=[v.name for v in fetch_list],
feed=feeder.feed(data))
stats = {k: np.array(v).mean() for k, v in zip(keys, outs[:-1])}
train_stats.update(stats)
logs = train_stats.log()
strs = '{}, iter: {}, lr: {:.5f}, {}, time: {:.3f}'.format(
now_time(), iter_id,
np.mean(outs[-1]), logs, start_time - prev_start_time)
print(strs)
sys.stdout.flush()
if (iter_id + 1) % cfg.TRAIN.snapshot_iter == 0:
save_model("model_iter{}".format(iter_id))
if (iter_id + 1) == cfg.max_iter:
break
end_time = time.time()
total_time = end_time - start_time
last_loss = np.array(outs[0]).mean()
# only for ce
if cfg.enable_ce:
gpu_num = devices_num
epoch_idx = iter_id + 1
loss = last_loss
print("kpis\teach_pass_duration_card%s\t%s" %
(gpu_num, total_time / epoch_idx))
print("kpis\ttrain_loss_card%s\t%s" %
(gpu_num, loss))
(gpu_num, total_time / epoch_idx))
print("kpis\ttrain_loss_card%s\t%s" % (gpu_num, loss))
return np.mean(every_pass_loss)
......
......@@ -22,7 +22,9 @@ import sys
import distutils.util
import numpy as np
import six
import collections
from collections import deque
import datetime
from paddle.fluid import core
import argparse
import functools
......@@ -85,6 +87,37 @@ class SmoothedValue(object):
return np.median(self.deque)
def now_time():
return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
class TrainingStats(object):
def __init__(self, window_size, stats_keys):
self.smoothed_losses_and_metrics = {
key: SmoothedValue(window_size)
for key in stats_keys
}
def update(self, stats):
for k, v in self.smoothed_losses_and_metrics.items():
v.add_value(stats[k])
def get(self, extras=None):
stats = collections.OrderedDict()
if extras:
for k, v in extras.items():
stats[k] = v
for k, v in self.smoothed_losses_and_metrics.items():
stats[k] = round(v.get_median_value(), 3)
return stats
def log(self, extras=None):
d = self.get(extras)
strs = ', '.join(str(dict({x: y})).strip('{}') for x, y in d.items())
return strs
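# A minimal sketch of how the train loops shown earlier consume TrainingStats
# (toy keys and values, not from the repository):
#
#   stats = TrainingStats(window_size=20, stats_keys=['loss', 'loss_cls'])
#   stats.update({'loss': 1.25, 'loss_cls': 0.50})
#   print(stats.log())  # -> 'loss': 1.25, 'loss_cls': 0.5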
def parse_args():
"""return all args
"""
......@@ -108,14 +141,15 @@ def parse_args():
add_arg('learning_rate', float, 0.01, "Learning rate.")
add_arg('max_iter', int, 180000, "Iter number.")
add_arg('log_window', int, 20, "Log smooth window, set 1 for debug, set 20 for train.")
# FAST RCNN
# RCNN
# RPN
add_arg('anchor_sizes', int, [32,64,128,256,512], "The size of anchors.")
add_arg('aspect_ratios', float, [0.5,1.0,2.0], "The ratio of anchors.")
add_arg('variance', float, [1.,1.,1.,1.], "The variance of anchors.")
add_arg('rpn_stride', float, [16.,16.], "Stride of the feature map to which RPN is attached.")
add_arg('rpn_nms_thresh', float, 0.7, "NMS threshold used on RPN proposals")
# TRAIN TEST INFER
# TRAIN VAL INFER
add_arg('MASK_ON', bool, False, "Select the model: False for faster_rcnn, True for mask_rcnn.")
add_arg('im_per_batch', int, 1, "Minibatch size.")
add_arg('max_size', int, 1333, "The max side length of the resized image.")
add_arg('scales', int, [800], "The target length of the shorter side of the resized image.")
......@@ -124,7 +158,6 @@ def parse_args():
add_arg('nms_thresh', float, 0.5, "NMS threshold.")
add_arg('score_thresh', float, 0.05, "score threshold for NMS.")
add_arg('snapshot_stride', int, 10000, "save model every snapshot stride.")
add_arg('debug', bool, False, "Debug mode")
# SINGLE EVAL AND DRAW
add_arg('draw_threshold', float, 0.8, "Confidence threshold to draw bbox.")
add_arg('image_path', str, 'dataset/coco/val2017', "The image path used to inference and visualize.")
......@@ -138,5 +171,5 @@ def parse_args():
if 'train' in file_name or 'profile' in file_name:
merge_cfg_from_args(args, 'train')
else:
merge_cfg_from_args(args, 'test')
merge_cfg_from_args(args, 'val')
return args
checkpoints
output*
*.pyc
*.swp
*_result
# Video Classification
To run training:
bash ./scripts/train/train_${model_name}.sh
To run testing:
bash ./scripts/test/test_${model_name}.sh
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserve.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
try:
    from configparser import ConfigParser
except ImportError:  # Python 2 fallback
    from ConfigParser import ConfigParser
from utils import AttrDict
CONFIG_SECS = [
'train',
'valid',
'test',
'infer',
]
def parse_config(cfg_file):
parser = ConfigParser()
cfg = AttrDict()
parser.read(cfg_file)
for sec in parser.sections():
sec_dict = AttrDict()
for k, v in parser.items(sec):
            try:
                v = eval(v)
            except Exception:
                pass  # keep the raw string if it is not a Python expression
setattr(sec_dict, k, v)
setattr(cfg, sec.upper(), sec_dict)
return cfg
def merge_configs(cfg, sec, args_dict):
assert sec in CONFIG_SECS, "invalid config section {}".format(sec)
sec_dict = getattr(cfg, sec.upper())
for k, v in args_dict.items():
if v is None:
continue
try:
if hasattr(sec_dict, k):
setattr(sec_dict, k, v)
        except Exception:
pass
return cfg
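For reference, a minimal usage sketch of parse_config and merge_configs (the file path and overrides here are hypothetical; the INI-style files below follow the layout parse_config expects):
cfg = parse_config('configs/attention_cluster.txt')  # hypothetical path
cfg = merge_configs(cfg, 'train', {'batch_size': 1024, 'use_gpu': None})
print(cfg.TRAIN.batch_size)  # 1024; None-valued overrides are skipped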
[MODEL]
name = "AttentionCluster"
dataset = "YouTube-8M"
bone_network = None
drop_rate = 0.5
feature_num = 2
feature_names = ['rgb', 'audio']
feature_dims = [1024, 128]
seg_num = 100
cluster_nums = [32, 32]
num_classes = 3862
topk = 20
[TRAIN]
epoch = 5
learning_rate = 0.001
pretrain_base = None
batch_size = 2048
use_gpu = True
num_gpus = 8
filelist = "dataset/youtube8m/train.list"
[VALID]
batch_size = 2048
filelist = "dataset/youtube8m/val.list"
[TEST]
batch_size = 256
filelist = "dataset/youtube8m/test.list"
[INFER]
batch_size = 1
filelist = "dataset/youtube8m/infer.list"
[MODEL]
name = "AttentionLSTM"
dataset = "YouTube-8M"
bone_network = None
drop_rate = 0.5
feature_num = 2
feature_names = ['rgb', 'audio']
feature_dims = [1024, 128]
embedding_size = 512
lstm_size = 1024
num_classes = 3862
topk = 20
[TRAIN]
epoch = 10
learning_rate = 0.001
decay_epochs = [5]
decay_gamma = 0.1
weight_decay = 0.0008
num_samples = 5000000
pretrain_base = None
batch_size = 1024
use_gpu = True
num_gpus = 8
filelist = "dataset/youtube8m/train.list"
[VALID]
batch_size = 1024
filelist = "dataset/youtube8m/val.list"
[TEST]
batch_size = 128
filelist = "dataset/youtube8m/test.list"
[INFER]
batch_size = 1
filelist = "dataset/youtube8m/infer.list"
[MODEL]
name = "NEXTVLAD"
num_classes = 3862
topk = 20
video_feature_size = 1024
audio_feature_size = 128
cluster_size = 128
hidden_size = 2048
groups = 8
expansion = 2
drop_rate = 0.5
gating_reduction = 8
eigen_file = "./dataset/youtube8m/yt8m_pca/eigenvals.npy"
[TRAIN]
epoch = 6
learning_rate = 0.0002
lr_boundary_examples = 2000000
max_iter = 700000
learning_rate_decay = 0.8
l2_penalty = 1e-5
gradient_clip_norm = 1.0
use_gpu = True
num_gpus = 4
batch_size = 160
filelist = "./dataset/youtube8m/train.list"
[VALID]
batch_size = 160
filelist = "./dataset/youtube8m/val.list"
[TEST]
batch_size = 40
filelist = "./dataset/youtube8m/test.list"
[INFER]
batch_size = 1
filelist = "./dataset/youtube8m/infer.list"
[MODEL]
name = "STNET"
format = "pkl"
num_classes = 400
seg_num = 7
seglen = 5
image_mean = [0.485, 0.456, 0.406]
image_std = [0.229, 0.224, 0.225]
num_layers = 50
[TRAIN]
epoch = 60
short_size = 256
target_size = 224
num_reader_threads = 12
buf_size = 1024
batch_size = 128
num_gpus = 8
use_gpu = True
filelist = "./dataset/kinetics/train.list"
learning_rate = 0.01
learning_rate_decay = 0.1
l2_weight_decay = 1e-4
momentum = 0.9
total_videos = 224684
pretrain_base = "./dataset/pretrained/ResNet50_pretrained"
[VALID]
short_size = 256
target_size = 224
num_reader_threads = 12
buf_size = 1024
batch_size = 128
filelist = "./dataset/kinetics/val.list"
[TEST]
short_size = 256
target_size = 256
num_reader_threads = 12
buf_size = 1024
batch_size = 16
filelist = "./dataset/kinetics/test.list"
[INFER]
short_size = 256
target_size = 256
num_reader_threads = 12
buf_size = 1024
batch_size = 1
filelist = "./dataset/kinetics/infer.list"
[MODEL]
name = "TSN"
format = "pkl"
num_classes = 400
seg_num = 3
seglen = 1
image_mean = [0.485, 0.456, 0.406]
image_std = [0.229, 0.224, 0.225]
num_layers = 50
[TRAIN]
epoch = 45
short_size = 256
target_size = 224
num_reader_threads = 12
buf_size = 1024
batch_size = 256
use_gpu = True
num_gpus = 8
filelist = "./dataset/kinetics/train.list"
learning_rate = 0.01
learning_rate_decay = 0.1
l2_weight_decay = 1e-4
momentum = 0.9
total_videos = 224684
[VALID]
short_size = 256
target_size = 224
num_reader_threads = 12
buf_size = 1024
batch_size = 256
filelist = "./dataset/kinetics/val.list"
[TEST]
short_size = 256
target_size = 224
num_reader_threads = 12
buf_size = 1024
batch_size = 32
filelist = "./dataset/kinetics/test.list"
[INFER]
short_size = 256
target_size = 224
num_reader_threads = 12
buf_size = 1024
batch_size = 1
filelist = "./dataset/kinetics/infer.list"
from .reader_utils import regist_reader, get_reader
from .feature_reader import FeatureReader
from .kinetics_reader import KineticsReader
from .nonlocal_reader import NonlocalReader
regist_reader("ATTENTIONCLUSTER", FeatureReader)
regist_reader("NEXTVLAD", FeatureReader)
regist_reader("ATTENTIONLSTM", FeatureReader)
regist_reader("TSN", KineticsReader)
regist_reader("TSM", KineticsReader)
regist_reader("STNET", KineticsReader)
regist_reader("NONLOCAL", NonlocalReader)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
import sys
from .reader_utils import DataReader
try:
import cPickle as pickle
from cStringIO import StringIO
except ImportError:
import pickle
from io import BytesIO
import numpy as np
import random
python_ver = sys.version_info
class FeatureReader(DataReader):
"""
Data reader for youtube-8M dataset, which was stored as features extracted by prior networks
This is for the three models: lstm, attention cluster, nextvlad
dataset cfg: num_classes
batch_size
list
NextVlad only: eigen_file
"""
def __init__(self, name, mode, cfg):
self.name = name
self.mode = mode
self.num_classes = cfg.MODEL.num_classes
# set batch size and file list
self.batch_size = cfg[mode.upper()]['batch_size']
self.filelist = cfg[mode.upper()]['filelist']
self.eigen_file = cfg.MODEL.get('eigen_file', None)
self.seg_num = cfg.MODEL.get('seg_num', None)
def create_reader(self):
fl = open(self.filelist).readlines()
fl = [line.strip() for line in fl if line.strip() != '']
if self.mode == 'train':
random.shuffle(fl)
def reader():
batch_out = []
for filepath in fl:
if python_ver < (3, 0):
data = pickle.load(open(filepath, 'rb'))
else:
data = pickle.load(open(filepath, 'rb'), encoding='bytes')
indexes = list(range(len(data)))
if self.mode == 'train':
random.shuffle(indexes)
for i in indexes:
record = data[i]
nframes = record[b'nframes']
rgb = record[b'feature'].astype(float)
audio = record[b'audio'].astype(float)
if self.mode != 'infer':
label = record[b'label']
one_hot_label = make_one_hot(label, self.num_classes)
video = record[b'video']
rgb = rgb[0:nframes, :]
audio = audio[0:nframes, :]
rgb = dequantize(
rgb, max_quantized_value=2., min_quantized_value=-2.)
audio = dequantize(
audio, max_quantized_value=2, min_quantized_value=-2)
if self.name == 'NEXTVLAD':
# add the effect of eigen values
eigen_file = self.eigen_file
eigen_val = np.sqrt(np.load(eigen_file)
[:1024, 0]).astype(np.float32)
eigen_val = eigen_val + 1e-4
rgb = (rgb - 4. / 512) * eigen_val
if self.name == 'ATTENTIONCLUSTER':
sample_inds = generate_random_idx(rgb.shape[0],
self.seg_num)
rgb = rgb[sample_inds]
audio = audio[sample_inds]
if self.mode != 'infer':
batch_out.append((rgb, audio, one_hot_label))
else:
batch_out.append((rgb, audio, video))
if len(batch_out) == self.batch_size:
yield batch_out
batch_out = []
return reader
def dequantize(feat_vector, max_quantized_value=2., min_quantized_value=-2.):
"""
Dequantize the feature from the byte format to the float format
"""
assert max_quantized_value > min_quantized_value
quantized_range = max_quantized_value - min_quantized_value
scalar = quantized_range / 255.0
bias = (quantized_range / 512.0) + min_quantized_value
return feat_vector * scalar + bias
def make_one_hot(label, dim=3862):
one_hot_label = np.zeros(dim)
one_hot_label = one_hot_label.astype(float)
for ind in label:
one_hot_label[int(ind)] = 1
return one_hot_label
def generate_random_idx(feature_len, seg_num):
idxs = []
stride = float(feature_len) / seg_num
for i in range(seg_num):
pos = (i + np.random.random()) * stride
idxs.append(min(feature_len - 1, int(pos)))
return idxs
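The dequantize and make_one_hot helpers above can be checked in isolation with toy values (illustrative only, not real YouTube-8M features):
import numpy as np
feat = np.array([0., 128., 255.])
# with the defaults, scalar = 4/255 and bias = 4/512 - 2,
# so 0 maps to about -1.992 and 255 to about 2.008
print(dequantize(feat))
print(make_one_hot([0, 5], dim=8))  # [1. 0. 0. 0. 0. 1. 0. 0.]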
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
import pickle
import cv2
import numpy as np
import random
class ReaderNotFoundError(Exception):
"Error: reader not found"
def __init__(self, reader_name, avail_readers):
super(ReaderNotFoundError, self).__init__()
self.reader_name = reader_name
self.avail_readers = avail_readers
def __str__(self):
msg = "Reader {} Not Found.\nAvailiable readers:\n".format(
self.reader_name)
for reader in self.avail_readers:
msg += " {}\n".format(reader)
return msg
class DataReader(object):
"""data reader for video input"""
def __init__(self, model_name, mode, cfg):
"""Not implemented"""
pass
def create_reader(self):
"""Not implemented"""
pass
class ReaderZoo(object):
def __init__(self):
self.reader_zoo = {}
def regist(self, name, reader):
        assert reader.__base__ == DataReader, "Unknown reader type {}".format(
type(reader))
self.reader_zoo[name] = reader
def get(self, name, mode, cfg):
for k, v in self.reader_zoo.items():
if k == name:
return v(name, mode, cfg)
raise ReaderNotFoundError(name, self.reader_zoo.keys())
# singleton reader_zoo
reader_zoo = ReaderZoo()
def regist_reader(name, reader):
reader_zoo.regist(name, reader)
def get_reader(name, mode, cfg):
reader_model = reader_zoo.get(name, mode, cfg)
return reader_model.create_reader()
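A minimal usage sketch of the registry (assuming cfg was parsed from one of the config files below and 'TSN' was registered as in the package __init__ shown earlier):
reader = get_reader('TSN', 'train', cfg)  # returns the callable built by create_reader
for batch in reader():
    pass  # each yielded item is one batch; its layout is defined by the concrete reader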
1. Download kinetics-400_train.csv and kinetics-400_val.csv.
2. ffmpeg is required to decode the mp4 videos.
3. Convert each mp4 video to a pkl file that stores [video_id, images, label]; a sketch of reading one such pkl follows.
python generate_label.py kinetics-400_train.csv kinetics400_label.txt # generate the label file
python video2pkl.py kinetics-400_train.csv $Source_dir $Target_dir $NUM_THREADS
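A hedged sketch of inspecting one generated pkl, assuming the [video_id, images, label] layout stated in step 3 (the file name is hypothetical):
import pickle
with open('some_video.pkl', 'rb') as f:  # hypothetical output of video2pkl.py
    video_id, images, label = pickle.load(f)
print(video_id, len(images), label)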
1. TensorFlow is required to process the tfrecords.
2. python tf2pkl.py $Source_dir $Target_dir
from .metrics_util import get_metrics
from .model import regist_model, get_model
from .attention_cluster import AttentionCluster
from .nextvlad import NEXTVLAD
from .tsn import TSN
from .stnet import STNET
from .attention_lstm import AttentionLSTM
# regist models
regist_model("AttentionCluster", AttentionCluster)
regist_model("NEXTVLAD", NEXTVLAD)
regist_model("TSN", TSN)
regist_model("STNET", STNET)
regist_model("AttentionLSTM", AttentionLSTM)
from __future__ import absolute_import
from .attention_cluster import *