Commit 8e80e207 authored by Cao Ying, committed by GitHub

Merge pull request #52 from wwhu/in-dev

Add image classification models.
Image Classification
=======================
This document explains how to perform image classification in PaddlePaddle with the AlexNet, VGG, GoogLeNet, and ResNet models. For a description of the image classification problem and an introduction to these four models, please refer to the [PaddlePaddle book](https://github.com/PaddlePaddle/book/tree/develop/03.image_classification).
## Training a Model
### Initialization
First, import the required packages and initialize PaddlePaddle.
```python
import gzip
import paddle.v2.dataset.flowers as flowers
import paddle.v2 as paddle
import reader
import vgg
import resnet
import alexnet
import googlenet
# PaddlePaddle init
paddle.init(use_gpu=False, trainer_count=1)
```
### Define Parameters and Input
Set the algorithm parameters (such as the data dimension, the number of classes, and the batch size), and define the image input layer `image` and the label input layer `lbl`:
```python
DATA_DIM = 3 * 224 * 224
CLASS_DIM = 102
BATCH_SIZE = 128
image = paddle.layer.data(
    name="image", type=paddle.data_type.dense_vector(DATA_DIM))
lbl = paddle.layer.data(
    name="label", type=paddle.data_type.integer_value(CLASS_DIM))
```
### Obtain the Model
You can choose one of AlexNet, VGG, GoogLeNet, or ResNet for image classification. Calling the corresponding function returns the final Softmax layer of the network.
1. Using the AlexNet model
After specifying the input layer `image` and the number of classes `CLASS_DIM`, the Softmax layer of AlexNet is obtained with the following code.
```python
out = alexnet.alexnet(image, class_dim=CLASS_DIM)
```
2. Using the VGG model
Depending on the number of layers, VGG comes in the VGG13, VGG16, and VGG19 variants. The VGG16 model is obtained as follows:
```python
out = vgg.vgg16(image, class_dim=CLASS_DIM)
```
Similarly, VGG13 and VGG19 are available through the `vgg.vgg13` and `vgg.vgg19` functions.
3. Using the GoogLeNet model
During training, GoogLeNet uses two auxiliary classifiers to strengthen the gradient signal and add extra regularization, so `googlenet.googlenet` returns three Softmax layers, as shown below:
```python
out, out1, out2 = googlenet.googlenet(image, class_dim=CLASS_DIM)
loss1 = paddle.layer.cross_entropy_cost(
    input=out1, label=lbl, coeff=0.3)
paddle.evaluator.classification_error(input=out1, label=lbl)
loss2 = paddle.layer.cross_entropy_cost(
    input=out2, label=lbl, coeff=0.3)
paddle.evaluator.classification_error(input=out2, label=lbl)
extra_layers = [loss1, loss2]
```
For the two auxiliary outputs, a cross-entropy loss and a classification-error evaluator are defined for each; the two losses are later passed to the SGD trainer as `extra_layers`.
4. Using the ResNet model
The ResNet model is obtained with the following code:
```python
out = resnet.resnet_imagenet(image, class_dim=CLASS_DIM)
```
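`resnet.resnet_imagenet` also accepts an optional `depth` argument (the default is 50); the supported depths can be seen in its definition later on this page. For example, a deeper variant can be requested like this:
```python
# depth must be one of the configurations defined in the resnet module
# (18, 34, 50, 101 or 152).
out = resnet.resnet_imagenet(image, class_dim=CLASS_DIM, depth=101)
```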
### Define the Loss Function
```python
cost = paddle.layer.classification_cost(input=out, label=lbl)
```
### Create Parameters and the Optimizer
```python
# Create parameters
parameters = paddle.parameters.create(cost)
# Create optimizer
optimizer = paddle.optimizer.Momentum(
    momentum=0.9,
    regularization=paddle.optimizer.L2Regularization(rate=0.0005 *
                                                     BATCH_SIZE),
    learning_rate=0.001 / BATCH_SIZE,
    learning_rate_decay_a=0.1,
    learning_rate_decay_b=128000 * 35,
    learning_rate_schedule="discexp", )
```
The learning-rate schedule is specified by `learning_rate_decay_a` (denoted $a$), `learning_rate_decay_b` (denoted $b$), and `learning_rate_schedule`. Here the discrete exponential schedule (`discexp`) is used; the learning rate is computed with the formula below, where $n$ is the cumulative number of samples processed so far and $lr_{0}$ is the `learning_rate` set above.
$$ lr = lr_{0} * a^{\lfloor \frac{n}{b} \rfloor} $$
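As a quick sanity check of the schedule, the sketch below evaluates the formula directly for the settings above. This is only an illustration of the formula; during training the decay is applied inside PaddlePaddle's optimizer.
```python
import math

# Settings taken from the optimizer above.
BATCH_SIZE = 128
lr0 = 0.001 / BATCH_SIZE  # learning_rate
a = 0.1                   # learning_rate_decay_a
b = 128000 * 35           # learning_rate_decay_b


def discexp_lr(n):
    # lr = lr0 * a^floor(n / b), where n is the number of samples processed
    return lr0 * a**int(math.floor(float(n) / b))


print(discexp_lr(0))            # before any decay: lr0
print(discexp_lr(128000 * 35))  # after b samples: lr0 * 0.1
```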
### Define the Data Readers
We first use the [flowers dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) as an example of how to define the input. The following code defines the readers for the flowers training and validation sets:
```python
train_reader = paddle.batch(
    paddle.reader.shuffle(
        flowers.train(),
        buf_size=1000),
    batch_size=BATCH_SIZE)
test_reader = paddle.batch(
    flowers.valid(),
    batch_size=BATCH_SIZE)
```
To use your own data, you first need to create an image list file. `reader.py` defines how such files are read: it parses the image path and the class label from each line of the list file.
An image list file is a text file in which every line consists of an image path and a class label separated by a tab character. Class labels are integers starting from 0. A fragment of such a file looks like this:
```
dataset_100/train_images/n03982430_23191.jpeg 1
dataset_100/train_images/n04461696_23653.jpeg 7
dataset_100/train_images/n02441942_3170.jpeg 8
dataset_100/train_images/n03733281_31716.jpeg 2
dataset_100/train_images/n03424325_240.jpeg 0
dataset_100/train_images/n02643566_75.jpeg 8
```
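If the images are stored in one sub-directory per class, a list file in this format can be generated with a short script. The sketch below is not part of the original code; it assumes a hypothetical `<data_dir>/<class_name>/<image>` layout, so adapt it to your own data:
```python
import os


def write_image_list(data_dir, list_path):
    # Assign integer labels 0..N-1 to the class sub-directories in sorted
    # order and write one "<image path>\t<label>" line per image.
    class_names = sorted(
        d for d in os.listdir(data_dir)
        if os.path.isdir(os.path.join(data_dir, d)))
    with open(list_path, 'w') as f:
        for label, class_name in enumerate(class_names):
            class_dir = os.path.join(data_dir, class_name)
            for name in sorted(os.listdir(class_dir)):
                f.write('%s\t%d\n' % (os.path.join(class_dir, name), label))
```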
For training, the image list files of the training set and the validation set must be specified separately. Assuming the two files are `train.list` and `val.list`, the readers are defined as follows:
```python
train_reader = paddle.batch(
    paddle.reader.shuffle(
        reader.train_reader('train.list'),
        buf_size=1000),
    batch_size=BATCH_SIZE)
test_reader = paddle.batch(
    reader.test_reader('val.list'),
    batch_size=BATCH_SIZE)
```
### Define the Event Handler
```python
# End batch and end pass event handler
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 1 == 0:
            print "\nPass %d, Batch %d, Cost %f, %s" % (
                event.pass_id, event.batch_id, event.cost, event.metrics)
    if isinstance(event, paddle.event.EndPass):
        with gzip.open('params_pass_%d.tar.gz' % event.pass_id, 'w') as f:
            parameters.to_tar(f)
        result = trainer.test(reader=test_reader)
        print "\nTest with Pass %d, %s" % (event.pass_id, result.metrics)
```
### Create the Trainer
For AlexNet, VGG, and ResNet, the trainer is created as follows:
```python
# Create trainer
trainer = paddle.trainer.SGD(
    cost=cost,
    parameters=parameters,
    update_equation=optimizer)
```
GoogLeNet has two additional output layers, so `extra_layers` must also be specified:
```python
# Create trainer
trainer = paddle.trainer.SGD(
    cost=cost,
    parameters=parameters,
    update_equation=optimizer,
    extra_layers=extra_layers)
```
### Start Training
```python
trainer.train(
    reader=train_reader, num_passes=200, event_handler=event_handler)
```
## Applying the Model
After the model has been trained, the following code can be used to predict the classes of given images.
```python
import numpy as np

# load parameters
with gzip.open('params_pass_10.tar.gz', 'r') as f:
    parameters = paddle.parameters.Parameters.from_tar(f)

file_list = [line.strip() for line in open(image_list_file)]
test_data = [(paddle.image.load_and_transform(image_file, 256, 224, False)
              .flatten().astype('float32'), )
             for image_file in file_list]
probs = paddle.infer(
    output_layer=out, parameters=parameters, input=test_data)
lab = np.argsort(-probs)
for file_name, result in zip(file_list, lab):
    print "Label of %s is: %d" % (file_name, result[0])
```
The code first loads the trained model from disk (the snippet uses the parameters saved after pass 10 as an example), and then reads the images listed in `image_list_file`, a text file with one image path per line. `paddle.infer` is then used to predict the class of every image in `image_list_file`, and the results are printed.
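For instance, `image_list_file` might look like the following (hypothetical paths, one per line, with no labels):
```
dataset_100/test_images/n03982430_23191.jpeg
dataset_100/test_images/n04461696_23653.jpeg
dataset_100/test_images/n02441942_3170.jpeg
```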
The model definitions and scripts used in the tutorial above are listed below.

`alexnet.py`:
```python
import paddle.v2 as paddle

__all__ = ['alexnet']


def alexnet(input, class_dim):
    # conv1: 96 11x11 filters, stride 4, followed by LRN and max pooling
    conv1 = paddle.layer.img_conv(
        input=input,
        filter_size=11,
        num_channels=3,
        num_filters=96,
        stride=4,
        padding=1)
    cmrnorm1 = paddle.layer.img_cmrnorm(
        input=conv1, size=5, scale=0.0001, power=0.75)
    pool1 = paddle.layer.img_pool(input=cmrnorm1, pool_size=3, stride=2)

    # conv2: 256 5x5 filters, followed by LRN and max pooling
    conv2 = paddle.layer.img_conv(
        input=pool1,
        filter_size=5,
        num_filters=256,
        stride=1,
        padding=2,
        groups=1)
    cmrnorm2 = paddle.layer.img_cmrnorm(
        input=conv2, size=5, scale=0.0001, power=0.75)
    pool2 = paddle.layer.img_pool(input=cmrnorm2, pool_size=3, stride=2)

    # conv3-conv5: three 3x3 convolutions followed by max pooling
    pool3 = paddle.networks.img_conv_group(
        input=pool2,
        pool_size=3,
        pool_stride=2,
        conv_num_filter=[384, 384, 256],
        conv_filter_size=3,
        pool_type=paddle.pooling.Max())

    # two fully connected layers with dropout
    fc1 = paddle.layer.fc(
        input=pool3,
        size=4096,
        act=paddle.activation.Relu(),
        layer_attr=paddle.attr.Extra(drop_rate=0.5))
    fc2 = paddle.layer.fc(
        input=fc1,
        size=4096,
        act=paddle.activation.Relu(),
        layer_attr=paddle.attr.Extra(drop_rate=0.5))

    out = paddle.layer.fc(
        input=fc2, size=class_dim, act=paddle.activation.Softmax())
    return out
```
`googlenet.py`:
```python
import paddle.v2 as paddle

__all__ = ['googlenet']


def inception(name, input, channels, filter1, filter3R, filter3, filter5R,
              filter5, proj):
    cov1 = paddle.layer.img_conv(
        name=name + '_1',
        input=input,
        filter_size=1,
        num_channels=channels,
        num_filters=filter1,
        stride=1,
        padding=0)

    cov3r = paddle.layer.img_conv(
        name=name + '_3r',
        input=input,
        filter_size=1,
        num_channels=channels,
        num_filters=filter3R,
        stride=1,
        padding=0)
    cov3 = paddle.layer.img_conv(
        name=name + '_3',
        input=cov3r,
        filter_size=3,
        num_filters=filter3,
        stride=1,
        padding=1)

    cov5r = paddle.layer.img_conv(
        name=name + '_5r',
        input=input,
        filter_size=1,
        num_channels=channels,
        num_filters=filter5R,
        stride=1,
        padding=0)
    cov5 = paddle.layer.img_conv(
        name=name + '_5',
        input=cov5r,
        filter_size=5,
        num_filters=filter5,
        stride=1,
        padding=2)

    pool1 = paddle.layer.img_pool(
        name=name + '_max',
        input=input,
        pool_size=3,
        num_channels=channels,
        stride=1,
        padding=1)
    covprj = paddle.layer.img_conv(
        name=name + '_proj',
        input=pool1,
        filter_size=1,
        num_filters=proj,
        stride=1,
        padding=0)

    cat = paddle.layer.concat(name=name, input=[cov1, cov3, cov5, covprj])
    return cat


def googlenet(input, class_dim):
    # stage 1
    conv1 = paddle.layer.img_conv(
        name="conv1",
        input=input,
        filter_size=7,
        num_channels=3,
        num_filters=64,
        stride=2,
        padding=3)
    pool1 = paddle.layer.img_pool(
        name="pool1", input=conv1, pool_size=3, num_channels=64, stride=2)

    # stage 2
    conv2_1 = paddle.layer.img_conv(
        name="conv2_1",
        input=pool1,
        filter_size=1,
        num_filters=64,
        stride=1,
        padding=0)
    conv2_2 = paddle.layer.img_conv(
        name="conv2_2",
        input=conv2_1,
        filter_size=3,
        num_filters=192,
        stride=1,
        padding=1)
    pool2 = paddle.layer.img_pool(
        name="pool2", input=conv2_2, pool_size=3, num_channels=192, stride=2)

    # stage 3
    ince3a = inception("ince3a", pool2, 192, 64, 96, 128, 16, 32, 32)
    ince3b = inception("ince3b", ince3a, 256, 128, 128, 192, 32, 96, 64)
    pool3 = paddle.layer.img_pool(
        name="pool3", input=ince3b, num_channels=480, pool_size=3, stride=2)

    # stage 4
    ince4a = inception("ince4a", pool3, 480, 192, 96, 208, 16, 48, 64)
    ince4b = inception("ince4b", ince4a, 512, 160, 112, 224, 24, 64, 64)
    ince4c = inception("ince4c", ince4b, 512, 128, 128, 256, 24, 64, 64)
    ince4d = inception("ince4d", ince4c, 512, 112, 144, 288, 32, 64, 64)
    ince4e = inception("ince4e", ince4d, 528, 256, 160, 320, 32, 128, 128)
    pool4 = paddle.layer.img_pool(
        name="pool4", input=ince4e, num_channels=832, pool_size=3, stride=2)

    # stage 5
    ince5a = inception("ince5a", pool4, 832, 256, 160, 320, 32, 128, 128)
    ince5b = inception("ince5b", ince5a, 832, 384, 192, 384, 48, 128, 128)
    pool5 = paddle.layer.img_pool(
        name="pool5",
        input=ince5b,
        num_channels=1024,
        pool_size=7,
        stride=7,
        pool_type=paddle.pooling.Avg())
    dropout = paddle.layer.addto(
        input=pool5,
        layer_attr=paddle.attr.Extra(drop_rate=0.4),
        act=paddle.activation.Linear())

    out = paddle.layer.fc(
        input=dropout, size=class_dim, act=paddle.activation.Softmax())

    # fc for output 1
    pool_o1 = paddle.layer.img_pool(
        name="pool_o1",
        input=ince4a,
        num_channels=512,
        pool_size=5,
        stride=3,
        pool_type=paddle.pooling.Avg())
    conv_o1 = paddle.layer.img_conv(
        name="conv_o1",
        input=pool_o1,
        filter_size=1,
        num_filters=128,
        stride=1,
        padding=0)
    fc_o1 = paddle.layer.fc(
        name="fc_o1",
        input=conv_o1,
        size=1024,
        layer_attr=paddle.attr.Extra(drop_rate=0.7),
        act=paddle.activation.Relu())
    out1 = paddle.layer.fc(
        input=fc_o1, size=class_dim, act=paddle.activation.Softmax())

    # fc for output 2
    pool_o2 = paddle.layer.img_pool(
        name="pool_o2",
        input=ince4d,
        num_channels=528,
        pool_size=5,
        stride=3,
        pool_type=paddle.pooling.Avg())
    conv_o2 = paddle.layer.img_conv(
        name="conv_o2",
        input=pool_o2,
        filter_size=1,
        num_filters=128,
        stride=1,
        padding=0)
    fc_o2 = paddle.layer.fc(
        name="fc_o2",
        input=conv_o2,
        size=1024,
        layer_attr=paddle.attr.Extra(drop_rate=0.7),
        act=paddle.activation.Relu())
    out2 = paddle.layer.fc(
        input=fc_o2, size=class_dim, act=paddle.activation.Softmax())

    return out, out1, out2
```
The inference script:
```python
import gzip
import paddle.v2 as paddle
import reader
import vgg
import resnet
import alexnet
import googlenet
import argparse
import os
from PIL import Image
import numpy as np

WIDTH = 224
HEIGHT = 224
DATA_DIM = 3 * WIDTH * HEIGHT
CLASS_DIM = 102


def main():
    # parse the argument
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'data_list',
        help='The path of data list file, which consists of one image path per line'
    )
    parser.add_argument(
        'model',
        help='The model for image classification',
        choices=['alexnet', 'vgg13', 'vgg16', 'vgg19', 'resnet', 'googlenet'])
    parser.add_argument(
        'params_path', help='The file which stores the parameters')
    args = parser.parse_args()

    # PaddlePaddle init
    paddle.init(use_gpu=True, trainer_count=1)

    image = paddle.layer.data(
        name="image", type=paddle.data_type.dense_vector(DATA_DIM))

    if args.model == 'alexnet':
        out = alexnet.alexnet(image, class_dim=CLASS_DIM)
    elif args.model == 'vgg13':
        out = vgg.vgg13(image, class_dim=CLASS_DIM)
    elif args.model == 'vgg16':
        out = vgg.vgg16(image, class_dim=CLASS_DIM)
    elif args.model == 'vgg19':
        out = vgg.vgg19(image, class_dim=CLASS_DIM)
    elif args.model == 'resnet':
        out = resnet.resnet_imagenet(image, class_dim=CLASS_DIM)
    elif args.model == 'googlenet':
        out, _, _ = googlenet.googlenet(image, class_dim=CLASS_DIM)

    # load parameters
    with gzip.open(args.params_path, 'r') as f:
        parameters = paddle.parameters.Parameters.from_tar(f)

    file_list = [line.strip() for line in open(args.data_list)]
    test_data = [(paddle.image.load_and_transform(image_file, 256, 224, False)
                  .flatten().astype('float32'), ) for image_file in file_list]
    probs = paddle.infer(
        output_layer=out, parameters=parameters, input=test_data)
    lab = np.argsort(-probs)
    for file_name, result in zip(file_list, lab):
        print "Label of %s is: %d" % (file_name, result[0])


if __name__ == '__main__':
    main()
```
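Assuming the script above is saved as `infer.py` (the filename is not fixed by the code itself), it can be run with the three positional arguments it defines, for example:
```
python infer.py val.list resnet params_pass_10.tar.gz
```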
Changes to `reader.py`:
```diff
 import random
 from paddle.v2.image import load_and_transform
+import paddle.v2 as paddle
+from multiprocessing import cpu_count


-def train_reader(train_list):
+def train_mapper(sample):
+    '''
+    map image path to type needed by model input layer for the training set
+    '''
+    img, label = sample
+    img = paddle.image.load_image(img)
+    img = paddle.image.simple_transform(img, 256, 224, True)
+    return img.flatten().astype('float32'), label
+
+
+def test_mapper(sample):
+    '''
+    map image path to type needed by model input layer for the test set
+    '''
+    img, label = sample
+    img = paddle.image.load_image(img)
+    img = paddle.image.simple_transform(img, 256, 224, True)
+    return img.flatten().astype('float32'), label
+
+
+def train_reader(train_list, buffered_size=1024):
     def reader():
         with open(train_list, 'r') as f:
             lines = [line.strip() for line in f]
-            random.shuffle(lines)
             for line in lines:
                 img_path, lab = line.strip().split('\t')
-                im = load_and_transform(img_path, 256, 224, True)
-                yield im.flatten().astype('float32'), int(lab)
-    return reader
+                yield img_path, int(lab)
+
+    return paddle.reader.xmap_readers(train_mapper, reader,
+                                      cpu_count(), buffered_size)


-def test_reader(test_list):
+def test_reader(test_list, buffered_size=1024):
     def reader():
         with open(test_list, 'r') as f:
             lines = [line.strip() for line in f]
             for line in lines:
                 img_path, lab = line.strip().split('\t')
-                im = load_and_transform(img_path, 256, 224, False)
-                yield im.flatten().astype('float32'), int(lab)
-    return reader
+                yield img_path, int(lab)
+
+    return paddle.reader.xmap_readers(test_mapper, reader,
+                                      cpu_count(), buffered_size)


 if __name__ == '__main__':
     ......
```
`resnet.py`:
```python
import paddle.v2 as paddle

__all__ = ['resnet_imagenet', 'resnet_cifar10']


def conv_bn_layer(input,
                  ch_out,
                  filter_size,
                  stride,
                  padding,
                  active_type=paddle.activation.Relu(),
                  ch_in=None):
    tmp = paddle.layer.img_conv(
        input=input,
        filter_size=filter_size,
        num_channels=ch_in,
        num_filters=ch_out,
        stride=stride,
        padding=padding,
        act=paddle.activation.Linear(),
        bias_attr=False)
    return paddle.layer.batch_norm(input=tmp, act=active_type)


def shortcut(input, ch_in, ch_out, stride):
    if ch_in != ch_out:
        return conv_bn_layer(input, ch_out, 1, stride, 0,
                             paddle.activation.Linear())
    else:
        return input


def basicblock(input, ch_in, ch_out, stride):
    short = shortcut(input, ch_in, ch_out, stride)
    conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)
    conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())
    return paddle.layer.addto(
        input=[short, conv2], act=paddle.activation.Relu())


def bottleneck(input, ch_in, ch_out, stride):
    short = shortcut(input, ch_in, ch_out * 4, stride)
    conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)
    conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)
    conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,
                          paddle.activation.Linear())
    return paddle.layer.addto(
        input=[short, conv3], act=paddle.activation.Relu())


def layer_warp(block_func, input, ch_in, ch_out, count, stride):
    conv = block_func(input, ch_in, ch_out, stride)
    for i in range(1, count):
        conv = block_func(conv, ch_out, ch_out, 1)
    return conv


def resnet_imagenet(input, class_dim, depth=50):
    # depth -> (number of blocks in each of the four stages, block type)
    cfg = {
        18: ([2, 2, 2, 1], basicblock),
        34: ([3, 4, 6, 3], basicblock),
        50: ([3, 4, 6, 3], bottleneck),
        101: ([3, 4, 23, 3], bottleneck),
        152: ([3, 8, 36, 3], bottleneck)
    }
    stages, block_func = cfg[depth]
    conv1 = conv_bn_layer(
        input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)
    pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)
    res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)
    res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)
    res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)
    res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)
    pool2 = paddle.layer.img_pool(
        input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())
    out = paddle.layer.fc(
        input=pool2, size=class_dim, act=paddle.activation.Softmax())
    return out


def resnet_cifar10(input, class_dim, depth=32):
    # depth should be one of 20, 32, 44, 56, 110, 1202
    assert (depth - 2) % 6 == 0
    n = (depth - 2) / 6
    nStages = {16, 64, 128}
    conv1 = conv_bn_layer(
        input, ch_in=3, ch_out=16, filter_size=3, stride=1, padding=1)
    res1 = layer_warp(basicblock, conv1, 16, 16, n, 1)
    res2 = layer_warp(basicblock, res1, 16, 32, n, 2)
    res3 = layer_warp(basicblock, res2, 32, 64, n, 2)
    pool = paddle.layer.img_pool(
        input=res3, pool_size=8, stride=1, pool_type=paddle.pooling.Avg())
    out = paddle.layer.fc(
        input=pool, size=class_dim, act=paddle.activation.Softmax())
    return out
```
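`resnet_cifar10` is not used in the tutorial above. A minimal sketch of wiring it up, assuming CIFAR-10-sized input (3 x 32 x 32 images, 10 classes):
```python
import paddle.v2 as paddle
import resnet

paddle.init(use_gpu=False, trainer_count=1)

# CIFAR-10 images are 3 x 32 x 32 and there are 10 classes;
# depth must satisfy depth = 6 * n + 2 (e.g. 20, 32, 44, 56).
image = paddle.layer.data(
    name="image", type=paddle.data_type.dense_vector(3 * 32 * 32))
out = resnet.resnet_cifar10(image, class_dim=10, depth=32)
```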
Changes to the training script:
```diff
 import gzip
+import paddle.v2.dataset.flowers as flowers
 import paddle.v2 as paddle
 import reader
 import vgg
+import resnet
+import alexnet
+import googlenet
+import argparse

 DATA_DIM = 3 * 224 * 224
-CLASS_DIM = 1000
+CLASS_DIM = 102
 BATCH_SIZE = 128


 def main():
+    # parse the argument
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        'model',
+        help='The model for image classification',
+        choices=['alexnet', 'vgg13', 'vgg16', 'vgg19', 'resnet', 'googlenet'])
+    args = parser.parse_args()
+
     # PaddlePaddle init
-    paddle.init(use_gpu=True, trainer_count=4)
+    paddle.init(use_gpu=True, trainer_count=1)
     image = paddle.layer.data(
         name="image", type=paddle.data_type.dense_vector(DATA_DIM))
     lbl = paddle.layer.data(
         name="label", type=paddle.data_type.integer_value(CLASS_DIM))
-    net = vgg.vgg13(image)
-    out = paddle.layer.fc(
-        input=net, size=CLASS_DIM, act=paddle.activation.Softmax())
+
+    extra_layers = None
+    learning_rate = 0.01
+    if args.model == 'alexnet':
+        out = alexnet.alexnet(image, class_dim=CLASS_DIM)
+    elif args.model == 'vgg13':
+        out = vgg.vgg13(image, class_dim=CLASS_DIM)
+    elif args.model == 'vgg16':
+        out = vgg.vgg16(image, class_dim=CLASS_DIM)
+    elif args.model == 'vgg19':
+        out = vgg.vgg19(image, class_dim=CLASS_DIM)
+    elif args.model == 'resnet':
+        out = resnet.resnet_imagenet(image, class_dim=CLASS_DIM)
+        learning_rate = 0.1
+    elif args.model == 'googlenet':
+        out, out1, out2 = googlenet.googlenet(image, class_dim=CLASS_DIM)
+        loss1 = paddle.layer.cross_entropy_cost(
+            input=out1, label=lbl, coeff=0.3)
+        paddle.evaluator.classification_error(input=out1, label=lbl)
+        loss2 = paddle.layer.cross_entropy_cost(
+            input=out2, label=lbl, coeff=0.3)
+        paddle.evaluator.classification_error(input=out2, label=lbl)
+        extra_layers = [loss1, loss2]
+
     cost = paddle.layer.classification_cost(input=out, label=lbl)

     # Create parameters
@@ -31,16 +63,23 @@ def main():
         momentum=0.9,
         regularization=paddle.optimizer.L2Regularization(rate=0.0005 *
                                                          BATCH_SIZE),
-        learning_rate=0.01 / BATCH_SIZE,
+        learning_rate=learning_rate / BATCH_SIZE,
         learning_rate_decay_a=0.1,
         learning_rate_decay_b=128000 * 35,
         learning_rate_schedule="discexp", )

     train_reader = paddle.batch(
-        paddle.reader.shuffle(reader.train_reader("train.list"), buf_size=1000),
+        paddle.reader.shuffle(
+            flowers.train(),
+            # To use other data, replace the above line with:
+            # reader.train_reader('train.list'),
+            buf_size=1000),
         batch_size=BATCH_SIZE)
     test_reader = paddle.batch(
-        reader.test_reader("test.list"), batch_size=BATCH_SIZE)
+        flowers.valid(),
+        # To use other data, replace the above line with:
+        # reader.test_reader('val.list'),
+        batch_size=BATCH_SIZE)

     # End batch and end pass event handler
     def event_handler(event):
@@ -57,11 +96,14 @@ def main():

     # Create trainer
     trainer = paddle.trainer.SGD(
-        cost=cost, parameters=parameters, update_equation=optimizer)
+        cost=cost,
+        parameters=parameters,
+        update_equation=optimizer,
+        extra_layers=extra_layers)

     trainer.train(
         reader=train_reader, num_passes=200, event_handler=event_handler)


 if __name__ == '__main__':
     main()
\ No newline at end of file
```
Changes to `vgg.py`:
```diff
@@ -3,7 +3,7 @@ import paddle.v2 as paddle
 __all__ = ['vgg13', 'vgg16', 'vgg19']


-def vgg(input, nums):
+def vgg(input, nums, class_dim):
     def conv_block(input, num_filter, groups, num_channels=None):
         return paddle.networks.img_conv_group(
             input=input,
@@ -34,19 +34,21 @@ def vgg(input, nums):
         size=fc_dim,
         act=paddle.activation.Relu(),
         layer_attr=paddle.attr.Extra(drop_rate=0.5))
-    return fc2
+    out = paddle.layer.fc(
+        input=fc2, size=class_dim, act=paddle.activation.Softmax())
+    return out


-def vgg13(input):
+def vgg13(input, class_dim):
     nums = [2, 2, 2, 2, 2]
-    return vgg(input, nums)
+    return vgg(input, nums, class_dim)


-def vgg16(input):
+def vgg16(input, class_dim):
     nums = [2, 2, 3, 3, 3]
-    return vgg(input, nums)
+    return vgg(input, nums, class_dim)


-def vgg19(input):
+def vgg19(input, class_dim):
     nums = [2, 2, 4, 4, 4]
-    return vgg(input, nums)
+    return vgg(input, nums, class_dim)
```