Commit 45b6ed84 authored by: D dingjiaweiww

add save_model.ipynb

Parent 9fb40dac
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 模型保存及加载\n",
"在训练完神经网络模型的时候,我们往往需要保存模型参数,以便做预测的时候能够省略训练步骤,直接加载模型参数。除此之外,在日常训练工作中我们会遇到一些突发情况,导致训练过程主动或被动的中断;抑或由于模型过于庞大,训练一个模型需要花费几天的训练时间;面对以上情况,Paddle中提供了很好地保存模型和提取模型的方法,支持从上一次保存状态开始训练,只要我们随时保存训练过程中的模型状态,就不用从初始状态重新训练。\n",
"\n",
"下面将基于VGG模型讲解paddle如果保存及加载模型,并恢复训练,网络结构部分省略。"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 环境设置\n",
"本示例基于飞桨开源框架2.0版本,模型搭建过程2.0高层API组建。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"'2.0.0-alpha0'"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import numpy as np\n",
"import paddle\n",
"import paddle.fluid as fluid\n",
"\n",
"from paddle.incubate.hapi.model import Model, Input, set_device\n",
"from paddle.incubate.hapi.loss import CrossEntropy\n",
"from paddle.incubate.hapi.metrics import Accuracy\n",
"import math\n",
"from paddle.incubate.hapi.datasets.flowers import Flowers as FlowersDataset\n",
"from paddle.incubate.hapi.vision.transforms import transforms\n",
"from paddle.incubate.hapi.vision.transforms import Resize\n",
"\n",
"paddle.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 数据集\n",
"本示例采用飞桨2.0 内置Flowers 数据集,其中每张图片都是一个RGB格式图片,图片的长宽不统一,使用之前需要将图片进行resize,将图片形状统一为(3,224,224),根据Flowers数据集的mode参数,我们把训练数据和测试数据分别保存在train_dataset和val_dataset中"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(3, 224, 224)\n",
"(1,)\n"
]
}
],
"source": [
"transform = transforms.Compose([\n",
" transforms.Resize(size=(224, 224)),\n",
" transforms.Permute(mode='CHW'),\n",
" transforms.Normalize(mean=[0.5, 0.5, 0.5],\n",
" std=[0.5, 0.5, 0.5]),\n",
"])\n",
"train_dataset = FlowersDataset(mode=\"train\", transform=transform)\n",
"val_dataset = FlowersDataset(mode=\"test\", transform=transform)\n",
"\n",
"print(train_dataset[0][0].shape)\n",
"print(train_dataset[0][1].shape)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# 搭建网络模型\n",
"class VGGNet(Model):\n",
" def __init__(self):\n",
" super(VGGNet, self).__init__()\n",
" self.vgg_spec = {\n",
" 11: ([1, 1, 2, 2, 2]),\n",
" 13: ([2, 2, 2, 2, 2]),\n",
" 16: ([2, 2, 3, 3, 3]),\n",
" 19: ([2, 2, 4, 4, 4])\n",
" }\n",
" layers = 16\n",
" class_dim = 103\n",
" nums = self.vgg_spec[layers]\n",
"\n",
" self.block1 = ConvBlock(self.full_name(), num_channels=3, num_filters=64, groups=nums[0])\n",
" self.block2 = ConvBlock(self.full_name(), num_channels=64, num_filters=128, groups=nums[1])\n",
" self.block3 = ConvBlock(self.full_name(), num_channels=128, num_filters=256, groups=nums[2])\n",
" self.block4 = ConvBlock(self.full_name(), num_channels=256, num_filters=512, groups=nums[3])\n",
" self.block5 = ConvBlock(self.full_name(), num_channels=512, num_filters=512, groups=nums[4])\n",
"\n",
" fc_dim = 4096\n",
" self._fc1 = paddle.nn.Linear(input_dim=25088, output_dim=fc_dim, act='relu')\n",
" self._fc2 = paddle.nn.Linear(input_dim=fc_dim, output_dim=fc_dim, act='relu')\n",
" self.out = paddle.nn.Linear(input_dim=fc_dim, output_dim=class_dim, act='softmax')\n",
" \n",
" # 定义网络的前向计算过程\n",
" def forward(self, inputs, label=None):\n",
" # print('input shape:', inputs.shape)\n",
" out = self.block1(inputs)\n",
" # print('out1 shape:', out.shape)\n",
" out = self.block2(out)\n",
" out = self.block3(out)\n",
" out = self.block4(out)\n",
" out = self.block5(out)\n",
"\n",
" out = paddle.reshape(out, [-1, 25088])\n",
"\n",
" out = self._fc1(out)\n",
" out = paddle.nn.functional.dropout(out, dropout_prob=0.5)\n",
"\n",
" out = self._fc2(out)\n",
" out = paddle.nn.functional.dropout(out, dropout_prob=0.5)\n",
"\n",
" out = self.out(out)\n",
"\n",
" if label is not None:\n",
" acc = fluid.layers.accuracy(input=out, label=label)\n",
" return out, acc\n",
" else:\n",
" return out"
]
},
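{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"As noted in the introduction, the network-structure code is omitted, so the `VGGNet` cell above refers to a `ConvBlock` layer that is not defined in this notebook. The next cell is a minimal, illustrative sketch of such a block, not the original implementation: it stacks `groups` 3x3 convolutions with ReLU and ends with a 2x2 max pooling, using the `fluid.dygraph` `Conv2D` and `Pool2D` layers; the `name_scope` argument is accepted only to match the call sites above. With five such blocks an input of shape (3, 224, 224) is reduced to (512, 7, 7), which matches the reshape to 25088 in `forward`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# A minimal, illustrative ConvBlock sketch (the original implementation is omitted).\n",
"# It stacks `groups` 3x3 conv+relu layers and halves the resolution with max pooling.\n",
"class ConvBlock(fluid.dygraph.Layer):\n",
"    def __init__(self, name_scope, num_channels, num_filters, groups):\n",
"        # name_scope is accepted only to match the call sites in VGGNet above\n",
"        super(ConvBlock, self).__init__()\n",
"        self._convs = []\n",
"        for i in range(groups):\n",
"            conv = self.add_sublayer(\n",
"                'conv_%d' % i,\n",
"                fluid.dygraph.Conv2D(\n",
"                    num_channels=num_channels if i == 0 else num_filters,\n",
"                    num_filters=num_filters,\n",
"                    filter_size=3,\n",
"                    padding=1,\n",
"                    act='relu'))\n",
"            self._convs.append(conv)\n",
"        # 2x2 max pooling halves the spatial resolution after each block\n",
"        self._pool = fluid.dygraph.Pool2D(pool_size=2, pool_type='max', pool_stride=2)\n",
"\n",
"    def forward(self, inputs):\n",
"        x = inputs\n",
"        for conv in self._convs:\n",
"            x = conv(x)\n",
"        return self._pool(x)"
]
},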
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"device = set_device('gpu')\n",
"\n",
"# 切换成动态图模式,默认使用静态图模式\n",
"fluid.enable_dygraph(device)\n",
"\n",
"lr = 0.000125\n",
"total_images = 6149\n",
"batch_size = 4\n",
"momentum_rate = 0.9\n",
"l2_decay = 1.2e-4\n",
"step = int(math.ceil(float(total_images) / batch_size))\n",
"num_epochs = 15\n",
"\n",
"# 实例化网络\n",
"model = VGGNet()\n",
"\n",
"# 定义优化器\n",
"optimizer = fluid.optimizer.Momentum(\n",
" learning_rate=fluid.layers.cosine_decay(\n",
" learning_rate=lr, step_each_epoch=step, epochs=num_epochs),\n",
" momentum=momentum_rate,\n",
" regularization=fluid.regularizer.L2Decay(l2_decay),\n",
" parameter_list=model.parameters())\n",
"\n",
"#定义损失函数\n",
"loss = CrossEntropy()\n",
"\n",
"inputs = [Input([None, 3, 224, 224], 'float32', name='image')]\n",
"labels = [Input([None, 1.], 'int64', name='label')]\n",
"model.prepare(optimizer=optimizer, loss_function=loss, inputs=inputs, labels=labels, device=device)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 保存模型参数\n",
"在paddle2.0中,有两种保存模型参数的方法: 第一种为使用model.fit函数进行网络循环训练,只需要设置训练的数据读取器、batchsize大小,迭代的轮数epoch、训练日志打印频率log_freq,并在save_dir参数中制定保存模型的路径,即可同时实现模型的训练和保存。另外一种保存模型参数的方法是model.save(path), path的格式为'dirname/file_prefix' 或 'file_prefix',其中dirname指定路径名称,file_prefix 指定参数文件的名称。\n",
"\n",
"这两种保存模型参数的区别是:\n",
"\n",
"1.通过fit,只能保存模型参数,不能保存优化器参数,只会生成一个.pdparams文件,并且可以边训练边保存,每次epoch会保存一次数据\n",
"\n",
"2.通过save_dir(path),可以保存模型参数及优化器参数,会生成两个文件 0.pdparams,0.pdopt,分别存储了模型参数和优化器参数,但是只会在整个模型训练完成后才会生成参数文件"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# 启动训练\n",
"# model.fit(train_data=train_dataset, epochs=15, batch_size=4, save_dir=\"./output/\", log_freq=1)\n",
"model.fit(train_data=train_dataset, epochs=15, batch_size=64, log_freq=1)\n",
"\n",
"model.save_dir('mnist_checkpoint/test')"
]
},
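{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"As a quick sanity check (added here for illustration, not part of the original workflow), the next cell lists the checkpoint directory: after `model.save('checkpoint/test')` it should contain `test.pdparams` (the model parameters) and `test.pdopt` (the optimizer state)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# List the files written by model.save('checkpoint/test') above.\n",
"# Expect 'test.pdparams' (model parameters) and 'test.pdopt' (optimizer state).\n",
"print(sorted(os.listdir('checkpoint')))"
]
},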
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 加载模型参数\n",
"\n",
"当恢复训练状态时,需要加载模型数据,此时我们可以使用model.load()函数从存储模型状态和优化器状态的文件中载入模型参数和优化器参数,如果不需要恢复优化器,则不必使用优化器状态文件。\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model.load('checkpoint/test')"
]
},
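{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"For reference, the two checkpoint files can also be read directly. The sketch below is an added illustration and assumes `fluid.load_dygraph` is available in this build: it reads `checkpoint/test.pdparams` and, if present, `checkpoint/test.pdopt`, returning the model parameters and the optimizer state as two dicts; when the `.pdopt` file is absent (e.g. for inference only), the optimizer dict is simply `None`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: inspect the checkpoint files directly (assumes fluid.load_dygraph exists\n",
"# in this build). The prefix resolves to 'test.pdparams' and 'test.pdopt'.\n",
"param_state, opt_state = fluid.load_dygraph('checkpoint/test')\n",
"\n",
"print(len(param_state), 'parameter tensors loaded')\n",
"# opt_state is None when only the .pdparams file exists, i.e. when the optimizer\n",
"# state is not needed (for example, for inference).\n",
"print('optimizer state restored:', opt_state is not None)"
]
},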
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 如何判断模型是否准确的恢复训练呢?\n",
"\n",
"理想的恢复训练是模型状态回到训练中断的时刻,恢复训练之后的梯度更新走向是和恢复训练前的梯度走向完全相同的。基于此,我们可以通过恢复训练后的损失变化,判断上述方法是否能准确的恢复训练。即从epoch 0结束时保存的模型参数和优化器状态恢复训练,校验其后训练的损失变化(epoch 1)是否和不中断时的训练完全一致。\n",
"\n",
"说明:\n",
"\n",
"恢复训练有如下两个要点:\n",
"\n",
"* 保存模型时同时保存模型参数和优化器参数\n",
"\n",
"* 恢复参数时同时恢复模型参数和优化器参数。"
]
},
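{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Before running the full resumed training below, the next cell is a simpler, parameter-level sanity check (an illustrative addition, not part of the original notebook): save the current model, load it into a freshly constructed `VGGNet`, and confirm numerically that a parameter tensor is restored exactly. The loss-curve comparison described above remains the stronger end-to-end check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Parameter-level sanity check (illustrative sketch): a freshly constructed model\n",
"# loaded from the checkpoint should hold exactly the same parameter values.\n",
"reference = model.parameters()[0].numpy().copy()  # value at save time\n",
"model.save('checkpoint/verify')                   # writes .pdparams and .pdopt\n",
"\n",
"restored = VGGNet()\n",
"restored.load('checkpoint/verify')\n",
"print(np.allclose(reference, restored.parameters()[0].numpy()))  # expect True"
]
},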
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"device = set_device('gpu')\n",
"\n",
"# 切换成动态图模式,默认使用静态图模式\n",
"fluid.enable_dygraph(device)\n",
"\n",
"params_path = \"checkpoint/test\" \n",
"\n",
"model = VGGNet()\n",
"model.load(params_path)\n",
"\n",
"model.fit(train_data=train_dataset, epochs=5, batch_size=4, log_freq=10)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "PaddlePaddle 1.5.1 (Python 3.5)",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}