Unverified commit 2870469b authored by Chen Long, committed by GitHub

fix tutorial test=develop (#2601)

* fix tutorial test=develop

* fix pic test=develop

* update tutorial test=develop
Parent cf666098
......@@ -21,8 +21,8 @@
This document introduces the dynamic-graph saving and loading system of Paddle 2.0; the relationships among the APIs are shown in the figures below:
.. image:: images/save_2.0.png
.. image:: images/load_2.0.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/guides/images/save_2.0.png?raw=true
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/guides/images/load_2.0.png?raw=true
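
As a concrete illustration of the dynamic-graph workflow shown in the figures, here is a minimal sketch (not part of the original document); it assumes the Paddle 2.0 ``paddle.save`` / ``paddle.load`` / ``set_state_dict`` APIs and uses a stand-in Linear layer:

.. code:: ipython3

    import paddle

    paddle.disable_static()
    layer = paddle.nn.Linear(10, 10)

    # save: persist the Layer's parameters as a state_dict on disk
    paddle.save(layer.state_dict(), "linear_net.pdparams")

    # load: read the state_dict back and restore it into the Layer
    state_dict = paddle.load("linear_net.pdparams")
    layer.set_state_dict(state_dict)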
1.2 Saving and loading with static graphs (Paddle 1.x)
-------------------------------------------------------
......@@ -739,4 +739,4 @@ The more precise semantics of a Layer is a model object with prediction capability; it receives
fluid.io.save_params(exe, model_path)
# load
state_dict = paddle.io.load_program_state(model_path)
\ No newline at end of file
state_dict = paddle.io.load_program_state(model_path)
......@@ -4,7 +4,7 @@
This tutorial demonstrates how to complete an image classification task with a convolutional neural network in Paddle. It is a relatively simple example: a network composed of three convolutional layers performs image classification on the `cifar10 <https://www.cs.toronto.edu/~kriz/cifar.html>`__ dataset.
Set up the environment
----------
----------------------
We will use Paddle 2.0-beta.
......@@ -18,18 +18,15 @@
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
264e76cae6861ad9b1d4bcd8c3212f7a78c01e4d
2.0.0-beta0
Load and explore the dataset
-------------------
----------------------------
We will use the APIs provided by Paddle to download the dataset and prepare the data iterators for the subsequent training task. The cifar10 dataset consists of 60000 images of size 32
\*
......@@ -49,7 +46,7 @@
train_labels[i, 0] = train_label
Explore the dataset
-------------
-------------------
Next we randomly pick some images from the dataset and display them, to get an intuitive feel for the data.
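
The plotting cell itself is elided from this diff; the sketch below shows one way to do it, assuming ``train_images`` / ``train_labels`` arrays populated by the loading code above (the names are assumptions):

.. code:: ipython3

    import numpy as np
    import matplotlib.pyplot as plt

    sample_num = 4
    plt.figure(figsize=(8, 2))
    for i, idx in enumerate(np.random.randint(0, len(train_images), sample_num)):
        plt.subplot(1, sample_num, i + 1)
        # each cifar10 sample is a 32 x 32 RGB image
        plt.imshow(train_images[idx].reshape((32, 32, 3)))
        plt.title('label: {}'.format(train_labels[idx, 0]))
        plt.axis('off')
    plt.show()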
......@@ -70,11 +67,11 @@
.. image:: convnet_image_classification_files/convnet_image_classification_6_0.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/tutorial/cv_case/convnet_image_classification/convnet_image_classification_files/convnet_image_classification_001.png?raw=true
Build the network
----------
-----------------
Next we use Paddle to define a classification network composed of three 2D convolutions (``Conv2d``), each followed by a ``relu`` activation, two 2D pooling layers (``MaxPool2d``), and two linear layers; it maps an image of shape ``(32, 32, 3)`` through the convolutional network to 10 outputs, corresponding to the 10 classes.
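
The class definition sits in an elided part of this diff; the following is a minimal sketch of the network this paragraph describes (channel counts and the hidden size are illustrative assumptions, not necessarily the tutorial's exact values):

.. code:: ipython3

    import paddle
    import paddle.nn.functional as F

    class ConvNet(paddle.nn.Layer):
        def __init__(self, num_classes=10):
            super(ConvNet, self).__init__()
            self.conv1 = paddle.nn.Conv2d(3, 32, kernel_size=3)
            self.pool1 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)
            self.conv2 = paddle.nn.Conv2d(32, 64, kernel_size=3)
            self.pool2 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)
            self.conv3 = paddle.nn.Conv2d(64, 64, kernel_size=3)
            self.linear1 = paddle.nn.Linear(64 * 4 * 4, 64)
            self.linear2 = paddle.nn.Linear(64, num_classes)

        def forward(self, x):                       # x: (N, 3, 32, 32)
            x = self.pool1(F.relu(self.conv1(x)))   # (N, 32, 15, 15)
            x = self.pool2(F.relu(self.conv2(x)))   # (N, 64, 6, 6)
            x = F.relu(self.conv3(x))               # (N, 64, 4, 4)
            x = paddle.flatten(x, start_axis=1)     # (N, 1024)
            x = F.relu(self.linear1(x))
            return self.linear2(x)                  # logits for the 10 classes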
......@@ -165,8 +162,8 @@
if batch_id % 1000 == 0:
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
avg_loss.backward()
opt.minimize(avg_loss)
model.clear_gradients()
opt.step()
opt.clear_grad()
# evaluate model after one epoch
model.eval()
......@@ -198,36 +195,36 @@
.. parsed-literal::
start training ...
epoch: 0, batch_id: 0, loss is: [2.3024805]
epoch: 0, batch_id: 1000, loss is: [1.1422595]
[validation] accuracy/loss: 0.5575079917907715/1.2516425848007202
epoch: 1, batch_id: 0, loss is: [0.9350736]
epoch: 1, batch_id: 1000, loss is: [1.3825703]
[validation] accuracy/loss: 0.5959464907646179/1.1320706605911255
epoch: 2, batch_id: 0, loss is: [0.979844]
epoch: 2, batch_id: 1000, loss is: [0.87730503]
[validation] accuracy/loss: 0.6607428193092346/0.9754576086997986
epoch: 3, batch_id: 0, loss is: [0.7345351]
epoch: 3, batch_id: 1000, loss is: [1.0982555]
[validation] accuracy/loss: 0.6671326160430908/0.9667007327079773
epoch: 4, batch_id: 0, loss is: [0.9291839]
epoch: 4, batch_id: 1000, loss is: [1.1812104]
[validation] accuracy/loss: 0.6895966529846191/0.9075900316238403
epoch: 5, batch_id: 0, loss is: [0.5072213]
epoch: 5, batch_id: 1000, loss is: [0.60360587]
[validation] accuracy/loss: 0.6944888234138489/0.8740479350090027
epoch: 6, batch_id: 0, loss is: [0.5917944]
epoch: 6, batch_id: 1000, loss is: [0.7963876]
[validation] accuracy/loss: 0.7072683572769165/0.8597638607025146
epoch: 7, batch_id: 0, loss is: [0.50116754]
epoch: 7, batch_id: 1000, loss is: [0.95844793]
[validation] accuracy/loss: 0.700579047203064/0.876727819442749
epoch: 8, batch_id: 0, loss is: [0.87496114]
epoch: 8, batch_id: 1000, loss is: [0.68749857]
[validation] accuracy/loss: 0.7198482155799866/0.8403064608573914
epoch: 9, batch_id: 0, loss is: [0.8548105]
epoch: 9, batch_id: 1000, loss is: [0.6488569]
[validation] accuracy/loss: 0.7106629610061646/0.874437153339386
epoch: 0, batch_id: 0, loss is: [2.331658]
epoch: 0, batch_id: 1000, loss is: [1.6067888]
[validation] accuracy/loss: 0.5676916837692261/1.2106356620788574
epoch: 1, batch_id: 0, loss is: [1.1509854]
epoch: 1, batch_id: 1000, loss is: [1.3777964]
[validation] accuracy/loss: 0.5818690061569214/1.1748384237289429
epoch: 2, batch_id: 0, loss is: [1.051642]
epoch: 2, batch_id: 1000, loss is: [1.0261706]
[validation] accuracy/loss: 0.6607428193092346/0.9685573577880859
epoch: 3, batch_id: 0, loss is: [0.8457774]
epoch: 3, batch_id: 1000, loss is: [0.6820123]
[validation] accuracy/loss: 0.6822084784507751/0.9241172075271606
epoch: 4, batch_id: 0, loss is: [0.9059805]
epoch: 4, batch_id: 1000, loss is: [0.587117]
[validation] accuracy/loss: 0.7012779712677002/0.8670551180839539
epoch: 5, batch_id: 0, loss is: [1.0894825]
epoch: 5, batch_id: 1000, loss is: [0.9055369]
[validation] accuracy/loss: 0.6954872012138367/0.8820587992668152
epoch: 6, batch_id: 0, loss is: [0.4162583]
epoch: 6, batch_id: 1000, loss is: [0.5274862]
[validation] accuracy/loss: 0.7074680328369141/0.8538646697998047
epoch: 7, batch_id: 0, loss is: [0.52636147]
epoch: 7, batch_id: 1000, loss is: [0.70929015]
[validation] accuracy/loss: 0.7107627987861633/0.8633227944374084
epoch: 8, batch_id: 0, loss is: [0.57556355]
epoch: 8, batch_id: 1000, loss is: [0.83717]
[validation] accuracy/loss: 0.69319087266922/0.903077244758606
epoch: 9, batch_id: 0, loss is: [0.88774866]
epoch: 9, batch_id: 1000, loss is: [0.91165334]
[validation] accuracy/loss: 0.7194488644599915/0.8668457865715027
.. code:: ipython3
......@@ -244,15 +241,15 @@
.. parsed-literal::
<matplotlib.legend.Legend at 0x163d6ec50>
<matplotlib.legend.Legend at 0x167d186d0>
.. image:: convnet_image_classification_files/convnet_image_classification_12_1.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/tree/develop/doc/paddle/tutorial/cv_case/convnet_image_classification/convnet_image_classification_files/convnet_image_classification_002.png?raw=true
The End
-------
From the example above we can see that, on the cifar10 dataset, a simple convolutional neural network built with Paddle reaches an accuracy above 71%.
From the example above we can see that, on the cifar10 dataset, a simple convolutional neural network built with Paddle reaches an accuracy above 71%. You can also adjust the network structure and parameters to achieve better results.
......@@ -25,13 +25,11 @@
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
89af2088b6e74bdfeef2d4d78e08461ed2aafee5
2.0.0-beta0
The dataset
......@@ -127,12 +125,12 @@
.. image:: image_search_files/image_search_8_0.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/tutorial/cv_case/image_search/image_search_files/image_search_001.png?raw=true
Build the training data
--------------
-----------------------
The training samples of an image-retrieval model differ from those of a common classification task: each sample is not of the form ``(image, class)``, but rather (image0,
image1,
......@@ -205,12 +203,12 @@ similary_or_not), i.e., each training sample consists of two images,
.. image:: image_search_files/image_search_15_1.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/tutorial/cv_case/image_search/image_search_files/image_search_002.png?raw=true
A network that maps images to high-dimensional vectors
-----------------------------------
-------------------------------------------------------
Our goal is to first map images into representations in a high-dimensional space, and then compute the similarity of images in that high-dimensional space.
The network below converts an image of shape ``(3, 32, 32)`` into a vector of shape ``(8,)``. Some materials also call this vector an ``Embedding``; note how it differs from the word vectors used in natural language processing.
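
The definition follows in the elided code; as a quick orientation, here is a minimal sketch of such an "image to 8-dimensional embedding" network (the class name and intermediate channel sizes are illustrative assumptions):

.. code:: ipython3

    import paddle
    import paddle.nn.functional as F

    class EmbeddingNet(paddle.nn.Layer):
        def __init__(self):
            super(EmbeddingNet, self).__init__()
            self.conv1 = paddle.nn.Conv2d(3, 32, kernel_size=3)
            self.conv2 = paddle.nn.Conv2d(32, 64, kernel_size=3)
            self.pool = paddle.nn.MaxPool2d(kernel_size=2, stride=2)
            self.linear = paddle.nn.Linear(64 * 6 * 6, 8)

        def forward(self, x):                      # x: (N, 3, 32, 32)
            x = self.pool(F.relu(self.conv1(x)))   # (N, 32, 15, 15)
            x = self.pool(F.relu(self.conv2(x)))   # (N, 64, 6, 6)
            x = paddle.flatten(x, start_axis=1)
            return self.linear(x)                  # (N, 8) embedding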
......@@ -267,8 +265,6 @@ similary_or_not), i.e., each training sample consists of two images,
.. code:: ipython3
# define the training process
def train(model):
print('start training ... ')
model.train()
......@@ -302,8 +298,8 @@ similary_or_not), i.e., each training sample consists of two images,
if batch_id % 500 == 0:
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
avg_loss.backward()
opt.minimize(avg_loss)
model.clear_gradients()
opt.step()
opt.clear_grad()
model = MyNet()
train(model)
......@@ -312,46 +308,46 @@ similary_or_not), i.e., each training sample consists of two images,
.. parsed-literal::
start training ...
epoch: 0, batch_id: 0, loss is: [2.3080945]
epoch: 0, batch_id: 500, loss is: [2.326215]
epoch: 1, batch_id: 0, loss is: [2.0898924]
epoch: 1, batch_id: 500, loss is: [1.8754089]
epoch: 2, batch_id: 0, loss is: [2.2416227]
epoch: 2, batch_id: 500, loss is: [1.9024051]
epoch: 3, batch_id: 0, loss is: [1.841417]
epoch: 3, batch_id: 500, loss is: [2.1239076]
epoch: 4, batch_id: 0, loss is: [1.9291763]
epoch: 4, batch_id: 500, loss is: [2.2363486]
epoch: 5, batch_id: 0, loss is: [2.0078473]
epoch: 5, batch_id: 500, loss is: [2.0765374]
epoch: 6, batch_id: 0, loss is: [2.080376]
epoch: 6, batch_id: 500, loss is: [2.1759136]
epoch: 7, batch_id: 0, loss is: [1.908263]
epoch: 7, batch_id: 500, loss is: [1.7774136]
epoch: 8, batch_id: 0, loss is: [1.6335764]
epoch: 8, batch_id: 500, loss is: [1.5713912]
epoch: 9, batch_id: 0, loss is: [2.287479]
epoch: 9, batch_id: 500, loss is: [1.7719988]
epoch: 10, batch_id: 0, loss is: [1.2894523]
epoch: 10, batch_id: 500, loss is: [1.599735]
epoch: 11, batch_id: 0, loss is: [1.78816]
epoch: 11, batch_id: 500, loss is: [1.4773489]
epoch: 12, batch_id: 0, loss is: [1.6737808]
epoch: 12, batch_id: 500, loss is: [1.8889393]
epoch: 13, batch_id: 0, loss is: [1.6156021]
epoch: 13, batch_id: 500, loss is: [1.3851049]
epoch: 14, batch_id: 0, loss is: [1.3854092]
epoch: 14, batch_id: 500, loss is: [2.0325592]
epoch: 15, batch_id: 0, loss is: [1.9734558]
epoch: 15, batch_id: 500, loss is: [1.8050598]
epoch: 16, batch_id: 0, loss is: [1.7084911]
epoch: 16, batch_id: 500, loss is: [1.8919995]
epoch: 17, batch_id: 0, loss is: [1.3137552]
epoch: 17, batch_id: 500, loss is: [1.8817297]
epoch: 18, batch_id: 0, loss is: [1.9453808]
epoch: 18, batch_id: 500, loss is: [2.1317677]
epoch: 19, batch_id: 0, loss is: [1.6051079]
epoch: 19, batch_id: 500, loss is: [1.779858]
epoch: 0, batch_id: 0, loss is: [2.3078856]
epoch: 0, batch_id: 500, loss is: [1.9325346]
epoch: 1, batch_id: 0, loss is: [1.9889]
epoch: 1, batch_id: 500, loss is: [2.0410695]
epoch: 2, batch_id: 0, loss is: [2.2465641]
epoch: 2, batch_id: 500, loss is: [1.8171736]
epoch: 3, batch_id: 0, loss is: [1.9939486]
epoch: 3, batch_id: 500, loss is: [2.1440036]
epoch: 4, batch_id: 0, loss is: [2.1497147]
epoch: 4, batch_id: 500, loss is: [2.3686018]
epoch: 5, batch_id: 0, loss is: [1.938681]
epoch: 5, batch_id: 500, loss is: [1.7729127]
epoch: 6, batch_id: 0, loss is: [2.0061004]
epoch: 6, batch_id: 500, loss is: [1.6132584]
epoch: 7, batch_id: 0, loss is: [1.8874661]
epoch: 7, batch_id: 500, loss is: [1.6153599]
epoch: 8, batch_id: 0, loss is: [1.9407685]
epoch: 8, batch_id: 500, loss is: [2.1532288]
epoch: 9, batch_id: 0, loss is: [1.4792883]
epoch: 9, batch_id: 500, loss is: [1.857158]
epoch: 10, batch_id: 0, loss is: [2.1518302]
epoch: 10, batch_id: 500, loss is: [1.790559]
epoch: 11, batch_id: 0, loss is: [1.7292264]
epoch: 11, batch_id: 500, loss is: [1.8555079]
epoch: 12, batch_id: 0, loss is: [1.6968924]
epoch: 12, batch_id: 500, loss is: [1.4554331]
epoch: 13, batch_id: 0, loss is: [1.3950458]
epoch: 13, batch_id: 500, loss is: [1.7197256]
epoch: 14, batch_id: 0, loss is: [1.7336586]
epoch: 14, batch_id: 500, loss is: [2.0465684]
epoch: 15, batch_id: 0, loss is: [1.7675827]
epoch: 15, batch_id: 500, loss is: [2.6443417]
epoch: 16, batch_id: 0, loss is: [1.7331158]
epoch: 16, batch_id: 500, loss is: [1.6207634]
epoch: 17, batch_id: 0, loss is: [2.0908554]
epoch: 17, batch_id: 500, loss is: [1.7711265]
epoch: 18, batch_id: 0, loss is: [1.8717268]
epoch: 18, batch_id: 500, loss is: [1.5269613]
epoch: 19, batch_id: 0, loss is: [1.5681677]
epoch: 19, batch_id: 500, loss is: [1.7821472]
Model prediction
......@@ -397,7 +393,7 @@ similary_or_not), i.e., each training sample consists of two images,
.. image:: image_search_files/image_search_22_0.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/tutorial/cv_case/image_search/image_search_files/image_search_003.png?raw=true
......
Pet image segmentation based on a U-shaped semantic segmentation model
Pet image segmentation with a U-Net convolutional neural network
=================================================================
This tutorial is currently implemented against the 2.0-beta release of Paddle and will be upgraded as the 2.0 series of releases comes out.
......@@ -33,7 +33,7 @@
.. parsed-literal::
'0.0.0'
'2.0.0-beta0'
......@@ -65,6 +65,17 @@ the Pet dataset, official site: https://www.robots.ox.ac.uk/~vgg/data/pets .
!tar -xf images.tar.gz
!tar -xf annotations.tar.gz
.. parsed-literal::
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100  755M  100  755M    0     0  1707k      0  0:07:32  0:07:32 --:--:-- 2865k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 18.2M 100 18.2M 0 0 1602k 0 0:00:11 0:00:11 --:--:-- 3226k
3.2 Dataset overview
~~~~~~~~~~~~~~~~~~~~
......@@ -89,10 +100,10 @@ the Pet dataset, official site: https://www.robots.ox.ac.uk/~vgg/data/pets .
├── test.txt
├── trainval.txt
├── trimaps
   ├── Abyssinian_1.png
   ├── Abyssinian_10.png
   ├── ......
    └── yorkshire_terrier_99.png
├── Abyssinian_1.png
├── Abyssinian_10.png
├── ......
└── yorkshire_terrier_99.png
└── xmls
├── Abyssinian_1.xml
├── Abyssinian_10.xml
......@@ -315,7 +326,7 @@ DataLoader (multi-process dataset loading).
.. image:: pets_image_segmentation_U_Net_like_files/pets_image_segmentation_U_Net_like_12_0.svg
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/tutorial/cv_case/image_segmentation/pets_image_segmentation_U_Net_like_files/pets_image_segmentation_U_Net_like_001.png?raw=true
4. Building the model
......@@ -436,7 +447,7 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
kernel_size=3,
padding='same')
self.bn = paddle.nn.BatchNorm2d(out_channels)
self.upsample = paddle.nn.UpSample(scale_factor=2.0)
self.upsample = paddle.nn.Upsample(scale_factor=2.0)
self.residual_conv = paddle.nn.Conv2d(in_channels,
out_channels,
kernel_size=1,
......@@ -467,9 +478,9 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
.. code:: ipython3
class PetModel(paddle.nn.Layer):
class PetNet(paddle.nn.Layer):
def __init__(self, num_classes):
super(PetModel, self).__init__()
super(PetNet, self).__init__()
self.conv_1 = paddle.nn.Conv2d(3, 32,
kernel_size=3,
......@@ -531,7 +542,7 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
paddle.disable_static()
num_classes = 4
model = paddle.Model(PetModel(num_classes))
model = paddle.Model(PetNet(num_classes))
model.summary((3, 160, 160))
......@@ -540,32 +551,32 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
--------------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
================================================================================
Conv2d-22 [-1, 3, 160, 160] [-1, 32, 80, 80] 896
BatchNorm2d-9 [-1, 32, 80, 80] [-1, 32, 80, 80] 64
ReLU-9 [-1, 32, 80, 80] [-1, 32, 80, 80] 0
ReLU-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 0
Conv2d-33 [-1, 128, 20, 20] [-1, 128, 20, 20] 1,152
Conv2d-34 [-1, 128, 20, 20] [-1, 256, 20, 20] 33,024
SeparableConv2d-11 [-1, 128, 20, 20] [-1, 256, 20, 20] 0
BatchNorm2d-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 512
Conv2d-35 [-1, 256, 20, 20] [-1, 256, 20, 20] 2,304
Conv2d-36 [-1, 256, 20, 20] [-1, 256, 20, 20] 65,792
SeparableConv2d-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 0
MaxPool2d-6 [-1, 256, 20, 20] [-1, 256, 10, 10] 0
Conv2d-37 [-1, 128, 20, 20] [-1, 256, 10, 10] 33,024
Encoder-6 [-1, 128, 20, 20] [-1, 256, 10, 10] 0
ReLU-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 0
ConvTranspose2d-15 [-1, 64, 80, 80] [-1, 32, 80, 80] 18,464
BatchNorm2d-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 64
ConvTranspose2d-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 9,248
UpSample-8 [-1, 64, 80, 80] [-1, 64, 160, 160] 0
Conv2d-41 [-1, 64, 160, 160] [-1, 32, 160, 160] 2,080
Decoder-8 [-1, 64, 80, 80] [-1, 32, 160, 160] 0
Conv2d-42 [-1, 32, 160, 160] [-1, 4, 160, 160] 1,156
Conv2d-38 [-1, 3, 160, 160] [-1, 32, 80, 80] 896
BatchNorm2d-14 [-1, 32, 80, 80] [-1, 32, 80, 80] 128
ReLU-14 [-1, 32, 80, 80] [-1, 32, 80, 80] 0
ReLU-17 [-1, 256, 20, 20] [-1, 256, 20, 20] 0
Conv2d-49 [-1, 128, 20, 20] [-1, 128, 20, 20] 1,152
Conv2d-50 [-1, 128, 20, 20] [-1, 256, 20, 20] 33,024
SeparableConv2d-17 [-1, 128, 20, 20] [-1, 256, 20, 20] 0
BatchNorm2d-17 [-1, 256, 20, 20] [-1, 256, 20, 20] 1,024
Conv2d-51 [-1, 256, 20, 20] [-1, 256, 20, 20] 2,304
Conv2d-52 [-1, 256, 20, 20] [-1, 256, 20, 20] 65,792
SeparableConv2d-18 [-1, 256, 20, 20] [-1, 256, 20, 20] 0
MaxPool2d-9 [-1, 256, 20, 20] [-1, 256, 10, 10] 0
Conv2d-53 [-1, 128, 20, 20] [-1, 256, 10, 10] 33,024
Encoder-9 [-1, 128, 20, 20] [-1, 256, 10, 10] 0
ReLU-21 [-1, 32, 80, 80] [-1, 32, 80, 80] 0
ConvTranspose2d-17 [-1, 64, 80, 80] [-1, 32, 80, 80] 18,464
BatchNorm2d-21 [-1, 32, 80, 80] [-1, 32, 80, 80] 128
ConvTranspose2d-18 [-1, 32, 80, 80] [-1, 32, 80, 80] 9,248
Upsample-8 [-1, 64, 80, 80] [-1, 64, 160, 160] 0
Conv2d-57 [-1, 64, 160, 160] [-1, 32, 160, 160] 2,080
Decoder-9 [-1, 64, 80, 80] [-1, 32, 160, 160] 0
Conv2d-58 [-1, 32, 160, 160] [-1, 4, 160, 160] 1,156
================================================================================
Total params: 167,780
Trainable params: 167,780
Non-trainable params: 0
Total params: 168,420
Trainable params: 167,140
Non-trainable params: 1,280
--------------------------------------------------------------------------------
Input size (MB): 0.29
Forward/backward pass size (MB): 43.16
......@@ -579,7 +590,7 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
.. parsed-literal::
{'total_params': 167780, 'trainable_params': 167780}
{'total_params': 168420, 'trainable_params': 167140}
......@@ -629,15 +640,12 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
epsilon=1e-07,
centered=False,
parameters=model.parameters())
model = paddle.Model(PetModel(num_classes, model_tools))
model.prepare(optim,
SoftmaxWithCrossEntropy())
model = paddle.Model(PetModel(num_classes))
model.prepare(optim, SoftmaxWithCrossEntropy())
model.fit(train_dataset,
val_dataset,
epochs=EPOCHS,
batch_size=BATCH_SIZE
)
val_dataset,
epochs=EPOCHS,
batch_size=BATCH_SIZE)
6. Model prediction
-------------------
......@@ -660,6 +668,7 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
.. code:: ipython3
print(len(predict_results))
plt.figure(figsize=(10, 10))
i = 0
......@@ -678,8 +687,9 @@ the Layer class; the whole process takes the ``filter_size * filter_size * num_filters`` C
plt.title('Label')
plt.axis("off")
data = val_preds[0][mask_idx][0].transpose((1, 2, 0))
# the model has only one output, so predict_results[0] holds the 1000 prediction results
# map the index of the original image to its prediction, and extract the mask for display
data = predict_results[0][mask_idx][0].transpose((1, 2, 0))
mask = np.argmax(data, axis=-1)
mask = np.expand_dims(mask, axis=-1)
......
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MNIST数据集使用LeNet进行图像分类\n",
"本示例教程演示如何在MNIST数据集上用LeNet进行图像分类。\n",
"手写数字的MNIST数据集,包含60,000个用于训练的示例和10,000个用于测试的示例。这些数字已经过尺寸标准化并位于图像中心,图像是固定大小(28x28像素),其值为0到1。该数据集的官方地址为:http://yann.lecun.com/exdb/mnist/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 环境\n",
"本教程基于paddle-2.0-beta编写,如果您的环境不是本版本,请先安装paddle-2.0-beta版本。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2.0.0-beta0\n"
]
}
],
"source": [
"import paddle\n",
"print(paddle.__version__)\n",
"paddle.disable_static()\n",
"# 开启动态图"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 加载数据集\n",
"我们使用飞桨自带的paddle.dataset完成mnist数据集的加载。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"download training data and load training data\n",
"load finished\n"
]
}
],
"source": [
"print('download training data and load training data')\n",
"train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
"test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
"print('load finished')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"取训练集中的一条数据看一下。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"train_data0 label is: [5]\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAI4AAACOCAYAAADn/TAIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAIY0lEQVR4nO3dXWhUZxoH8P/jaPxav7KREtNgiooQFvwg1l1cNOr6sQUN3ixR0VUK9cKPXTBYs17ohReLwl5ovCmuZMU1y+IaWpdC0GIuxCJJMLhJa6oWtSl+FVEXvdDK24s5nc5zapKTZ86cOTPz/4Hk/M8xc17w8Z13zpl5RpxzIBquEbkeAOUnFg6ZsHDIhIVDJiwcMmHhkElGhSMiq0WkT0RuisjesAZF8SfW6zgikgDwFYAVAPoBdABY75z7IrzhUVyNzOB33wVw0zn3NQCIyL8A1AEYsHDKyspcVVVVBqekqHV1dX3nnJvq359J4VQA+CYt9wNYONgvVFVVobOzM4NTUtRE5M6b9md9cSwiH4hIp4h0Pnr0KNuno4hkUjjfAqhMy297+xTn3EfOuRrnXM3UqT+b8ShPZVI4HQBmicg7IlICoB7AJ+EMi+LOvMZxzn0vIjsAtAFIADjhnOsNbWQUa5ksjuGc+xTApyGNhfIIrxyTCQuHTFg4ZMLCIRMWDpmwcMiEhUMmLBwyYeGQCQuHTFg4ZMLCIZOMbnIWk9evX6v89OnTwL/b1NSk8osXL1Tu6+tT+dixYyo3NDSo3NLSovKYMWNU3rv3p88N7N+/P/A4h4MzDpmwcMiEhUMmRbPGuXv3rsovX75U+fLlyypfunRJ5SdPnqh85syZ0MZWWVmp8s6dO1VubW1VecKECSrPmTNH5SVLloQ2toFwxiETFg6ZsHDIpGDXOFevXlV52bJlKg/nOkzYEomEygcPHlR5/PjxKm/cuFHladOmqTxlyhSVZ8+enekQh8QZh0xYOGTCwiGTgl3jTJ8+XeWysjKVw1zjLFyom3T41xwXL15UuaSkROVNmzaFNpaocMYhExYOmbBwyKRg1zilpaUqHz58WOVz586pPG/ePJV37do16OPPnTs3tX3hwgV1zH8dpqenR+UjR44M+tj5gDMOmQxZOCJyQkQeikhP2r5SETkvIje8n1MGewwqPEFmnGYAq3379gL4zDk3C8BnXqYiEqjPsYhUAfivc+5XXu4DUOucuyci5QDanXND3iCpqalxcek6+uzZM5X973HZtm2bysePH1f51KlTqe0NGzaEPLr4EJEu51yNf791jfOWc+6et30fwFvmkVFeynhx7JJT1oDTFtvVFiZr4TzwnqLg/Xw40F9ku9rCZL2O8wmAPwL4q/fz49BGFJGJEycOenzSpEmDHk9f89TX16tjI0YU/lWOIC/HWwB8DmC2iPSLyPtIFswKEbkB4HdepiIy5IzjnFs/wKHlIY+F8kjhz6mUFQV7rypTBw4cULmrq0vl9vb21Lb/XtXKlSuzNazY4IxDJiwcMmHhkIn5Ozkt4nSvarhu3bql8vz581PbkydPVseWLl2qck2NvtWzfft2lUUkjCFmRdj3qqjIsXDIhC/HA5oxY4bKzc3Nqe2tW7eqYydPnhw0P3/+XOXNmzerXF5ebh1mZDjjkAkLh0xYOGTCNY7RunXrUtszZ85Ux3bv3q2y/5ZEY2Ojynfu6O+E37dvn8oVFRXmcWYLZxwyYeGQCQuHTHjLIQv8rW39HzfesmWLyv5/g+XL9Xvkzp8/H97ghom3HChULBwyYeGQCdc4OTB69GiVX716pfKoUaNUbmtrU7m2tjYr43oTrnEoVCwcMmHhkAnvVYXg2rVrKvu/kqijo0Nl/5rGr7q6WuXFixdnMLrs4IxDJiwcMmHhkAnXOAH5v+L56NGjqe2zZ8+qY/fv3x/WY48cqf8Z/O85jmPblPiNiPJCkP44lSJyUUS+EJFeEfmTt58ta4tYkBnnewC7nXPVAH4NYLuIVIMta4takMZK9wDc87b/LyJfAqgAUAeg1vtr/wDQDuDDrIwyAv51yenTp1VuampS+fbt2+ZzLViwQGX/e4zXrl1rfuyoDGuN4/U7ngfgCtiytqgFLhwR+QWA/wD4s3NOdZcerGUt29UWpkCFIyKjkCyafzrnfnztGahlLdvVFqYh1ziS7MHxdwBfOuf+lnYor1rWPnjwQOXe3l6Vd+zYofL169fN5/J/1eKePXtUrqurUzmO12mGEuQC4CIAmwD8T0S6vX1/QbJg/u21r70D4A/ZGSLFUZBXVZcADNT5hy1ri1T+zZEUCwVzr+rx48cq+782qLu7W2V/a7bhWrRoUWrb/1nxVatWqTx27NiMzhVHnHHIhIVDJiwcMsmrNc6VK1dS24cOHVLH/O/r7e/vz+hc48aNU9n/ddLp95f8XxddDDjjkAkLh0zy6qmqtbX1jdtB+D9ysmbNGpUTiYTKDQ0NKvu7pxc7zjhkwsIhExYOmbDNCQ2KbU4oVCwcMmHhkAkLh0xYOGTCwiETFg6ZsHDIhIVDJiwcMmHhkEmk96pE5BGSn/osA/BdZCcenriOLVfjmu6c+9mH/iMtnNRJRTrfdOMsDuI6triNi09VZMLCIZNcFc5HOTpvEHEdW6zGlZM1DuU/PlWRSaSFIyKrRaRPRG6KSE7b24rICRF5KCI9afti0bs5H3pLR1Y4IpIAcAzA7wFUA1jv9UvOlWYAq3374tK7Of69pZ1zkfwB8BsAbWm5EUBjVOcfYExVAHrSch+Acm+7HEBfLseXNq6PAayI0/iifKqqAPBNWu739sVJ7Ho3x7W3NBfHA3DJ/9Y5fclp7S0dhSgL51sAlWn5bW9fnATq3RyFTHpLRyHKwukAMEtE3hGREgD1SPZKjpMfezcDOezdHKC3NJDr3tIRL/LeA/AVgFsA9uV4wdmC5JebvEJyvfU+gF8i+WrlBoALAEpzNLbfIvk0dA1At/fnvbiMzznHK8dkw8UxmbBwyISFQyYsHDJh4ZAJC4dMWDhkwsIhkx8AyyZIbO5tLBIAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 144x144 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"train_data0, train_label_0 = train_dataset[0][0],train_dataset[0][1]\n",
"train_data0 = train_data0.reshape([28,28])\n",
"plt.figure(figsize=(2,2))\n",
"plt.imshow(train_data0, cmap=plt.cm.binary)\n",
"print('train_data0 label is: ' + str(train_label_0))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 组网\n",
"用paddle.nn下的API,如`Conv2d`、`MaxPool2d`、`Linear`完成LeNet的构建。"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"import paddle.nn.functional as F\n",
"class LeNet(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(LeNet, self).__init__()\n",
" self.conv1 = paddle.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2)\n",
" self.max_pool1 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)\n",
" self.conv2 = paddle.nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1)\n",
" self.max_pool2 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)\n",
" self.linear1 = paddle.nn.Linear(in_features=16*5*5, out_features=120)\n",
" self.linear2 = paddle.nn.Linear(in_features=120, out_features=84)\n",
" self.linear3 = paddle.nn.Linear(in_features=84, out_features=10)\n",
"\n",
" def forward(self, x):\n",
" x = self.conv1(x)\n",
" x = F.relu(x)\n",
" x = self.max_pool1(x)\n",
" x = F.relu(x)\n",
" x = self.conv2(x)\n",
" x = self.max_pool2(x)\n",
" x = paddle.flatten(x, start_axis=1,stop_axis=-1)\n",
" x = self.linear1(x)\n",
" x = F.relu(x)\n",
" x = self.linear2(x)\n",
" x = F.relu(x)\n",
" x = self.linear3(x)\n",
" x = F.softmax(x)\n",
" return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 训练方式一\n",
"组网后,开始对模型进行训练,先构建`train_loader`,加载训练数据,然后定义`train`函数,设置好损失函数后,按batch加载数据,完成模型的训练。"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch: 0, batch_id: 0, loss is: [2.3037894], acc is: [0.140625]\n",
"epoch: 0, batch_id: 100, loss is: [1.6175328], acc is: [0.9375]\n",
"epoch: 0, batch_id: 200, loss is: [1.5388051], acc is: [0.96875]\n",
"epoch: 0, batch_id: 300, loss is: [1.5251061], acc is: [0.96875]\n",
"epoch: 0, batch_id: 400, loss is: [1.4678856], acc is: [1.]\n",
"epoch: 0, batch_id: 500, loss is: [1.4944503], acc is: [0.984375]\n",
"epoch: 0, batch_id: 600, loss is: [1.5365536], acc is: [0.96875]\n",
"epoch: 0, batch_id: 700, loss is: [1.4885054], acc is: [0.984375]\n",
"epoch: 0, batch_id: 800, loss is: [1.4872254], acc is: [0.984375]\n",
"epoch: 0, batch_id: 900, loss is: [1.4884174], acc is: [0.984375]\n",
"epoch: 1, batch_id: 0, loss is: [1.4776722], acc is: [1.]\n",
"epoch: 1, batch_id: 100, loss is: [1.4751343], acc is: [1.]\n",
"epoch: 1, batch_id: 200, loss is: [1.4772581], acc is: [1.]\n",
"epoch: 1, batch_id: 300, loss is: [1.4918218], acc is: [0.984375]\n",
"epoch: 1, batch_id: 400, loss is: [1.5038397], acc is: [0.96875]\n",
"epoch: 1, batch_id: 500, loss is: [1.5088196], acc is: [0.96875]\n",
"epoch: 1, batch_id: 600, loss is: [1.4961376], acc is: [0.984375]\n",
"epoch: 1, batch_id: 700, loss is: [1.4755756], acc is: [1.]\n",
"epoch: 1, batch_id: 800, loss is: [1.4921497], acc is: [0.984375]\n",
"epoch: 1, batch_id: 900, loss is: [1.4944404], acc is: [1.]\n"
]
}
],
"source": [
"import paddle\n",
"train_loader = paddle.io.DataLoader(train_dataset, places=paddle.CPUPlace(), batch_size=64, shuffle=True)\n",
"# 加载训练集 batch_size 设为 64\n",
"def train(model):\n",
" model.train()\n",
" epochs = 2\n",
" optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n",
" # 用Adam作为优化函数\n",
" for epoch in range(epochs):\n",
" for batch_id, data in enumerate(train_loader()):\n",
" x_data = data[0]\n",
" y_data = data[1]\n",
" predicts = model(x_data)\n",
" loss = paddle.nn.functional.cross_entropy(predicts, y_data)\n",
" # 计算损失\n",
" acc = paddle.metric.accuracy(predicts, y_data, k=2)\n",
" avg_loss = paddle.mean(loss)\n",
" avg_acc = paddle.mean(acc)\n",
" avg_loss.backward()\n",
" if batch_id % 100 == 0:\n",
" print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, avg_loss.numpy(), avg_acc.numpy()))\n",
" optim.step()\n",
" optim.clear_grad()\n",
"model = LeNet()\n",
"train(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 对模型进行验证\n",
"训练完成后,需要验证模型的效果,此时,加载测试数据集,然后用训练好的模对测试集进行预测,计算损失与精度。"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_id: 0, loss is: [1.4915928], acc is: [1.]\n",
"batch_id: 20, loss is: [1.4818308], acc is: [1.]\n",
"batch_id: 40, loss is: [1.5006062], acc is: [0.984375]\n",
"batch_id: 60, loss is: [1.521233], acc is: [1.]\n",
"batch_id: 80, loss is: [1.4772738], acc is: [1.]\n",
"batch_id: 100, loss is: [1.4755945], acc is: [1.]\n",
"batch_id: 120, loss is: [1.4746133], acc is: [1.]\n",
"batch_id: 140, loss is: [1.4786345], acc is: [1.]\n"
]
}
],
"source": [
"import paddle\n",
"test_loader = paddle.io.DataLoader(test_dataset, places=paddle.CPUPlace(), batch_size=64)\n",
"# 加载测试数据集\n",
"def test(model):\n",
" model.eval()\n",
" batch_size = 64\n",
" for batch_id, data in enumerate(test_loader()):\n",
" x_data = data[0]\n",
" y_data = data[1]\n",
" predicts = model(x_data)\n",
" # 获取预测结果\n",
" loss = paddle.nn.functional.cross_entropy(predicts, y_data)\n",
" acc = paddle.metric.accuracy(predicts, y_data, k=2)\n",
" avg_loss = paddle.mean(loss)\n",
" avg_acc = paddle.mean(acc)\n",
" avg_loss.backward()\n",
" if batch_id % 20 == 0:\n",
" print(\"batch_id: {}, loss is: {}, acc is: {}\".format(batch_id, avg_loss.numpy(), avg_acc.numpy()))\n",
"test(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 训练方式一结束\n",
"以上就是训练方式一,通过这种方式,可以清楚的看到训练和测试中的每一步过程。但是,这种方式句法比较复杂。因此,我们提供了训练方式二,能够更加快速、高效的完成模型的训练与测试。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.训练方式二\n",
"通过paddle提供的`Model` 构建实例,使用封装好的训练与测试接口,快速完成模型训练与测试。"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"from paddle.static import InputSpec\n",
"from paddle.metric import Accuracy\n",
"inputs = InputSpec([None, 784], 'float32', 'x')\n",
"labels = InputSpec([None, 10], 'float32', 'x')\n",
"model = paddle.Model(LeNet(), inputs, labels)\n",
"optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n",
"\n",
"model.prepare(\n",
" optim,\n",
" paddle.nn.loss.CrossEntropyLoss(),\n",
" Accuracy(topk=(1, 2))\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 使用model.fit来训练模型"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/2\n",
"step 200/938 - loss: 1.5219 - acc_top1: 0.9829 - acc_top2: 0.9965 - 14ms/step\n",
"step 400/938 - loss: 1.4765 - acc_top1: 0.9825 - acc_top2: 0.9958 - 13ms/step\n",
"step 600/938 - loss: 1.4624 - acc_top1: 0.9823 - acc_top2: 0.9953 - 13ms/step\n",
"step 800/938 - loss: 1.4768 - acc_top1: 0.9829 - acc_top2: 0.9955 - 13ms/step\n",
"step 938/938 - loss: 1.4612 - acc_top1: 0.9836 - acc_top2: 0.9956 - 13ms/step\n",
"Epoch 2/2\n",
"step 200/938 - loss: 1.4705 - acc_top1: 0.9834 - acc_top2: 0.9959 - 13ms/step\n",
"step 400/938 - loss: 1.4620 - acc_top1: 0.9833 - acc_top2: 0.9960 - 13ms/step\n",
"step 600/938 - loss: 1.4613 - acc_top1: 0.9830 - acc_top2: 0.9960 - 13ms/step\n",
"step 800/938 - loss: 1.4763 - acc_top1: 0.9831 - acc_top2: 0.9960 - 13ms/step\n",
"step 938/938 - loss: 1.4924 - acc_top1: 0.9834 - acc_top2: 0.9959 - 13ms/step\n"
]
}
],
"source": [
"model.fit(train_dataset,\n",
" epochs=2,\n",
" batch_size=64,\n",
" log_freq=200\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 使用model.evaluate来预测模型"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Eval begin...\n",
"step 20/157 - loss: 1.5246 - acc_top1: 0.9773 - acc_top2: 0.9969 - 6ms/step\n",
"step 40/157 - loss: 1.4622 - acc_top1: 0.9758 - acc_top2: 0.9961 - 6ms/step\n",
"step 60/157 - loss: 1.5241 - acc_top1: 0.9763 - acc_top2: 0.9951 - 6ms/step\n",
"step 80/157 - loss: 1.4612 - acc_top1: 0.9787 - acc_top2: 0.9959 - 6ms/step\n",
"step 100/157 - loss: 1.4612 - acc_top1: 0.9823 - acc_top2: 0.9967 - 5ms/step\n",
"step 120/157 - loss: 1.4612 - acc_top1: 0.9835 - acc_top2: 0.9966 - 5ms/step\n",
"step 140/157 - loss: 1.4612 - acc_top1: 0.9844 - acc_top2: 0.9969 - 5ms/step\n",
"step 157/157 - loss: 1.4612 - acc_top1: 0.9838 - acc_top2: 0.9966 - 5ms/step\n",
"Eval samples: 10000\n"
]
},
{
"data": {
"text/plain": [
"{'loss': [1.4611504], 'acc_top1': 0.9838, 'acc_top2': 0.9966}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.evaluate(test_dataset, log_freq=20, batch_size=64)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 训练方式二结束\n",
"以上就是训练方式二,可以快速、高效的完成网络模型训练与预测。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 总结\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"以上就是用LeNet对手写数字数据及MNIST进行分类。本示例提供了两种训练模型的方式,一种可以快速完成模型的组建与预测,非常适合新手用户上手。另一种则需要多个步骤来完成模型的训练,适合进阶用户使用。"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
......@@ -9,31 +9,28 @@
"本示例教程演示如何在IMDB数据集上用简单的BOW网络完成文本分类的任务。\n",
"\n",
"IMDB数据集是一个对电影评论标注为正向评论与负向评论的数据集,共有25000条文本数据作为训练集,25000条文本数据作为测试集。\n",
"该数据集的官方地址为: http://ai.stanford.edu/~amaas/data/sentiment/\n",
"\n",
"- Warning: `paddle.dataset.imdb`先在是一个非常粗野的实现,后续需要有替代的方案。"
"该数据集的官方地址为: http://ai.stanford.edu/~amaas/data/sentiment/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 环境设置\n",
"## 环境设置\n",
"\n",
"本示例基于飞桨开源框架2.0版本。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n",
"264e76cae6861ad9b1d4bcd8c3212f7a78c01e4d\n"
"2.0.0-beta0\n"
]
}
],
......@@ -42,22 +39,21 @@
"import numpy as np\n",
"\n",
"paddle.disable_static()\n",
"print(paddle.__version__)\n",
"print(paddle.__git_commit__)\n"
"print(paddle.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 加载数据\n",
"## 加载数据\n",
"\n",
"我们会使用`paddle.dataset`完成数据下载,构建字典和准备数据读取器。在飞桨2.0版本中,推荐使用padding的方式来对同一个batch中长度不一的数据进行补齐,所以在字典中,我们还会添加一个特殊的`<pad>`词,用来在后续对batch中较短的句子进行填充。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"metadata": {},
"outputs": [
{
......@@ -78,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"outputs": [
{
......@@ -119,14 +115,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 参数设置\n",
"## 参数设置\n",
"\n",
"在这里我们设置一下词表大小,`embedding`的大小,batch_size,等等"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
......@@ -157,7 +153,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 23,
"metadata": {},
"outputs": [
{
......@@ -183,14 +179,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 用padding的方式对齐数据\n",
"## 用padding的方式对齐数据\n",
"\n",
"文本数据中,每一句话的长度都是不一样的,为了方便后续的神经网络的计算,常见的处理方式是把数据集中的数据都统一成同样长度的数据。这包括:对于较长的数据进行截断处理,对于较短的数据用特殊的词`<pad>`进行填充。接下来的代码会对数据集中的数据进行这样的处理。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 24,
"metadata": {},
"outputs": [
{
......@@ -234,14 +230,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 组建网络\n",
"## 组建网络\n",
"\n",
"本示例中,我们将会使用一个不考虑词的顺序的BOW的网络,在查找到每个词对应的embedding后,简单的取平均,作为一个句子的表示。然后用`Linear`进行线性变换。为了防止过拟合,我们还使用了`Dropout`。"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
......@@ -264,24 +260,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 开始模型的训练\n"
"## 开始模型的训练\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch: 0, batch_id: 0, loss is: [0.6926701]\n",
"epoch: 0, batch_id: 500, loss is: [0.41248566]\n",
"[validation] accuracy/loss: 0.8505121469497681/0.3615057170391083\n",
"epoch: 1, batch_id: 0, loss is: [0.29521096]\n",
"epoch: 1, batch_id: 500, loss is: [0.2916747]\n",
"[validation] accuracy/loss: 0.86475670337677/0.3259459137916565\n"
"epoch: 0, batch_id: 0, loss is: [0.6918494]\n",
"epoch: 0, batch_id: 500, loss is: [0.33142853]\n",
"[validation] accuracy/loss: 0.8506321907043457/0.3620821535587311\n",
"epoch: 1, batch_id: 0, loss is: [0.37161]\n",
"epoch: 1, batch_id: 500, loss is: [0.2296829]\n",
"[validation] accuracy/loss: 0.8622759580612183/0.3286365270614624\n"
]
}
],
......@@ -311,8 +307,8 @@
" if batch_id % 500 == 0:\n",
" print(\"epoch: {}, batch_id: {}, loss is: {}\".format(epoch, batch_id, avg_loss.numpy()))\n",
" avg_loss.backward()\n",
" opt.minimize(avg_loss)\n",
" model.clear_gradients()\n",
" opt.step()\n",
" opt.clear_grad()\n",
"\n",
" # evaluate model after one epoch\n",
" model.eval()\n",
......@@ -345,17 +341,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# The End\n",
"## The End\n",
"\n",
"可以看到,在这个数据集上,经过两轮的迭代可以得到86%左右的准确率。你也可以通过调整网络结构和超参数,来获得更好的效果。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
......@@ -369,8 +358,20 @@
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
......@@ -6,29 +6,23 @@ Text classification on the IMDB dataset with a BOW network
The IMDB dataset contains movie reviews labeled as positive or negative, with 25000 text samples for training and 25000 for testing.
The official address of the dataset is: http://ai.stanford.edu/~amaas/data/sentiment/
- Warning:
  ``paddle.dataset.imdb`` is currently a very crude implementation; a replacement will be needed.
Environment setup
-----------------
This example is based on version 2.0 of the Paddle open-source framework.
.. code::
.. code:: ipython3
import paddle
import numpy as np
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
264e76cae6861ad9b1d4bcd8c3212f7a78c01e4d
2.0.0-beta0
Load the data
......@@ -36,7 +30,7 @@ The IMDB dataset contains movie reviews labeled as positive or negative
We use ``paddle.dataset`` to download the data, build the vocabulary, and prepare the data reader. In Paddle 2.0, padding is the recommended way to align samples of different lengths within a batch, so we also add a special ``<pad>`` token to the vocabulary, which is later used to pad the shorter sentences in a batch.
.. code::
.. code:: ipython3
print("Loading IMDB word dict....")
word_dict = paddle.dataset.imdb.word_dict()
......@@ -51,7 +45,7 @@ The IMDB dataset contains movie reviews labeled as positive or negative
Loading IMDB word dict....
.. code::
.. code:: ipython3
# add a pad token to the dict for later padding the sequence
word_dict['<pad>'] = len(word_dict)
......@@ -88,7 +82,7 @@ The IMDB dataset contains movie reviews labeled as positive or negative
Here we set the vocabulary size, the ``embedding`` size, the batch_size, and so on.
.. code::
.. code:: ipython3
vocab_size = len(word_dict)
emb_size = 256
......@@ -109,7 +103,7 @@ The IMDB dataset contains movie reviews labeled as positive or negative
Here, let's take one sample out and print it, to get a first intuitive impression of the data.
.. code::
.. code:: ipython3
# take the first sample and see what it looks like
sent, label = next(train_reader())
......@@ -127,11 +121,11 @@ The IMDB dataset contains movie reviews labeled as positive or negative
Align the data with padding
----------------------------
---------------------------
In text data, every sentence has a different length. To simplify the subsequent neural-network computation, a common approach is to unify all samples in the dataset to the same length: truncate the longer ones, and pad the shorter ones with the special ``<pad>`` token. The code below applies this processing to the dataset.
.. code::
.. code:: ipython3
def create_padded_dataset(reader):
padded_sents = []
......@@ -172,7 +166,7 @@ The IMDB dataset contains movie reviews labeled as positive or negative
In this example we will use a BOW network that ignores word order: after looking up the embedding of each word, we simply take the average as the representation of the sentence, and then apply a ``Linear`` transformation. To prevent overfitting, we also use ``Dropout``.
.. code::
.. code:: ipython3
class MyNet(paddle.nn.Layer):
def __init__(self):
......@@ -191,7 +185,7 @@ The IMDB dataset contains movie reviews labeled as positive or negative
Start training the model
------------------------
.. code::
.. code:: ipython3
def train(model):
model.train()
......@@ -218,8 +212,8 @@ The IMDB dataset contains movie reviews labeled as positive or negative
if batch_id % 500 == 0:
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
avg_loss.backward()
opt.minimize(avg_loss)
model.clear_gradients()
opt.step()
opt.clear_grad()
# evaluate model after one epoch
model.eval()
......@@ -250,16 +244,15 @@ The IMDB dataset contains movie reviews labeled as positive or negative
.. parsed-literal::
epoch: 0, batch_id: 0, loss is: [0.6926701]
epoch: 0, batch_id: 500, loss is: [0.41248566]
[validation] accuracy/loss: 0.8505121469497681/0.3615057170391083
epoch: 1, batch_id: 0, loss is: [0.29521096]
epoch: 1, batch_id: 500, loss is: [0.2916747]
[validation] accuracy/loss: 0.86475670337677/0.3259459137916565
epoch: 0, batch_id: 0, loss is: [0.6918494]
epoch: 0, batch_id: 500, loss is: [0.33142853]
[validation] accuracy/loss: 0.8506321907043457/0.3620821535587311
epoch: 1, batch_id: 0, loss is: [0.37161]
epoch: 1, batch_id: 500, loss is: [0.2296829]
[validation] accuracy/loss: 0.8622759580612183/0.3286365270614624
The End
--------
-------
As you can see, on this dataset two epochs of training reach an accuracy of about 86%. You can also tune the network structure and hyperparameters for better results.
......@@ -10,7 +10,7 @@ trigram, and so on. In practice, bigram and trigram are usually used for the computation
Environment
-----------
This tutorial is written against paddle-develop; if your environment is a different version, please install paddle-develop first
This tutorial is written against paddle-2.0-beta; if your environment is a different version, please install paddle-2.0-beta first
.. code:: ipython3
......@@ -22,7 +22,7 @@ trigram,以此类推。实际应用通常采用 bigram 和 trigram 进行计
.. parsed-literal::
'0.0.0'
'2.0.0-beta0'
......@@ -39,16 +39,15 @@ context_size is set to 2, meaning trigram; embedding_dim is set to 256.
.. parsed-literal::
--2020-09-09 14:58:26--  https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt
Resolving ocw.mit.edu (ocw.mit.edu)... 151.101.110.133
Connecting to ocw.mit.edu (ocw.mit.edu)|151.101.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
--2020-09-12 13:49:29--  https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt
Connecting to 172.19.57.45:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 5458199 (5.2M) [text/plain]
Saving to: 't8.shakespeare.txt'
t8.shakespeare.txt  100%[===================>]   5.21M  94.1KB/s    in 70s
t8.shakespeare.txt  100%[===================>]   5.21M  2.01MB/s    in 2.6s
2020-09-09 14:59:38 (75.7 KB/s) - 't8.shakespeare.txt' saved [5458199/5458199]
2020-09-12 13:49:33 (2.01 MB/s) - 't8.shakespeare.txt' saved [5458199/5458199]
......@@ -164,6 +163,7 @@ context_size is set to 2, meaning trigram; embedding_dim is set to 256.
import paddle
import numpy as np
import paddle.nn.functional as F
hidden_size = 1024
class NGramModel(paddle.nn.Layer):
def __init__(self, vocab_size, embedding_dim, context_size):
......@@ -176,7 +176,7 @@ context_size is set to 2, meaning trigram; embedding_dim is set to 256.
x = self.embedding(x)
x = paddle.reshape(x, [-1, context_size * embedding_dim])
x = self.linear1(x)
x = paddle.nn.functional.relu(x)
x = F.relu(x)
x = self.linear2(x)
return x
......@@ -185,6 +185,7 @@ context_size is set to 2, meaning trigram; embedding_dim is set to 256.
.. code:: ipython3
import paddle.nn.functional as F
vocab_size = len(vocab)
epochs = 2
losses = []
......@@ -196,37 +197,37 @@ context_size is set to 2, meaning trigram; embedding_dim is set to 256.
x_data = data[0]
y_data = data[1]
predicts = model(x_data)
y_data = paddle.reshape(y_data, ([-1, 1]))
loss = paddle.nn.functional.softmax_with_cross_entropy(predicts, y_data)
y_data = paddle.reshape(y_data, shape=[-1, 1])
loss = F.softmax_with_cross_entropy(predicts, y_data)
avg_loss = paddle.mean(loss)
avg_loss.backward()
if batch_id % 500 == 0:
losses.append(avg_loss.numpy())
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
optim.minimize(avg_loss)
model.clear_gradients()
optim.step()
optim.clear_grad()
model = NGramModel(vocab_size, embedding_dim, context_size)
train(model)
.. parsed-literal::
epoch: 0, batch_id: 0, loss is: [10.252193]
epoch: 0, batch_id: 500, loss is: [6.894636]
epoch: 0, batch_id: 1000, loss is: [6.849346]
epoch: 0, batch_id: 1500, loss is: [6.931605]
epoch: 0, batch_id: 2000, loss is: [6.6860313]
epoch: 0, batch_id: 2500, loss is: [6.2472367]
epoch: 0, batch_id: 3000, loss is: [6.8818874]
epoch: 0, batch_id: 3500, loss is: [6.941615]
epoch: 1, batch_id: 0, loss is: [6.3628616]
epoch: 1, batch_id: 500, loss is: [6.2065206]
epoch: 1, batch_id: 1000, loss is: [6.5334334]
epoch: 1, batch_id: 1500, loss is: [6.5788]
epoch: 1, batch_id: 2000, loss is: [6.352103]
epoch: 1, batch_id: 2500, loss is: [6.6272373]
epoch: 1, batch_id: 3000, loss is: [6.801074]
epoch: 1, batch_id: 3500, loss is: [6.2274427]
epoch: 0, batch_id: 0, loss is: [10.252176]
epoch: 0, batch_id: 500, loss is: [6.6429553]
epoch: 0, batch_id: 1000, loss is: [6.801544]
epoch: 0, batch_id: 1500, loss is: [6.7114644]
epoch: 0, batch_id: 2000, loss is: [6.628998]
epoch: 0, batch_id: 2500, loss is: [6.511376]
epoch: 0, batch_id: 3000, loss is: [6.878798]
epoch: 0, batch_id: 3500, loss is: [6.8752203]
epoch: 1, batch_id: 0, loss is: [6.5908413]
epoch: 1, batch_id: 500, loss is: [6.9765778]
epoch: 1, batch_id: 1000, loss is: [6.603841]
epoch: 1, batch_id: 1500, loss is: [6.9935036]
epoch: 1, batch_id: 2000, loss is: [6.751287]
epoch: 1, batch_id: 2500, loss is: [7.1222277]
epoch: 1, batch_id: 3000, loss is: [6.6431484]
epoch: 1, batch_id: 3500, loss is: [6.6024966]
Plot the loss curve
......@@ -248,12 +249,12 @@ context_size is set to 2, meaning trigram; embedding_dim is set to 256.
.. parsed-literal::
[<matplotlib.lines.Line2D at 0x14e27b3c8>]
[<matplotlib.lines.Line2D at 0x15c295cc0>]
.. image:: n_gram_model_files/n_gram_model_19_1.png
.. image:: https://github.com/PaddlePaddle/FluidDoc/blob/develop/doc/paddle/tutorial/nlp_case/n_gram_model/n_gram_model_files/n_gram_model_001.png?raw=true
Prediction
......
......@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 环境设置\n",
"## 环境设置\n",
"\n",
"本示例教程基于飞桨2.0-beta版本。"
]
......@@ -27,8 +27,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n",
"89af2088b6e74bdfeef2d4d78e08461ed2aafee5\n"
"2.0.0-beta0\n"
]
}
],
......@@ -39,15 +38,14 @@
"import numpy as np\n",
"\n",
"paddle.disable_static()\n",
"print(paddle.__version__)\n",
"print(paddle.__git_commit__)"
"print(paddle.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 下载数据集\n",
"## 下载数据集\n",
"\n",
"我们将使用 [http://www.manythings.org/anki/](http://www.manythings.org/anki/) 提供的中英文的英汉句对作为数据集,来完成本任务。该数据集含有23610个中英文双语的句对。"
]
......@@ -61,16 +59,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"--2020-09-04 16:13:35-- https://www.manythings.org/anki/cmn-eng.zip\n",
"Resolving www.manythings.org (www.manythings.org)... 104.24.109.196, 172.67.173.198, 2606:4700:3037::6818:6cc4, ...\n",
"Connecting to www.manythings.org (www.manythings.org)|104.24.109.196|:443... connected.\n",
"--2020-09-10 16:17:25-- https://www.manythings.org/anki/cmn-eng.zip\n",
"Resolving www.manythings.org (www.manythings.org)... 2606:4700:3033::6818:6dc4, 2606:4700:3036::ac43:adc6, 2606:4700:3037::6818:6cc4, ...\n",
"Connecting to www.manythings.org (www.manythings.org)|2606:4700:3033::6818:6dc4|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 1030722 (1007K) [application/zip]\n",
"Saving to: ‘cmn-eng.zip’\n",
"\n",
"cmn-eng.zip 100%[===================>] 1007K 520KB/s in 1.9s \n",
"cmn-eng.zip 100%[===================>] 1007K 91.2KB/s in 11s \n",
"\n",
"2020-09-04 16:13:38 (520 KB/s) - ‘cmn-eng.zip’ saved [1030722/1030722]\n",
"2020-09-10 16:17:38 (91.2 KB/s) - ‘cmn-eng.zip’ saved [1030722/1030722]\n",
"\n",
"Archive: cmn-eng.zip\n",
" inflating: cmn.txt \n",
......@@ -91,7 +89,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
" 23610 cmn.txt\r\n"
" 23610 cmn.txt\n"
]
}
],
......@@ -103,7 +101,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 构建双语句对的数据结构\n",
"## 构建双语句对的数据结构\n",
"\n",
"接下来我们通过处理下载下来的双语句对的文本文件,将双语句对读入到python的数据结构中。这里做了如下的处理。\n",
"\n",
......@@ -169,7 +167,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 创建词表\n",
"## 创建词表\n",
"\n",
"接下来我们分别创建中英文的词表,这两份词表会用来将英文和中文的句子转换为词的ID构成的序列。词表中还加入了如下三个特殊的词:\n",
"- `<pad>`: 用来对较短的句子进行填充。\n",
......@@ -220,7 +218,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 创建padding过的数据集\n",
"## 创建padding过的数据集\n",
"\n",
"接下来根据词表,我们将会创建一份实际的用于训练的用numpy array组织起来的数据集。\n",
"- 所有的句子都通过`<pad>`补充成为了长度相同的句子。\n",
......@@ -271,7 +269,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 创建网络\n",
"## 创建网络\n",
"\n",
"我们将会创建一个Encoder-AttentionDecoder架构的模型结构用来完成机器翻译任务。\n",
"首先我们将设置一些必要的网络结构中用到的参数。"
......@@ -296,7 +294,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Encoder部分\n",
"## Encoder部分\n",
"\n",
"在编码器的部分,我们通过查找完Embedding之后接一个LSTM的方式构建一个对源语言编码的网络。飞桨的RNN系列的API,除了LSTM之外,还提供了SimleRNN, GRU供使用,同时,还可以使用反向RNN,双向RNN,多层RNN等形式。也可以通过`dropout`参数设置是否对多层RNN的中间层进行`dropout`处理,来防止过拟合。\n",
"\n",
......@@ -328,7 +326,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# AttentionDecoder部分\n",
"## AttentionDecoder部分\n",
"\n",
"在解码器部分,我们通过一个带有注意力机制的LSTM来完成解码。\n",
"\n",
......@@ -402,7 +400,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 训练模型\n",
"## 训练模型\n",
"\n",
"接下来我们开始训练模型。\n",
"\n",
......@@ -421,65 +419,65 @@
"output_type": "stream",
"text": [
"epoch:0\n",
"iter 0, loss:[7.6194725]\n",
"iter 200, loss:[3.4147663]\n",
"iter 0, loss:[7.620109]\n",
"iter 200, loss:[2.9760551]\n",
"epoch:1\n",
"iter 0, loss:[3.0931656]\n",
"iter 200, loss:[2.7543137]\n",
"iter 0, loss:[2.9679596]\n",
"iter 200, loss:[3.161064]\n",
"epoch:2\n",
"iter 0, loss:[2.8413522]\n",
"iter 200, loss:[2.340513]\n",
"iter 0, loss:[2.7516625]\n",
"iter 200, loss:[2.9755423]\n",
"epoch:3\n",
"iter 0, loss:[2.597812]\n",
"iter 200, loss:[2.5552855]\n",
"iter 0, loss:[2.7249248]\n",
"iter 200, loss:[2.3419888]\n",
"epoch:4\n",
"iter 0, loss:[2.0783448]\n",
"iter 200, loss:[2.4544785]\n",
"iter 0, loss:[2.3236473]\n",
"iter 200, loss:[2.3453429]\n",
"epoch:5\n",
"iter 0, loss:[1.8709135]\n",
"iter 200, loss:[1.8736631]\n",
"iter 0, loss:[2.1926975]\n",
"iter 200, loss:[2.1977856]\n",
"epoch:6\n",
"iter 0, loss:[1.9589291]\n",
"iter 200, loss:[2.119414]\n",
"iter 0, loss:[2.014393]\n",
"iter 200, loss:[2.1863418]\n",
"epoch:7\n",
"iter 0, loss:[1.5829577]\n",
"iter 200, loss:[1.6002902]\n",
"iter 0, loss:[1.8619595]\n",
"iter 200, loss:[1.8904227]\n",
"epoch:8\n",
"iter 0, loss:[1.6022769]\n",
"iter 200, loss:[1.52694]\n",
"iter 0, loss:[1.5901132]\n",
"iter 200, loss:[1.7812968]\n",
"epoch:9\n",
"iter 0, loss:[1.3616685]\n",
"iter 200, loss:[1.5420443]\n",
"iter 0, loss:[1.341565]\n",
"iter 200, loss:[1.4957166]\n",
"epoch:10\n",
"iter 0, loss:[1.0397792]\n",
"iter 200, loss:[1.2458231]\n",
"iter 0, loss:[1.2202356]\n",
"iter 200, loss:[1.3485341]\n",
"epoch:11\n",
"iter 0, loss:[1.2107158]\n",
"iter 200, loss:[1.426417]\n",
"iter 0, loss:[1.1035374]\n",
"iter 200, loss:[1.2871654]\n",
"epoch:12\n",
"iter 0, loss:[1.1840894]\n",
"iter 200, loss:[1.0999664]\n",
"iter 0, loss:[1.194801]\n",
"iter 200, loss:[1.0479954]\n",
"epoch:13\n",
"iter 0, loss:[1.0968472]\n",
"iter 200, loss:[0.8149167]\n",
"iter 0, loss:[1.0022258]\n",
"iter 200, loss:[1.0899843]\n",
"epoch:14\n",
"iter 0, loss:[0.95585203]\n",
"iter 200, loss:[1.0070628]\n",
"iter 0, loss:[0.93466896]\n",
"iter 200, loss:[0.99347967]\n",
"epoch:15\n",
"iter 0, loss:[0.89463925]\n",
"iter 200, loss:[0.8288595]\n",
"iter 0, loss:[0.83665943]\n",
"iter 200, loss:[0.9594004]\n",
"epoch:16\n",
"iter 0, loss:[0.5672495]\n",
"iter 200, loss:[0.7317069]\n",
"iter 0, loss:[0.78929776]\n",
"iter 200, loss:[0.945769]\n",
"epoch:17\n",
"iter 0, loss:[0.76785177]\n",
"iter 200, loss:[0.5319323]\n",
"iter 0, loss:[0.62574965]\n",
"iter 200, loss:[0.6308163]\n",
"epoch:18\n",
"iter 0, loss:[0.5250005]\n",
"iter 200, loss:[0.4182841]\n",
"iter 0, loss:[0.63433456]\n",
"iter 200, loss:[0.6287957]\n",
"epoch:19\n",
"iter 0, loss:[0.52320284]\n",
"iter 200, loss:[0.47618982]\n"
"iter 0, loss:[0.54270047]\n",
"iter 200, loss:[0.72688276]\n"
]
}
],
......@@ -527,16 +525,15 @@
" print(\"iter {}, loss:{}\".format(iteration, loss.numpy()))\n",
"\n",
" loss.backward()\n",
" opt.minimize(loss)\n",
" encoder.clear_gradients()\n",
" atten_decoder.clear_gradients()"
" opt.step()\n",
" opt.clear_grad()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用模型进行机器翻译\n",
"## 使用模型进行机器翻译\n",
"\n",
"根据你所使用的计算设备的不同,上面的训练过程可能需要不等的时间。(在一台Mac笔记本上,大约耗时15~20分钟)\n",
"完成上面的模型训练之后,我们可以得到一个能够从英文翻译成中文的机器翻译模型。接下来我们通过一个greedy search来实现使用该模型完成实际的机器翻译。(实际的任务中,你可能需要用beam search算法来提升效果)"
......@@ -544,43 +541,43 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"i agree with him\n",
"true: 我同意他。\n",
"pred: 我同意他。\n",
"i think i ll take a bath tonight\n",
"true: 我想我今晚會洗澡。\n",
"pred: 我想我今晚會洗澡。\n",
"he asked for a drink of water\n",
"true: 他要了水喝。\n",
"pred: 他喝了一杯水。\n",
"i began running\n",
"true: 我開始跑。\n",
"pred: 我開始跑。\n",
"i m sick\n",
"true: 我生病了。\n",
"pred: 我生病了。\n",
"you had better go to the dentist s\n",
"true: 你最好去看牙醫。\n",
"pred: 你最好去看牙醫。\n",
"we went for a walk in the forest\n",
"true: 我们去了林中散步。\n",
"pred: 我們去公园散步。\n",
"you ve arrived very early\n",
"true: 你來得很早。\n",
"pred: 你去早个。\n",
"he pretended not to be listening\n",
"true: 他裝作沒在聽。\n",
"pred: 他假装聽到它。\n",
"he always wanted to study japanese\n",
"true: 他一直想學日語。\n",
"pred: 他一直想學日語。\n"
"i want to study french\n",
"true: 我要学法语。\n",
"pred: 我要学法语。\n",
"i didn t know that he was there\n",
"true: 我不知道他在那裡。\n",
"pred: 我不知道他在那裡。\n",
"i called tom\n",
"true: 我給湯姆打了電話。\n",
"pred: 我看見湯姆了。\n",
"he is getting along with his employees\n",
"true: 他和他的員工相處。\n",
"pred: 他和他的員工相處。\n",
"we raced toward the fire\n",
"true: 我們急忙跑向火。\n",
"pred: 我們住在美國。\n",
"i ran away in a hurry\n",
"true: 我趕快跑走了。\n",
"pred: 我在班里是最高。\n",
"he cut the envelope open\n",
"true: 他裁開了那個信封。\n",
"pred: 他裁開了信封。\n",
"he s shorter than tom\n",
"true: 他比湯姆矮。\n",
"pred: 他比湯姆矮。\n",
"i ve just started playing tennis\n",
"true: 我剛開始打網球。\n",
"pred: 我剛去打網球。\n",
"i need to go home\n",
"true: 我该回家了。\n",
"pred: 我该回家了。\n"
]
}
],
......@@ -628,17 +625,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# The End\n",
"## The End\n",
"\n",
"你还可以通过变换网络结构,调整数据集,尝试不同的参数的方式来进一步提升本示例当中的机器翻译的效果。同时,也可以尝试在其他的类似的任务中用飞桨来完成实际的实践。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
......@@ -657,7 +647,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
"version": "3.7.3"
}
},
"nbformat": 4,
......
......@@ -4,7 +4,7 @@
This tutorial shows how to complete a machine translation task with Paddle. We will use Paddle's LSTM APIs to build a ``sequence to sequence with attention`` model and, on the example dataset, perform machine translation from English to Chinese.
Environment setup
---------
-----------------
This tutorial is based on Paddle 2.0-beta.
......@@ -17,17 +17,15 @@
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
89af2088b6e74bdfeef2d4d78e08461ed2aafee5
2.0.0-beta0
Download the dataset
------------
--------------------
We will use the English-Chinese sentence pairs provided by http://www.manythings.org/anki/
as the dataset for this task. It contains 23610 bilingual English-Chinese sentence pairs.
......@@ -39,16 +37,16 @@
.. parsed-literal::
--2020-09-04 16:13:35-- https://www.manythings.org/anki/cmn-eng.zip
Resolving www.manythings.org (www.manythings.org)... 104.24.109.196, 172.67.173.198, 2606:4700:3037::6818:6cc4, ...
Connecting to www.manythings.org (www.manythings.org)|104.24.109.196|:443... connected.
--2020-09-10 16:17:25-- https://www.manythings.org/anki/cmn-eng.zip
Resolving www.manythings.org (www.manythings.org)... 2606:4700:3033::6818:6dc4, 2606:4700:3036::ac43:adc6, 2606:4700:3037::6818:6cc4, ...
Connecting to www.manythings.org (www.manythings.org)|2606:4700:3033::6818:6dc4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1030722 (1007K) [application/zip]
Saving to: ‘cmn-eng.zip’
cmn-eng.zip 100%[===================>] 1007K 520KB/s in 1.9s
cmn-eng.zip 100%[===================>] 1007K 91.2KB/s in 11s
2020-09-04 16:13:38 (520 KB/s) - ‘cmn-eng.zip’ saved [1030722/1030722]
2020-09-10 16:17:38 (91.2 KB/s) - ‘cmn-eng.zip’ saved [1030722/1030722]
Archive: cmn-eng.zip
inflating: cmn.txt
......@@ -62,11 +60,11 @@
.. parsed-literal::
23610 cmn.txt
23610 cmn.txt
Build the data structure of bilingual sentence pairs
-------------------------
-----------------------------------------------------
Next we process the downloaded text file of bilingual sentence pairs and read the pairs into Python data structures. The following processing is applied.
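
The concrete preprocessing steps are elided from this diff; the sketch below illustrates the general idea of reading cmn.txt into (English, Chinese) token pairs (the normalization shown is an assumption, not the original code):

.. code:: ipython3

    import re

    lines = open('cmn.txt', encoding='utf-8').read().strip().split('\n')
    pairs = []
    for line in lines:
        # each line is: English sentence <tab> Chinese sentence <tab> attribution
        en, cn = line.split('\t')[:2]
        # lowercase the English side and keep only letters as word tokens
        en_tokens = re.sub(r"[^a-z ]", " ", en.lower()).split()
        # treat every Chinese character as one token
        cn_tokens = list(cn.strip())
        pairs.append((en_tokens, cn_tokens))
    print(len(pairs), pairs[0])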
......@@ -116,7 +114,7 @@
Create the vocabularies
----------
-----------------------
Next we create separate Chinese and English vocabularies; they are used to convert English and Chinese sentences into sequences of word IDs. The following three special tokens are also added to the vocabularies:
- ``<pad>``: used to pad the shorter sentences. - ``<bos>``: “begin of
......@@ -157,7 +155,7 @@ Note:
Create the padded dataset
-----------------------------
-------------------------
Next, based on the vocabularies, we will create the actual training dataset, organized as numpy
arrays. -
......@@ -198,7 +196,7 @@ arrays. -
Create the network
---------
------------------
We will create a model with an Encoder-AttentionDecoder architecture to complete the machine translation task.
First, we set some parameters needed by the network structure.
......@@ -214,7 +212,7 @@ arrays. -
batch_size = 16
The Encoder
----------------
-----------
In the encoder, we build a network that encodes the source language by an Embedding lookup followed by an LSTM. Besides LSTM, Paddle's RNN family of APIs also provides SimpleRNN and
GRU; reversed RNNs, bidirectional RNNs, and multi-layer RNNs are available as well. You can also use the ``dropout`` parameter to choose whether to apply ``dropout`` to the intermediate layers of a multi-layer RNN, to prevent overfitting.
......@@ -239,7 +237,7 @@ APIs such as LSTMCell to build single-step RNN computation more flexibly, or even inherit from RNNCellBa
return x
The AttentionDecoder
------------------------
--------------------
In the decoder, we use an LSTM with an attention mechanism to perform the decoding.
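
The decoder class itself is elided from this diff. To illustrate the idea, here is a minimal sketch of one decoding step; for brevity it uses dot-product attention (the tutorial's AttentionDecoder may score differently), and all names and sizes here are assumptions:

.. code:: ipython3

    import paddle
    import paddle.nn.functional as F

    class SketchAttentionDecoder(paddle.nn.Layer):
        def __init__(self, vocab_size, embedding_dim, hidden_size):
            super(SketchAttentionDecoder, self).__init__()
            self.embedding = paddle.nn.Embedding(vocab_size, embedding_dim)
            self.lstm = paddle.nn.LSTM(embedding_dim + hidden_size, hidden_size)
            self.out = paddle.nn.Linear(hidden_size, vocab_size)

        def forward(self, word, enc_outputs, hidden, cell):
            # word: (N, 1) previous target token; enc_outputs: (N, T, H)
            # hidden, cell: (1, N, H) decoder LSTM state
            x = self.embedding(word)                        # (N, 1, E)
            # score each encoder position against the current decoder state
            query = paddle.transpose(hidden, [1, 0, 2])     # (N, 1, H)
            scores = paddle.matmul(query, enc_outputs, transpose_y=True)
            weights = F.softmax(scores, axis=-1)            # (N, 1, T)
            context = paddle.matmul(weights, enc_outputs)   # (N, 1, H)
            # feed [embedding; attention context] into the LSTM for one step
            rnn_input = paddle.concat([x, context], axis=-1)
            output, (hidden, cell) = self.lstm(rnn_input, (hidden, cell))
            logits = self.out(paddle.squeeze(output, axis=1))
            return logits, hidden, cell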
......@@ -358,77 +356,76 @@ The AttentionDecoder
print("iter {}, loss:{}".format(iteration, loss.numpy()))
loss.backward()
opt.minimize(loss)
encoder.clear_gradients()
atten_decoder.clear_gradients()
opt.step()
opt.clear_grad()
.. parsed-literal::
epoch:0
iter 0, loss:[7.6194725]
iter 200, loss:[3.4147663]
iter 0, loss:[7.620109]
iter 200, loss:[2.9760551]
epoch:1
iter 0, loss:[3.0931656]
iter 200, loss:[2.7543137]
iter 0, loss:[2.9679596]
iter 200, loss:[3.161064]
epoch:2
iter 0, loss:[2.8413522]
iter 200, loss:[2.340513]
iter 0, loss:[2.7516625]
iter 200, loss:[2.9755423]
epoch:3
iter 0, loss:[2.597812]
iter 200, loss:[2.5552855]
iter 0, loss:[2.7249248]
iter 200, loss:[2.3419888]
epoch:4
iter 0, loss:[2.0783448]
iter 200, loss:[2.4544785]
iter 0, loss:[2.3236473]
iter 200, loss:[2.3453429]
epoch:5
iter 0, loss:[1.8709135]
iter 200, loss:[1.8736631]
iter 0, loss:[2.1926975]
iter 200, loss:[2.1977856]
epoch:6
iter 0, loss:[1.9589291]
iter 200, loss:[2.119414]
iter 0, loss:[2.014393]
iter 200, loss:[2.1863418]
epoch:7
iter 0, loss:[1.5829577]
iter 200, loss:[1.6002902]
iter 0, loss:[1.8619595]
iter 200, loss:[1.8904227]
epoch:8
iter 0, loss:[1.6022769]
iter 200, loss:[1.52694]
iter 0, loss:[1.5901132]
iter 200, loss:[1.7812968]
epoch:9
iter 0, loss:[1.3616685]
iter 200, loss:[1.5420443]
iter 0, loss:[1.341565]
iter 200, loss:[1.4957166]
epoch:10
iter 0, loss:[1.0397792]
iter 200, loss:[1.2458231]
iter 0, loss:[1.2202356]
iter 200, loss:[1.3485341]
epoch:11
iter 0, loss:[1.2107158]
iter 200, loss:[1.426417]
iter 0, loss:[1.1035374]
iter 200, loss:[1.2871654]
epoch:12
iter 0, loss:[1.1840894]
iter 200, loss:[1.0999664]
iter 0, loss:[1.194801]
iter 200, loss:[1.0479954]
epoch:13
iter 0, loss:[1.0968472]
iter 200, loss:[0.8149167]
iter 0, loss:[1.0022258]
iter 200, loss:[1.0899843]
epoch:14
iter 0, loss:[0.95585203]
iter 200, loss:[1.0070628]
iter 0, loss:[0.93466896]
iter 200, loss:[0.99347967]
epoch:15
iter 0, loss:[0.89463925]
iter 200, loss:[0.8288595]
iter 0, loss:[0.83665943]
iter 200, loss:[0.9594004]
epoch:16
iter 0, loss:[0.5672495]
iter 200, loss:[0.7317069]
iter 0, loss:[0.78929776]
iter 200, loss:[0.945769]
epoch:17
iter 0, loss:[0.76785177]
iter 200, loss:[0.5319323]
iter 0, loss:[0.62574965]
iter 200, loss:[0.6308163]
epoch:18
iter 0, loss:[0.5250005]
iter 200, loss:[0.4182841]
iter 0, loss:[0.63433456]
iter 200, loss:[0.6287957]
epoch:19
iter 0, loss:[0.52320284]
iter 200, loss:[0.47618982]
iter 0, loss:[0.54270047]
iter 200, loss:[0.72688276]
Using the model for machine translation
-----------------------
---------------------------------------
Depending on the compute device you use, the training above may take a varying amount of time. (On a Mac laptop it takes roughly 15~20 minutes.)
After completing the training above, we obtain a machine translation model that can translate English into Chinese. Next, through a greedy
......@@ -478,40 +475,39 @@ search算法来提升效果)
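A minimal greedy-decoding sketch is shown below; the decoder call signature, ``bos_id``/``eos_id`` and ``max_len`` are illustrative assumptions. The idea is simply to feed the highest-scoring word back in at every step:

.. code:: python

    def greedy_translate(en_ids, bos_id, eos_id, max_len=20):
        # en_ids: [1, src_len] tensor of source word IDs
        encoder_outputs = encoder(en_ids)
        word = paddle.to_tensor([[bos_id]])
        hidden = None
        result = []
        for _ in range(max_len):
            logits, hidden = atten_decoder(word, hidden, encoder_outputs)
            next_id = int(paddle.argmax(logits, axis=-1).numpy()[0])
            if next_id == eos_id:
                break
            result.append(next_id)
            word = paddle.to_tensor([[next_id]])
        return result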
.. parsed-literal::
i agree with him
true: 我同意他
pred: 我同意他
i think i ll take a bath tonight
true: 我想我今晚會洗澡
pred: 我想我今晚會洗澡
he asked for a drink of water
true: 他要了水喝
pred: 他喝了一杯水
i began running
true: 我開始跑
pred: 我開始跑
i m sick
true: 我生病了
pred: 我生病了
you had better go to the dentist s
true: 你最好去看牙醫
pred: 你最好去看牙醫
we went for a walk in the forest
true: 我们去了林中散步
pred: 我們去公园散步
you ve arrived very early
true: 你來得很早
pred: 你去早个
he pretended not to be listening
true: 他裝作沒在聽
pred: 他假装聽到它
he always wanted to study japanese
true: 他一直想學日語
pred: 他一直想學日語
i want to study french
true: 我要学法语
pred: 我要学法语
i didn t know that he was there
true: 我不知道他在那裡
pred: 我不知道他在那裡
i called tom
true: 我給湯姆打了電話
pred: 我看見湯姆了
he is getting along with his employees
true: 他和他的員工相處
pred: 他和他的員工相處
we raced toward the fire
true: 我們急忙跑向火
pred: 我們住在美國
i ran away in a hurry
true: 我趕快跑走了
pred: 我在班里是最高
he cut the envelope open
true: 他裁開了那個信封
pred: 他裁開了信封
he s shorter than tom
true: 他比湯姆矮
pred: 他比湯姆矮
i ve just started playing tennis
true: 我剛開始打網球
pred: 我剛去打網球
i need to go home
true: 我该回家了
pred: 我该回家了
The End
-------
You can further improve the machine translation results of this example by varying the network structure, adjusting the dataset, or trying different parameters. You can also try using Paddle for hands-on practice on other similar tasks.
......@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 设置环境\n",
"## 设置环境\n",
"\n",
"我们将使用飞桨2.0beta版本,并确认已经开启了动态图模式。"
]
......@@ -29,8 +29,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n",
"89af2088b6e74bdfeef2d4d78e08461ed2aafee5\n"
"2.0.0-beta0\n"
]
}
],
......@@ -40,15 +39,14 @@
"import numpy as np\n",
"\n",
"paddle.disable_static()\n",
"print(paddle.__version__)\n",
"print(paddle.__git_commit__)\n"
"print(paddle.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 基本用法\n",
"## 基本用法\n",
"\n",
"在动态图模式下,您可以直接运行一个飞桨提供的API,它会立刻返回结果到python。不再需要首先创建一个计算图,然后再给定数据去运行。"
]
......@@ -62,16 +60,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[[-0.49341336 -0.8112665 ]\n",
" [ 0.8929015 0.24661176]\n",
" [-0.64440054 -0.7945008 ]\n",
" [-0.07345356 1.3641853 ]]\n",
"[[ 1.5645729 -0.74514765]\n",
" [-0.01248 0.68240154]\n",
" [ 0.11316949 -1.6579045 ]\n",
" [-0.1425675 -1.0153968 ]]\n",
"[1. 2.]\n",
"[[0.5065867 1.1887336 ]\n",
" [1.8929014 2.2466118 ]\n",
" [0.35559946 1.2054992 ]\n",
" [0.92654645 3.3641853 ]]\n",
"[-2.1159463 1.386125 -2.2334023 2.654917 ]\n"
"[[2.5645728 1.2548523 ]\n",
" [0.98752 2.6824017 ]\n",
" [1.1131694 0.3420955 ]\n",
" [0.8574325 0.98460317]]\n",
"[ 0.07427764 1.352323 -3.2026396 -2.173361 ]\n"
]
}
],
......@@ -93,14 +91,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用python的控制流\n",
"## 使用python的控制流\n",
"\n",
"动态图模式下,您可以使用python的条件判断和循环,这类控制语句来执行神经网络的计算。(不再需要`cond`, `loop`这类OP)\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [
{
......@@ -108,12 +106,12 @@
"output_type": "stream",
"text": [
"0 +> [5 6 7]\n",
"1 +> [5 7 9]\n",
"1 -> [-3 -3 -3]\n",
"2 +> [ 5 9 15]\n",
"3 -> [-3 3 21]\n",
"4 -> [-3 11 75]\n",
"5 +> [ 5 37 249]\n",
"6 +> [ 5 69 735]\n",
"4 +> [ 5 21 87]\n",
"5 -> [ -3 27 237]\n",
"6 -> [ -3 59 723]\n",
"7 -> [ -3 123 2181]\n",
"8 +> [ 5 261 6567]\n",
"9 +> [ 5 517 19689]\n"
......@@ -138,7 +136,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 构建更加灵活的网络:控制流\n",
"## 构建更加灵活的网络:控制流\n",
"\n",
"- 使用动态图可以用来创建更加灵活的网络,比如根据控制流选择不同的分支网络,和方便的构建权重共享的网络。接下来我们来看一个具体的例子,在这个例子中,第二个线性变换只有0.5的可能性会运行。\n",
"- 在sequence to sequence with attention的机器翻译的示例中,你会看到更实际的使用动态图构建RNN类的网络带来的灵活性。\n"
......@@ -146,7 +144,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
......@@ -172,28 +170,28 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 [2.0915627]\n",
"200 [0.67530334]\n",
"400 [0.52042854]\n",
"600 [0.28010666]\n",
"800 [0.09739777]\n",
"1000 [0.09307177]\n",
"1200 [0.04252927]\n",
"1400 [0.03095707]\n",
"1600 [0.03022156]\n",
"1800 [0.01616007]\n",
"2000 [0.01069116]\n",
"2200 [0.0055158]\n",
"2400 [0.00195092]\n",
"2600 [0.00101116]\n",
"2800 [0.00192219]\n"
"0 [1.3384138]\n",
"200 [0.7855983]\n",
"400 [0.59084535]\n",
"600 [0.30849028]\n",
"800 [0.26992702]\n",
"1000 [0.03990713]\n",
"1200 [0.07111286]\n",
"1400 [0.01177792]\n",
"1600 [0.03160322]\n",
"1800 [0.02757282]\n",
"2000 [0.00916022]\n",
"2200 [0.00217024]\n",
"2400 [0.00186833]\n",
"2600 [0.00101926]\n",
"2800 [0.0009654]\n"
]
}
],
......@@ -220,39 +218,39 @@
" print(t, loss.numpy())\n",
"\n",
" loss.backward()\n",
" optimizer.minimize(loss)\n",
" model.clear_gradients()"
" optimizer.step()\n",
" optimizer.clear_grad()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 构建更加灵活的网络:共享权重\n",
"## 构建更加灵活的网络:共享权重\n",
"\n",
"- 使用动态图还可以更加方便的创建共享权重的网络,下面的示例展示了一个共享了权重的简单的AutoEncoder的示例。\n",
"- 使用动态图还可以更加方便的创建共享权重的网络,下面的示例展示了一个共享了权重的简单的AutoEncoder。\n",
"- 你也可以参考图像搜索的示例看到共享参数权重的更实际的使用。"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"step: 0, loss: [0.37666085]\n",
"step: 1, loss: [0.3063845]\n",
"step: 2, loss: [0.2647248]\n",
"step: 3, loss: [0.23831272]\n",
"step: 4, loss: [0.21714918]\n",
"step: 5, loss: [0.1955545]\n",
"step: 6, loss: [0.17261818]\n",
"step: 7, loss: [0.15009595]\n",
"step: 8, loss: [0.13051331]\n",
"step: 9, loss: [0.11537809]\n"
"step: 0, loss: [0.33474904]\n",
"step: 1, loss: [0.31669515]\n",
"step: 2, loss: [0.29729688]\n",
"step: 3, loss: [0.27288628]\n",
"step: 4, loss: [0.24694422]\n",
"step: 5, loss: [0.2203041]\n",
"step: 6, loss: [0.19171436]\n",
"step: 7, loss: [0.16213782]\n",
"step: 8, loss: [0.13443354]\n",
"step: 9, loss: [0.11170781]\n"
]
}
],
......@@ -270,15 +268,15 @@
" loss = loss_fn(outputs, inputs)\n",
" loss.backward()\n",
" print(\"step: {}, loss: {}\".format(i, loss.numpy()))\n",
" optimizer.minimize(loss)\n",
" linear.clear_gradients()"
" optimizer.step()\n",
" optimizer.clear_grad()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The end\n",
"## The end\n",
"\n",
"可以看到使用动态图带来了更灵活易用的方式来组网和训练。"
]
......@@ -300,7 +298,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
"version": "3.7.3"
}
},
"nbformat": 4,
......
......@@ -18,14 +18,11 @@
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
89af2088b6e74bdfeef2d4d78e08461ed2aafee5
2.0.0-beta0
Basic usage
......@@ -50,16 +47,16 @@
.. parsed-literal::
[[-0.49341336 -0.8112665 ]
[ 0.8929015 0.24661176]
[-0.64440054 -0.7945008 ]
[-0.07345356 1.3641853 ]]
[[ 1.5645729 -0.74514765]
[-0.01248 0.68240154]
[ 0.11316949 -1.6579045 ]
[-0.1425675 -1.0153968 ]]
[1. 2.]
[[0.5065867 1.1887336 ]
[1.8929014 2.2466118 ]
[0.35559946 1.2054992 ]
[0.92654645 3.3641853 ]]
[-2.1159463 1.386125 -2.2334023 2.654917 ]
[[2.5645728 1.2548523 ]
[0.98752 2.6824017 ]
[1.1131694 0.3420955 ]
[0.8574325 0.98460317]]
[ 0.07427764 1.352323 -3.2026396 -2.173361 ]
Using python control flow
......@@ -87,19 +84,19 @@
.. parsed-literal::
0 +> [5 6 7]
1 +> [5 7 9]
1 -> [-3 -3 -3]
2 +> [ 5 9 15]
3 -> [-3 3 21]
4 -> [-3 11 75]
5 +> [ 5 37 249]
6 +> [ 5 69 735]
4 +> [ 5 21 87]
5 -> [ -3 27 237]
6 -> [ -3 59 723]
7 -> [ -3 123 2181]
8 +> [ 5 261 6567]
9 +> [ 5 517 19689]
Building more flexible networks: control flow
-------------------------------
---------------------------------------------
- Dynamic graphs let you create more flexible networks, for example choosing different branch networks via control flow, or conveniently building weight-sharing networks. Let's look at a concrete example in which the second linear transformation runs with a probability of only 0.5.
- In the sequence to sequence with
......@@ -150,33 +147,33 @@
print(t, loss.numpy())
loss.backward()
optimizer.minimize(loss)
model.clear_gradients()
optimizer.step()
optimizer.clear_grad()
.. parsed-literal::
0 [2.0915627]
200 [0.67530334]
400 [0.52042854]
600 [0.28010666]
800 [0.09739777]
1000 [0.09307177]
1200 [0.04252927]
1400 [0.03095707]
1600 [0.03022156]
1800 [0.01616007]
2000 [0.01069116]
2200 [0.0055158]
2400 [0.00195092]
2600 [0.00101116]
2800 [0.00192219]
0 [1.3384138]
200 [0.7855983]
400 [0.59084535]
600 [0.30849028]
800 [0.26992702]
1000 [0.03990713]
1200 [0.07111286]
1400 [0.01177792]
1600 [0.03160322]
1800 [0.02757282]
2000 [0.00916022]
2200 [0.00217024]
2400 [0.00186833]
2600 [0.00101926]
2800 [0.0009654]
Building more flexible networks: weight sharing
---------------------------------
-----------------------------------------------
- Dynamic graphs also make it easier to create weight-sharing networks; the example below shows an example of a simple AutoEncoder with shared weights.
- Dynamic graphs also make it easier to create weight-sharing networks; the example below shows a simple AutoEncoder with shared weights.
- You can also refer to the image search example for a more practical use of shared parameter weights.
.. code:: ipython3
......@@ -194,25 +191,25 @@
loss = loss_fn(outputs, inputs)
loss.backward()
print("step: {}, loss: {}".format(i, loss.numpy()))
optimizer.minimize(loss)
linear.clear_gradients()
optimizer.step()
optimizer.clear_grad()
.. parsed-literal::
step: 0, loss: [0.37666085]
step: 1, loss: [0.3063845]
step: 2, loss: [0.2647248]
step: 3, loss: [0.23831272]
step: 4, loss: [0.21714918]
step: 5, loss: [0.1955545]
step: 6, loss: [0.17261818]
step: 7, loss: [0.15009595]
step: 8, loss: [0.13051331]
step: 9, loss: [0.11537809]
step: 0, loss: [0.33474904]
step: 1, loss: [0.31669515]
step: 2, loss: [0.29729688]
step: 3, loss: [0.27288628]
step: 4, loss: [0.24694422]
step: 5, loss: [0.2203041]
step: 6, loss: [0.19171436]
step: 7, loss: [0.16213782]
step: 8, loss: [0.13443354]
step: 9, loss: [0.11170781]
The end
--------
-------
As you can see, dynamic graphs bring a more flexible and easier way to build and train networks.
......@@ -31,16 +31,16 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.0.0'"
"'2.0.0-beta0'"
]
},
"execution_count": 4,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
......
......@@ -26,7 +26,7 @@
.. parsed-literal::
'0.0.0'
'2.0.0-beta0'
......
......@@ -18,7 +18,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 普通程序跟机器学习程序的逻辑区别\n",
"## 普通程序跟机器学习程序的逻辑区别\n",
"\n",
"作为一名开发者,你最熟悉的开始学习一门编程语言,或者一个深度学习框架的方式,可能是通过一个hello, world程序。\n",
"\n",
......@@ -37,7 +37,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 22,
"metadata": {},
"outputs": [
{
......@@ -80,7 +80,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 导入飞桨\n",
"## 导入飞桨\n",
"\n",
"为了能够使用飞桨,我们需要先用python的`import`语句导入飞桨`paddle`。\n",
"同时,为了能够更好的对数组进行计算和处理,我们也还需要导入`numpy`。\n",
......@@ -90,28 +90,28 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"paddle version 0.0.0\n"
"paddle 2.0.0-beta0\n"
]
}
],
"source": [
"import paddle\n",
"paddle.disable_static()\n",
"print(\"paddle version \" + paddle.__version__)"
"print(\"paddle \" + paddle.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 准备数据\n",
"## 准备数据\n",
"\n",
"在这个机器学习任务中,我们已经知道了乘客的行驶里程`distance_travelled`,和对应的,这些乘客的总费用`total_fee`。\n",
"通常情况下,在机器学习任务中,像`distance_travelled`这样的输入值,一般被称为`x`(或者特征`feature`),像`total_fee`这样的输出值,一般被称为`y`(或者标签`label`)。\n",
......@@ -121,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
......@@ -133,7 +133,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 用飞桨定义模型的计算\n",
"## 用飞桨定义模型的计算\n",
"\n",
"使用飞桨定义模型的计算的过程,本质上,是我们用python,通过飞桨提供的API,来告诉飞桨我们的计算规则的过程。回顾一下,我们想要通过飞桨用机器学习方法,从数据当中学习出来如下公式当中的`w`和`b`。这样在未来,给定`x`时就可以估算出来`y`值(估算出来的`y`记为`y_predict`)\n",
"\n",
......@@ -150,7 +150,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
......@@ -161,21 +161,21 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 准备好运行飞桨\n",
"## 准备好运行飞桨\n",
"\n",
"机器(计算机)在一开始的时候会随便猜`w`和`b`,我们先看看机器猜的怎么样。你应该可以看到,这时候的`w`是一个随机值,`b`是0.0,这是飞桨的初始化策略,也是这个领域常用的初始化策略。(如果你愿意,也可以采用其他的初始化的方式,今后你也会看到,选择不同的初始化策略也是对于做好深度学习任务来说很重要的一点)。"
]
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"w before optimize: -1.7107375860214233\n",
"w before optimize: -1.696260690689087\n",
"b before optimize: 0.0\n"
]
}
......@@ -192,7 +192,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 告诉飞桨怎么样学习\n",
"## 告诉飞桨怎么样学习\n",
"\n",
"前面我们定义好了神经网络(尽管是一个最简单的神经网络),我们还需要告诉飞桨,怎么样去**学习**,从而能得到参数`w`和`b`。\n",
"\n",
......@@ -205,7 +205,7 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
......@@ -217,26 +217,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 运行优化算法\n",
"## 运行优化算法\n",
"\n",
"接下来,我们让飞桨运行一下这个优化算法,这会是一个前面介绍过的逐步调整参数的过程,你应该可以看到loss值(衡量`y`和`y_predict`的差距的`loss`)在不断的降低。"
]
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 0 loss [2107.3943]\n",
"epoch 1000 loss [7.8432994]\n",
"epoch 2000 loss [1.7537074]\n",
"epoch 3000 loss [0.39211753]\n",
"epoch 4000 loss [0.08767726]\n",
"finished training, loss [0.01963376]\n"
"epoch 0 loss [2094.069]\n",
"epoch 1000 loss [7.8451133]\n",
"epoch 2000 loss [1.7541145]\n",
"epoch 3000 loss [0.39221546]\n",
"epoch 4000 loss [0.08769739]\n",
"finished training, loss [0.0196382]\n"
]
}
],
......@@ -246,8 +246,8 @@
" y_predict = linear(x_data)\n",
" loss = mse_loss(y_predict, y_data)\n",
" loss.backward()\n",
" sgd_optimizer.minimize(loss)\n",
" linear.clear_gradients()\n",
" sgd_optimizer.step()\n",
" sgd_optimizer.clear_grad()\n",
" \n",
" if i%1000 == 0:\n",
" print(\"epoch {} loss {}\".format(i, loss.numpy()))\n",
......@@ -259,22 +259,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 机器学习出来的参数\n",
"## 机器学习出来的参数\n",
"\n",
"经过了这样的对参数`w`和`b`的调整(**学习**),我们再通过下面的程序,来看看现在的参数变成了多少。你应该会发现`w`变成了很接近2.0的一个值,`b`变成了接近10.0的一个值。虽然并不是正好的2和10,但却是从数据当中学习出来的还不错的模型的参数,可以在未来的时候,用从这批数据当中学习到的参数来预估了。(如果你愿意,也可以通过让机器多学习一段时间,从而得到更加接近2.0和10.0的参数值。)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"w after optimize: 2.017843246459961\n",
"b after optimize: 9.771851539611816\n"
"w after optimize: 2.0178451538085938\n",
"b after optimize: 9.771825790405273\n"
]
}
],
......@@ -290,14 +290,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# hello paddle\n",
"## hello paddle\n",
"\n",
"通过这个小示例,希望你已经初步了解了飞桨,能在接下来随着对飞桨的更多学习,来解决实际遇到的问题。"
]
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 10,
"metadata": {},
"outputs": [
{
......@@ -335,9 +335,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
......@@ -51,7 +51,7 @@ world程序。
Next, let's see how to implement this hello, world level machine learning program with Paddle.
Importing Paddle
---------
----------------
To use Paddle, we first need to import the ``paddle`` package with python's ``import`` statement.
To compute with and process arrays more conveniently, we also need to import ``numpy``.
......@@ -62,16 +62,16 @@ world程序。
import paddle
paddle.disable_static()
print("paddle version " + paddle.__version__)
print("paddle " + paddle.__version__)
.. parsed-literal::
paddle version 0.0.0
paddle 2.0.0-beta0
Preparing the data
---------
------------------
In this machine learning task, we already know the passengers' travel distance ``distance_travelled`` and, correspondingly, the total fee ``total_fee`` of these passengers.
Usually, in a machine learning task, an input value like ``distance_travelled`` is called ``x`` (or a feature, ``feature``), and an output value like ``total_fee`` is called ``y`` (or a label, ``label``).
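A minimal sketch of preparing such data as paddle tensors (the numbers are illustrative, chosen to be consistent with the relation total_fee = 2 * distance_travelled + 10 that this example learns):

.. code:: ipython3

    x_data = paddle.to_tensor([[1.], [3.], [5.], [9.], [10.], [20.]])
    y_data = paddle.to_tensor([[12.], [16.], [20.], [28.], [30.], [50.]])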
......@@ -104,7 +104,7 @@ world程序。
linear = paddle.nn.Linear(in_features=1, out_features=1)
Getting ready to run Paddle
----------------
---------------------------
At the beginning the machine (computer) simply guesses ``w`` and ``b``; let's first see how well it guesses. You should see that at this point ``w`` is a random value and ``b`` is 0.0. This is Paddle's initialization strategy, and a common one in this field. (If you like, you can use other initialization schemes; later you will also see that choosing a good initialization strategy matters a lot for deep learning tasks.)
......@@ -119,12 +119,12 @@ world程序。
.. parsed-literal::
w before optimize: -1.7107375860214233
w before optimize: -1.696260690689087
b before optimize: 0.0
Telling Paddle how to learn
--------------------
---------------------------
We have defined the neural network (a very simple one), but we still need to tell Paddle how to **learn**, so that the parameters ``w`` and ``b`` can be obtained.
......@@ -143,7 +143,7 @@ descent)作为优化算法(传给\ ``paddle.optimizer.SGD``\ 的参数\ ``lear
sgd_optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters = linear.parameters())
Running the optimization algorithm
---------------
----------------------------------
Next, we let Paddle run this optimization algorithm. This is the step-by-step parameter-adjustment process introduced earlier; you should see the loss value (the ``loss`` measuring the gap between ``y`` and ``y_predict``) keep decreasing.
......@@ -154,8 +154,8 @@ descent)作为优化算法(传给\ ``paddle.optimizer.SGD``\ 的参数\ ``lear
y_predict = linear(x_data)
loss = mse_loss(y_predict, y_data)
loss.backward()
sgd_optimizer.minimize(loss)
linear.clear_gradients()
sgd_optimizer.step()
sgd_optimizer.clear_grad()
if i%1000 == 0:
print("epoch {} loss {}".format(i, loss.numpy()))
......@@ -165,16 +165,16 @@ descent)作为优化算法(传给\ ``paddle.optimizer.SGD``\ 的参数\ ``lear
.. parsed-literal::
epoch 0 loss [2107.3943]
epoch 1000 loss [7.8432994]
epoch 2000 loss [1.7537074]
epoch 3000 loss [0.39211753]
epoch 4000 loss [0.08767726]
finished training, loss [0.01963376]
epoch 0 loss [2094.069]
epoch 1000 loss [7.8451133]
epoch 2000 loss [1.7541145]
epoch 3000 loss [0.39221546]
epoch 4000 loss [0.08769739]
finished training, loss [0.0196382]
The parameters learned by the machine
-------------------
-------------------------------------
After this adjustment (**learning**) of the parameters ``w`` and ``b``, let's use the program below to see what the parameters have become. You should find that ``w`` is now very close to 2.0 and ``b`` is close to 10.0. They are not exactly 2 and 10, but they are decent model parameters learned from the data and can be used for future estimation. (If you like, you can let the machine learn longer to get values even closer to 2.0 and 10.0.)
......@@ -190,12 +190,12 @@ descent)作为优化算法(传给\ ``paddle.optimizer.SGD``\ 的参数\ ``lear
.. parsed-literal::
w after optimize: 2.017843246459961
b after optimize: 9.771851539611816
w after optimize: 2.0178451538085938
b after optimize: 9.771825790405273
hello paddle
---------------
------------
Through this small example, we hope you have gained a first impression of Paddle and can, as you learn more about it, use it to solve the real problems you encounter.
......
......@@ -45,7 +45,7 @@ paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.visi
.. parsed-literal::
'0.0.0'
'2.0.0-beta0'
......@@ -62,8 +62,6 @@ paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.visi
- How to customize training with the basic APIs when the ``fit`` interface does not fully meet your needs.
- How to use multiple GPUs to speed up training.
Other end-to-end example tutorials: \* TBD
3. Dataset definition, loading, and preprocessing
-------------------------------------------------
......@@ -76,28 +74,21 @@ paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.visi
.. code:: ipython3
paddle.vision.datasets.__all__
print('Vision datasets:', paddle.vision.datasets.__all__)
print('NLP datasets:', paddle.text.datasets.__all__)
.. parsed-literal::
['DatasetFolder',
'ImageFolder',
'MNIST',
'Flowers',
'Cifar10',
'Cifar100',
'VOC2012']
Vision datasets: ['DatasetFolder', 'ImageFolder', 'MNIST', 'Flowers', 'Cifar10', 'Cifar100', 'VOC2012']
NLP datasets: ['Conll05st', 'Imdb', 'Imikolov', 'Movielens', 'MovieReviews', 'UCIHousing', 'WMT14', 'WMT16']
Here we load a handwritten-digit dataset, using ``mode`` to indicate whether it is the training or the test split. The dataset interface automatically downloads the data from a remote source to the local cache directory ``~/.cache/paddle/dataset``.
.. code:: ipython3
# test dataset
# training dataset
train_dataset = vision.datasets.MNIST(mode='train')
# validation dataset
......@@ -340,9 +331,9 @@ paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.visi
5. Model training
-----------------
Training is very concise once the model is wrapped with ``paddle.Model``; we can complete the whole training process simply by calling ``Model.fit``.
Once the network structure is wrapped into a model class via the ``paddle.Model`` interface, running it is very concise; the whole training process can be completed simply by calling ``Model.fit``.
Before starting training with ``Model.fit``, we first use the ``Model.prepare`` interface to configure training in advance, including setting the optimizer, the loss computation method, and the accuracy computation method.
Before starting training with ``Model.fit``, we first use the ``Model.prepare`` interface to configure training in advance, including setting the optimizer, the loss computation method, and the accuracy computation method.
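The configuration and training calls follow the general pattern sketched below (the optimizer choice and the argument values are assumptions):

.. code:: python

    model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
                  paddle.nn.CrossEntropyLoss(),
                  paddle.metric.Accuracy())

    model.fit(train_dataset,
              val_dataset,
              epochs=2,
              batch_size=64,
              verbose=1)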
.. code:: ipython3
......@@ -398,10 +389,252 @@ paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.visi
# train.py contains the single-machine, single-GPU code
python -m paddle.distributed.launch train.py
5.3 Custom Loss
~~~~~~~~~~~~~~~
Sometimes the loss computation a particular task needs is not available among the framework's built-in Loss interfaces, or the built-in algorithm does not match your needs, and you want to define your own Loss. Here we explain how to customize a Loss. First, look at the code below:
.. code:: python
class SelfDefineLoss(paddle.nn.Layer):
    """
    1. Inherit from paddle.nn.Layer
    """
    def __init__(self):
        """
        2. Define the constructor parameters according to the actual needs of your algorithm and usage
        """
        super(SelfDefineLoss, self).__init__()

    def forward(self, input, label):
        """
        3. Implement the forward function; when called it receives two arguments: input and label
            - input: the model's forward output for a single sample or a batch of training data
            - label: the label data corresponding to a single sample or a batch of training data
        The return value is a Tensor: the loss, summed or averaged according to your custom logic
        """
        # custom computation logic written with the relevant Paddle APIs
        # output = xxxxx
        # return output
Now that we have seen how to write such custom code, let's look at a real example. Below is a custom Loss written for the image segmentation example; the main motivation there was to apply softmax along a custom axis.
.. code:: python
class SoftmaxWithCrossEntropy(paddle.nn.Layer):
    def __init__(self):
        super(SoftmaxWithCrossEntropy, self).__init__()

    def forward(self, input, label):
        loss = F.softmax_with_cross_entropy(input,
                                            label,
                                            return_softmax=False,
                                            axis=1)
        return paddle.mean(loss)
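To use such a custom loss, pass an instance of it to ``Model.prepare`` in place of a built-in loss, for example (the optimizer here is an assumption):

.. code:: python

    model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
                  SoftmaxWithCrossEntropy())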
5.4 Custom Metric
~~~~~~~~~~~~~~~~~
As with Loss, if you want a personalized implementation, you can also define a custom evaluation method through the framework. The implementation looks like this:
.. code:: python
class SelfDefineMetric(paddle.metric.Metric):
    """
    1. Inherit from paddle.metric.Metric
    """
    def __init__(self):
        """
        2. Implement the constructor with whatever parameters you need
        """
        super(SelfDefineMetric, self).__init__()

    def name(self):
        """
        3. Implement the name method, returning the name of your metric
        """
        return 'name of the custom metric'

    def compute(self, ...):
        """
        4. This step can be omitted. The compute method mainly speeds up `update`: here you can call
           Tensor computation APIs already implemented by paddle, so that they are compiled into the
           model network and computed together with the low-level C++ OPs.
        """
        return the data you want to return; it will be passed to `update` as its arguments.

    def update(self, ...):
        """
        5. Implement the update method, which computes the metric for a single training batch.
            - If `compute` is not implemented, the model outputs and the flattened label data are passed in as the arguments of `update`.
            - If `compute` is implemented, its return value is passed in as the arguments of `update`.
        """
        return acc value

    def accumulate(self):
        """
        6. Implement the accumulate method, returning the metric value computed from the batches accumulated so far.
           Data is accumulated on every `update` call; `accumulate` computes over all accumulated data and returns the result.
           The result appears in the training log of the `fit` interface.
        """
        # compute from the member variables accumulated in update, then return
        return accumulated acc value

    def reset(self):
        """
        7. Implement the reset method, which resets the metric at the end of each Epoch so the next Epoch can start fresh.
        """
        # do reset action
Let's look at a concrete example from the framework: a metric interface the framework already provides, implemented by inheriting the class and implementing the member functions described above.
.. code:: python
from paddle.metric import Metric


class Precision(Metric):
    """
    Precision (also called positive predictive value) is the fraction of
    relevant instances among the retrieved instances. Refer to
    https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers

    Noted that this class manages the precision score only for binary
    classification task.

    ......

    """

    def __init__(self, name='precision', *args, **kwargs):
        super(Precision, self).__init__(*args, **kwargs)
        self.tp = 0  # true positive
        self.fp = 0  # false positive
        self._name = name

    def update(self, preds, labels):
        """
        Update the states based on the current mini-batch prediction results.

        Args:
            preds (numpy.ndarray): The prediction result, usually the output
                of two-class sigmoid function. It should be a vector (column
                vector or row vector) with data type: 'float64' or 'float32'.
            labels (numpy.ndarray): The ground truth (labels),
                the shape should keep the same as preds.
                The data type is 'int32' or 'int64'.
        """
        if isinstance(preds, paddle.Tensor):
            preds = preds.numpy()
        elif not _is_numpy_(preds):
            raise ValueError("The 'preds' must be a numpy ndarray or Tensor.")
        if isinstance(labels, paddle.Tensor):
            labels = labels.numpy()
        elif not _is_numpy_(labels):
            raise ValueError("The 'labels' must be a numpy ndarray or Tensor.")

        sample_num = labels.shape[0]
        preds = np.floor(preds + 0.5).astype("int32")

        for i in range(sample_num):
            pred = preds[i]
            label = labels[i]
            if pred == 1:
                if pred == label:
                    self.tp += 1
                else:
                    self.fp += 1

    def reset(self):
        """
        Resets all of the metric state.
        """
        self.tp = 0
        self.fp = 0

    def accumulate(self):
        """
        Calculate the final precision.

        Returns:
            A scaler float: results of the calculated precision.
        """
        ap = self.tp + self.fp
        return float(self.tp) / ap if ap != 0 else .0

    def name(self):
        """
        Returns metric name
        """
        return self._name
5.5 Custom Callback
~~~~~~~~~~~~~~~~~~~
The ``callbacks`` parameter of ``fit`` accepts a Callback instance, which is invoked before and after each training epoch and each batch. Through callbacks you can collect data and parameters during training, or implement custom behavior.
.. code:: python
class SelfDefineCallback(paddle.callbacks.Callback):
    """
    1. Inherit from paddle.callbacks.Callback
    2. Implement the following member methods according to your needs:
        def on_train_begin(self, logs=None)               called in `Model.fit` before training starts
        def on_train_end(self, logs=None)                 called in `Model.fit` after training ends
        def on_eval_begin(self, logs=None)                called by `Model.evaluate` before evaluation starts
        def on_eval_end(self, logs=None)                  called by `Model.evaluate` after evaluation ends
        def on_test_begin(self, logs=None)                called in `Model.predict` before prediction starts
        def on_test_end(self, logs=None)                  called in `Model.predict` after prediction ends
        def on_epoch_begin(self, epoch, logs=None)        called in `Model.fit` before each training epoch
        def on_epoch_end(self, epoch, logs=None)          called in `Model.fit` after each training epoch
        def on_train_batch_begin(self, step, logs=None)   called in `Model.fit` and `Model.train_batch` before each training batch
        def on_train_batch_end(self, step, logs=None)     called in `Model.fit` and `Model.train_batch` after each training batch
        def on_eval_batch_begin(self, step, logs=None)    called in `Model.evaluate` and `Model.eval_batch` before each evaluation batch
        def on_eval_batch_end(self, step, logs=None)      called in `Model.evaluate` and `Model.eval_batch` after each evaluation batch
        def on_test_batch_begin(self, step, logs=None)    called in `Model.predict` and `Model.test_batch` before each prediction batch
        def on_test_batch_end(self, step, logs=None)      called in `Model.predict` and `Model.test_batch` after each prediction batch
    """
    def __init__(self):
        super(SelfDefineCallback, self).__init__()
    # define your own member methods as needed
Let's look at a real example from the framework: the built-in ModelCheckpoint callback, which makes it easy to save the model obtained after each training epoch during fit.
.. code:: python
class ModelCheckpoint(Callback):
    def __init__(self, save_freq=1, save_dir=None):
        self.save_freq = save_freq
        self.save_dir = save_dir

    def on_epoch_begin(self, epoch=None, logs=None):
        self.epoch = epoch

    def _is_save(self):
        return self.model and self.save_dir and ParallelEnv().local_rank == 0

    def on_epoch_end(self, epoch, logs=None):
        if self._is_save() and self.epoch % self.save_freq == 0:
            path = '{}/{}'.format(self.save_dir, epoch)
            print('save checkpoint at {}'.format(os.path.abspath(path)))
            self.model.save(path)

    def on_train_end(self, logs=None):
        if self._is_save():
            path = '{}/final'.format(self.save_dir)
            print('save checkpoint at {}'.format(os.path.abspath(path)))
            self.model.save(path)
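Using it is then a matter of passing an instance through the ``callbacks`` parameter of ``fit``, for example (the directory and frequency are illustrative values):

.. code:: python

    checkpoint = paddle.callbacks.ModelCheckpoint(save_freq=1, save_dir='./checkpoints')
    model.fit(train_dataset, val_dataset, epochs=2, callbacks=[checkpoint])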
6. Model evaluation
-------------------
A trained model can be evaluated with the ``evaluate`` interface.
A trained model can be evaluated with the ``evaluate`` interface: once the dataset used for evaluation is defined, a simple call to ``evaluate`` completes the evaluation, and the results are computed and returned according to the loss and metric defined in ``prepare``.
The return format is a dictionary:

- loss only: ``{'loss': xxx}``
- loss plus one metric: ``{'loss': xxx, 'metric name': xxx}``
- loss plus several metrics: ``{'loss': xxx, 'metric name': xxx, 'metric name': xxx}``
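A minimal sketch of such an evaluation call, assuming the ``model`` and ``val_dataset`` defined earlier:

.. code:: python

    result = model.evaluate(val_dataset, verbose=1)
    print(result)  # e.g. {'loss': [...], 'acc': ...} depending on prepare()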
.. code:: ipython3
......@@ -410,19 +643,40 @@ paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.visi
7. Model prediction
-------------------
The high-level API provides a ``predict`` interface that lets users run model prediction on test data.
The high-level API provides a ``predict`` interface for conveniently verifying a trained model: feed the data to be predicted into the interface, and it returns the prediction results computed by the model.
The return format is a list whose length matches the number of model outputs:

- single-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n)]
- multi-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), (numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), ...]

Each numpy_ndarray_n is the prediction obtained by running the model on the corresponding original data; the count matches the size of the prediction dataset.
.. code:: ipython3
pred_result = model.predict(val_dataset)
7.1 Predicting with multiple GPUs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes there is so much data to predict that a single GPU cannot meet the time requirements; the ``predict`` interface therefore also supports running in multi-GPU mode.
It is just as easy to use: no code changes are needed; simply launch the prediction script with ``launch``.
.. code:: bash
$ python3 -m paddle.distributed.launch infer.py
infer.py is the program containing the model.predict code.
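A sketch of what such an ``infer.py`` might contain (the network, checkpoint path, and dataset here are illustrative assumptions):

.. code:: python

    # infer.py
    import paddle
    from paddle.vision.models import LeNet  # an assumed example network

    paddle.disable_static()
    model = paddle.Model(LeNet())
    model.load('./checkpoints/final')        # illustrative checkpoint path
    val_dataset = paddle.vision.datasets.MNIST(mode='test')
    pred_result = model.predict(val_dataset)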
8. Model deployment
-------------------
8.1 Saving the model
~~~~~~~~~~~~~~~~~~~~
Once training and validation meet our expectations, the ``save`` interface can be used to save the model for later fine-tuning or for inference deployment.
Once training and validation meet our expectations, the ``save`` interface can be used to save the model for later fine-tuning (interface parameter training=True) or for inference deployment (interface parameter training=False).
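For example (a sketch; the paths are illustrative):

.. code:: python

    # save for later fine-tuning: parameters plus optimizer state
    model.save('checkpoint/final')
    # save for inference deployment (training=False)
    model.save('inference_model', training=False)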
.. code:: ipython3
......
......@@ -14,19 +14,19 @@
"metadata": {},
"source": [
"## 环境\n",
"本教程基于paddle-develop编写,如果您的环境不是本版本,请先安装paddle-develop版本。"
"本教程基于paddle-2.0Beta编写,如果您的环境不是此版本,请先安装paddle-2.0Beta版本,使用命令:pip3 install paddlepaddle==2.0Beta。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n"
"2.0.0-beta0\n"
]
}
],
......
......@@ -7,7 +7,8 @@
Environment
-----------
This tutorial was written against paddle-develop; if your environment is not this version, please install the paddle-develop version first.
This tutorial is based on paddle-2.0Beta; if your environment is not this version, please install paddle-2.0Beta first, using the command: pip3
install paddlepaddle==2.0Beta
.. code:: ipython3
......@@ -25,7 +26,7 @@
.. parsed-literal::
0.0.0
2.0.0-beta0
数据集
......