PaddlePaddle / book
Commit bf87006a (unverified), authored by Chen Long on Sep 12, 2020 and committed via GitHub.
fix docs for beta and change docs style (#900)
Parent: 02b3916d
Showing 8 changed files with 160 additions and 179 deletions (+160 -179)
paddle2.0_docs/convnet_image_classification/convnet_image_classification.ipynb (+7 -7)
paddle2.0_docs/dynamic_graph/dynamic_graph.ipynb (+7 -7)
paddle2.0_docs/hello_paddle/hello_paddle.ipynb (+10 -10)
paddle2.0_docs/image_classification/mnist_lenet_classification.ipynb (+66 -86)
paddle2.0_docs/image_search/image_search.ipynb (+9 -9)
paddle2.0_docs/imdb_bow_classification/imdb_bow_classification.ipynb (+8 -8)
paddle2.0_docs/n_gram_model/n_gram_model.ipynb (+41 -40)
paddle2.0_docs/seq2seq_with_attention/seq2seq_with_attention.ipynb (+12 -12)
paddle2.0_docs/convnet_image_classification/convnet_image_classification.ipynb
...
...
@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Environment setup\n",
+"## Environment setup\n",
"\n",
"We will use PaddlePaddle 2.0-beta."
]
...
...
@@ -46,7 +46,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Load and explore the dataset\n",
+"## Load and explore the dataset\n",
"\n",
"We will use the API provided by PaddlePaddle to download the dataset and prepare a data iterator for the subsequent training. The cifar10 dataset consists of 60000 color images of size 32 * 32: 50000 form the training set and the other 10000 form the test set. The images are divided into 10 classes, and our task is to train a model that classifies them correctly."
]
...
...
@@ -73,7 +73,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Explore the dataset\n",
+"## Explore the dataset\n",
"\n",
"Next we randomly pick a few images from the dataset and display them, to get an intuitive feel for the data."
]
...
...
@@ -113,7 +113,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Build the network\n",
+"## Build the network\n",
"\n",
"Next we use PaddlePaddle to define a classification network made of three 2D convolutions (`Conv2d`), each followed by a `relu` activation, two 2D pooling layers (`MaxPool2d`), and two linear layers. It maps an image of shape `(32, 32, 3)` through the convolutional network to 10 outputs, corresponding to the 10 classes."
]
...
...
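The hunk above describes the network as three `Conv2d` layers (each followed by `relu`), two `MaxPool2d` layers, and two linear layers mapping a `(32, 32, 3)` image to 10 outputs. As a rough sketch of how the spatial size shrinks through such a stack (the kernel and stride values below are assumed for illustration; the diff does not show them):

```python
def conv_out_size(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size - kernel + 2 * padding) // stride + 1

# Hypothetical layer sizes (the notebook hunk does not list them):
# three 3x3 convolutions and two 2x2 max-pools on a 32x32 input.
s = 32
s = conv_out_size(s, 3)            # conv1 -> 30
s = conv_out_size(s, 2, stride=2)  # pool1 -> 15
s = conv_out_size(s, 3)            # conv2 -> 13
s = conv_out_size(s, 2, stride=2)  # pool2 -> 6
s = conv_out_size(s, 3)            # conv3 -> 4
print(s)
```

The final 4x4 feature maps would then be flattened and fed to the two linear layers that produce the 10 class scores.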
@@ -164,7 +164,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Train the model\n",
+"## Train the model\n",
"\n",
"Next, we train the model in a loop. We will:\n",
"- Use the `paddle.optimizer.Adam` optimizer for optimization.\n",
...
...
@@ -332,7 +332,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The End\n",
+"## The End\n",
"\n",
"The example above shows that, on the cifar10 dataset, a simple convolutional neural network built with PaddlePaddle reaches an accuracy above 71%. You can also adjust the network structure and parameters to get better results."
]
...
...
@@ -354,7 +354,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.7"
+"version": "3.7.3"
}
},
"nbformat": 4,
...
...
paddle2.0_docs/dynamic_graph/dynamic_graph.ipynb
...
...
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Environment setup\n",
+"## Environment setup\n",
"\n",
"We will use PaddlePaddle 2.0-beta and confirm that dynamic-graph mode is enabled."
]
...
...
@@ -46,7 +46,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Basic usage\n",
+"## Basic usage\n",
"\n",
"In dynamic-graph mode you can run a PaddlePaddle API directly and it returns the result to python immediately. There is no longer any need to first build a computation graph and then run it with data."
]
...
...
@@ -91,7 +91,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Using python control flow\n",
+"## Using python control flow\n",
"\n",
"In dynamic-graph mode, you can use python conditionals and loops to drive the network computation (no more `cond` or `loop` OPs are needed).\n"
]
...
...
@@ -136,7 +136,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Building more flexible networks: control flow\n",
+"## Building more flexible networks: control flow\n",
"\n",
"- Dynamic graphs can be used to build more flexible networks, for example choosing different branch sub-networks via control flow, or conveniently building weight-sharing networks. Next we look at a concrete example in which the second linear transform runs with a probability of only 0.5.\n",
"- In the sequence to sequence with attention machine translation example, you will see more concretely the flexibility that dynamic graphs bring to building RNN-style networks.\n"
...
...
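The hunk above describes a forward pass in which the second linear transform runs with probability 0.5, driven by ordinary python control flow. A framework-free toy sketch of that idea (scalar "weights" with made-up values):

```python
import random

def forward(x, w1, w2):
    # First "linear" transform (toy scalar stand-in for a Linear layer).
    h = w1 * x
    # Plain python control flow decides whether the second transform runs:
    # with probability 0.5 it is skipped entirely. In dynamic-graph mode
    # no cond/loop OPs are needed for this.
    if random.random() < 0.5:
        h = w2 * h
    return h

random.seed(0)
print(forward(3.0, 2.0, 10.0))
```

Each call may therefore return either `w1 * x` or `w2 * w1 * x`, depending on the random draw.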
@@ -226,7 +226,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Building more flexible networks: shared weights\n",
+"## Building more flexible networks: shared weights\n",
"\n",
"- Dynamic graphs also make it more convenient to create weight-sharing networks; the example below shows a simple AutoEncoder with shared weights.\n",
"- You can also see a more practical use of shared parameter weights in the image search example."
...
...
@@ -276,7 +276,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The end\n",
+"## The end\n",
"\n",
"As you can see, dynamic graphs bring a more flexible and easier way to build and train networks."
]
...
...
@@ -298,7 +298,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.7"
+"version": "3.7.3"
}
},
"nbformat": 4,
...
...
paddle2.0_docs/hello_paddle/hello_paddle.ipynb
...
...
@@ -18,7 +18,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The logical difference between an ordinary program and a machine learning program\n",
+"## The logical difference between an ordinary program and a machine learning program\n",
"\n",
"As a developer, the most familiar way for you to start learning a programming language, or a deep learning framework, is probably through a hello, world program.\n",
"\n",
...
...
@@ -80,7 +80,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Import PaddlePaddle\n",
+"## Import PaddlePaddle\n",
"\n",
"To be able to use PaddlePaddle, we first need to import the `paddle` package with python's `import` statement.\n",
"At the same time, to compute on and process arrays more conveniently, we also import `numpy`.\n",
...
...
@@ -111,7 +111,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Prepare the data\n",
+"## Prepare the data\n",
"\n",
"In this machine learning task, we already know the distance each passenger travelled, `distance_travelled`, and the corresponding total fee each of these passengers paid, `total_fee`.\n",
"In machine learning tasks, an input value like `distance_travelled` is usually called `x` (or a feature, `feature`), and an output value like `total_fee` is usually called `y` (or a label, `label`).\n",
...
...
@@ -133,7 +133,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Define the model's computation with PaddlePaddle\n",
+"## Define the model's computation with PaddlePaddle\n",
"\n",
"Defining a model's computation with PaddlePaddle is, in essence, using python and the APIs PaddlePaddle provides to tell PaddlePaddle our computation rules. To recap, we want to use machine learning with PaddlePaddle to learn `w` and `b` in the formula below from the data, so that in the future, given an `x`, we can estimate the `y` value (the estimated `y` is written `y_predict`)\n",
"\n",
...
...
@@ -161,7 +161,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Get ready to run PaddlePaddle\n",
+"## Get ready to run PaddlePaddle\n",
"\n",
"At the very beginning, the machine (computer) guesses `w` and `b` at random; let's see how good its guesses are. You should see that at this point `w` is a random value and `b` is 0.0. This is PaddlePaddle's initialization strategy, and a common one in this field. (If you like, you can use other initialization schemes; as you will see later, choosing a good initialization strategy is an important part of doing deep learning well.)"
]
...
...
@@ -192,7 +192,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Tell PaddlePaddle how to learn\n",
+"## Tell PaddlePaddle how to learn\n",
"\n",
"We have defined the neural network above (albeit the simplest possible one); we still need to tell PaddlePaddle how to **learn**, so that it can obtain the parameters `w` and `b`.\n",
"\n",
...
...
@@ -217,7 +217,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Run the optimization algorithm\n",
+"## Run the optimization algorithm\n",
"\n",
"Next, we let PaddlePaddle run this optimization algorithm. This is the step-by-step parameter adjustment process introduced earlier; you should see the loss value (the `loss` measuring the gap between `y` and `y_predict`) keep decreasing."
]
...
...
@@ -259,7 +259,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The parameters the machine learned\n",
+"## The parameters the machine learned\n",
"\n",
"After this adjustment (**learning**) of the parameters `w` and `b`, we run the program below to see what the parameters have become. You should find that `w` has become a value very close to 2.0 and `b` a value close to 10.0. They are not exactly 2 and 10, but they are decent model parameters learned from the data and can be used for future predictions. (If you like, you can let the machine learn for longer and obtain parameter values even closer to 2.0 and 10.0.)"
]
...
...
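The learning loop the text describes can be sketched without the framework at all: plain gradient descent on the mean squared error recovers w close to 2 and b close to 10 from data generated by total_fee = 2 * distance_travelled + 10 (the sample values below are illustrative, not the notebook's):

```python
# Toy data consistent with the tutorial's relation total_fee = 2 * distance + 10
# (the exact sample values are made up for this sketch).
xs = [1.0, 3.0, 5.0, 9.0, 10.0, 20.0]
ys = [2.0 * x + 10.0 for x in xs]

w, b = 0.0, 0.0   # start from a blind guess, as the text describes
lr = 0.005        # learning rate
n = len(xs)
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # w ends up close to 2.0, b close to 10.0
```

The bias converges more slowly than the weight here, which is why the loop runs for several thousand steps.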
@@ -290,7 +290,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# hello paddle\n",
+"## hello paddle\n",
"\n",
"Through this small example, we hope you now have a first impression of PaddlePaddle, and that as you learn more about it you can use it to solve the problems you actually encounter."
]
...
...
@@ -335,7 +335,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.7"
+"version": "3.7.3"
}
},
"nbformat": 4,
...
...
paddle2.0_docs/image_classification/mnist_lenet_classification.ipynb
...
...
@@ -14,26 +14,27 @@
"metadata": {},
"source": [
"## Environment\n",
-"This tutorial is based on paddle-develop; if your environment is not this version, please install the paddle-develop version first."
+"This tutorial is based on paddle-2.0-beta; if your environment is not this version, please install the paddle-2.0-beta version first."
]
},
{
"cell_type": "code",
-"execution_count": 1,
+"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
-"0.0.0\n"
+"2.0.0-beta0\n"
]
}
],
"source": [
"import paddle\n",
"print(paddle.__version__)\n",
-"paddle.disable_static()"
+"paddle.disable_static()\n",
+"# enable dynamic-graph mode"
]
},
{
...
...
@@ -46,7 +47,7 @@
},
{
"cell_type": "code",
-"execution_count": 1,
+"execution_count": 4,
"metadata": {},
"outputs": [
{
...
...
@@ -74,7 +75,7 @@
},
{
"cell_type": "code",
-"execution_count": 1,
+"execution_count": 5,
"metadata": {},
"outputs": [
{
...
...
@@ -117,7 +118,7 @@
},
{
"cell_type": "code",
-"execution_count": 18,
+"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -161,33 +162,33 @@
},
{
"cell_type": "code",
-"execution_count": 19,
+"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
-"epoch: 0, batch_id: 0, loss is: [2.3079572], acc is: [0.125]\n",
-"epoch: 0, batch_id: 100, loss is: [1.7078608], acc is: [0.828125]\n",
-"epoch: 0, batch_id: 200, loss is: [1.5642334], acc is: [0.90625]\n",
-"epoch: 0, batch_id: 300, loss is: [1.7024238], acc is: [0.78125]\n",
-"epoch: 0, batch_id: 400, loss is: [1.5536337], acc is: [0.921875]\n",
-"epoch: 0, batch_id: 500, loss is: [1.6908336], acc is: [0.828125]\n",
-"epoch: 0, batch_id: 600, loss is: [1.5622432], acc is: [0.921875]\n",
-"epoch: 0, batch_id: 700, loss is: [1.5251796], acc is: [0.953125]\n",
-"epoch: 0, batch_id: 800, loss is: [1.5698484], acc is: [0.890625]\n",
-"epoch: 0, batch_id: 900, loss is: [1.5524453], acc is: [0.9375]\n",
-"epoch: 1, batch_id: 0, loss is: [1.6443151], acc is: [0.84375]\n",
-"epoch: 1, batch_id: 100, loss is: [1.5547533], acc is: [0.90625]\n",
-"epoch: 1, batch_id: 200, loss is: [1.5019028], acc is: [1.]\n",
-"epoch: 1, batch_id: 300, loss is: [1.4820204], acc is: [1.]\n",
-"epoch: 1, batch_id: 400, loss is: [1.5215418], acc is: [0.984375]\n",
-"epoch: 1, batch_id: 500, loss is: [1.4972374], acc is: [1.]\n",
-"epoch: 1, batch_id: 600, loss is: [1.4930981], acc is: [0.984375]\n",
-"epoch: 1, batch_id: 700, loss is: [1.4971689], acc is: [0.984375]\n",
-"epoch: 1, batch_id: 800, loss is: [1.4611597], acc is: [1.]\n",
-"epoch: 1, batch_id: 900, loss is: [1.4903957], acc is: [0.984375]\n"
+"epoch: 0, batch_id: 0, loss is: [2.3037894], acc is: [0.140625]\n",
+"epoch: 0, batch_id: 100, loss is: [1.6175328], acc is: [0.9375]\n",
+"epoch: 0, batch_id: 200, loss is: [1.5388051], acc is: [0.96875]\n",
+"epoch: 0, batch_id: 300, loss is: [1.5251061], acc is: [0.96875]\n",
+"epoch: 0, batch_id: 400, loss is: [1.4678856], acc is: [1.]\n",
+"epoch: 0, batch_id: 500, loss is: [1.4944503], acc is: [0.984375]\n",
+"epoch: 0, batch_id: 600, loss is: [1.5365536], acc is: [0.96875]\n",
+"epoch: 0, batch_id: 700, loss is: [1.4885054], acc is: [0.984375]\n",
+"epoch: 0, batch_id: 800, loss is: [1.4872254], acc is: [0.984375]\n",
+"epoch: 0, batch_id: 900, loss is: [1.4884174], acc is: [0.984375]\n",
+"epoch: 1, batch_id: 0, loss is: [1.4776722], acc is: [1.]\n",
+"epoch: 1, batch_id: 100, loss is: [1.4751343], acc is: [1.]\n",
+"epoch: 1, batch_id: 200, loss is: [1.4772581], acc is: [1.]\n",
+"epoch: 1, batch_id: 300, loss is: [1.4918218], acc is: [0.984375]\n",
+"epoch: 1, batch_id: 400, loss is: [1.5038397], acc is: [0.96875]\n",
+"epoch: 1, batch_id: 500, loss is: [1.5088196], acc is: [0.96875]\n",
+"epoch: 1, batch_id: 600, loss is: [1.4961376], acc is: [0.984375]\n",
+"epoch: 1, batch_id: 700, loss is: [1.4755756], acc is: [1.]\n",
+"epoch: 1, batch_id: 800, loss is: [1.4921497], acc is: [0.984375]\n",
+"epoch: 1, batch_id: 900, loss is: [1.4944404], acc is: [1.]\n"
]
}
],
...
...
@@ -213,8 +214,8 @@
" avg_loss.backward()\n",
" if batch_id % 100 == 0:\n",
" print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, avg_loss.numpy(), avg_acc.numpy()))\n",
-"    optim.minimize(avg_loss)\n",
-"    model.clear_gradients()\n",
+"    optim.step()\n",
+"    optim.clear_grad()\n",
"model = LeNet()\n",
"train(model)"
]
...
...
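The change above swaps the 1.x calls `optim.minimize(avg_loss)` and `model.clear_gradients()` for the 2.0 pattern `optim.step()` then `optim.clear_grad()`: apply the accumulated gradients, then reset them before the next batch. A minimal pure-Python stand-in for that calling pattern (an illustration only, not the paddle implementation):

```python
class ToySGD:
    """Mirrors the two optimizer calls in the diff above: step() applies
    gradients, clear_grad() resets them for the next batch."""

    def __init__(self, params, lr=0.1):
        self.params = params  # list of dicts: {"value": float, "grad": float}
        self.lr = lr

    def step(self):
        for p in self.params:
            p["value"] -= self.lr * p["grad"]

    def clear_grad(self):
        for p in self.params:
            p["grad"] = 0.0

p = {"value": 1.0, "grad": 0.5}
opt = ToySGD([p])
opt.step()        # value becomes 1.0 - 0.1 * 0.5
opt.clear_grad()  # grad reset to 0.0 before the next batch
print(p)
```

Forgetting the `clear_grad()` call would make gradients accumulate across batches, which is why the two calls appear together in the training loop.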
@@ -229,21 +230,21 @@
},
{
"cell_type": "code",
-"execution_count": 20,
+"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
-"batch_id: 0, loss is: [1.4767745], acc is: [1.]\n",
-"batch_id: 20, loss is: [1.4841802], acc is: [0.984375]\n",
-"batch_id: 40, loss is: [1.4997194], acc is: [1.]\n",
-"batch_id: 60, loss is: [1.4895413], acc is: [1.]\n",
-"batch_id: 80, loss is: [1.4668798], acc is: [1.]\n",
-"batch_id: 100, loss is: [1.4611752], acc is: [1.]\n",
-"batch_id: 120, loss is: [1.4613602], acc is: [1.]\n",
-"batch_id: 140, loss is: [1.4923686], acc is: [1.]\n"
+"batch_id: 0, loss is: [1.4915928], acc is: [1.]\n",
+"batch_id: 20, loss is: [1.4818308], acc is: [1.]\n",
+"batch_id: 40, loss is: [1.5006062], acc is: [0.984375]\n",
+"batch_id: 60, loss is: [1.521233], acc is: [1.]\n",
+"batch_id: 80, loss is: [1.4772738], acc is: [1.]\n",
+"batch_id: 100, loss is: [1.4755945], acc is: [1.]\n",
+"batch_id: 120, loss is: [1.4746133], acc is: [1.]\n",
+"batch_id: 140, loss is: [1.4786345], acc is: [1.]\n"
]
}
],
...
...
@@ -287,7 +288,7 @@
},
{
"cell_type": "code",
-"execution_count": 21,
+"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -315,7 +316,7 @@
},
{
"cell_type": "code",
-"execution_count": 22,
+"execution_count": 16,
"metadata": {},
"outputs": [
{
...
...
@@ -323,30 +324,17 @@
"output_type": "stream",
"text": [
"Epoch 1/2\n",
-"step 100/938 - loss: 1.5644 - acc_top1: 0.6281 - acc_top2: 0.7145 - 14ms/step\n",
-"step 200/938 - loss: 1.6221 - acc_top1: 0.7634 - acc_top2: 0.8380 - 13ms/step\n",
-"step 300/938 - loss: 1.5123 - acc_top1: 0.8215 - acc_top2: 0.8835 - 13ms/step\n",
-"step 400/938 - loss: 1.4791 - acc_top1: 0.8530 - acc_top2: 0.9084 - 13ms/step\n",
-"step 500/938 - loss: 1.4904 - acc_top1: 0.8733 - acc_top2: 0.9235 - 13ms/step\n",
-"step 600/938 - loss: 1.5101 - acc_top1: 0.8875 - acc_top2: 0.9341 - 13ms/step\n",
-"step 700/938 - loss: 1.4642 - acc_top1: 0.8983 - acc_top2: 0.9417 - 13ms/step\n",
-"step 800/938 - loss: 1.4789 - acc_top1: 0.9069 - acc_top2: 0.9477 - 13ms/step\n",
-"step 900/938 - loss: 1.4773 - acc_top1: 0.9135 - acc_top2: 0.9523 - 13ms/step\n",
-"step 938/938 - loss: 1.4714 - acc_top1: 0.9157 - acc_top2: 0.9538 - 13ms/step\n",
-"save checkpoint at /Users/chenlong/online_repo/book/paddle2.0_docs/image_classification/mnist_checkpoint/0\n",
+"step 200/938 - loss: 1.5219 - acc_top1: 0.9829 - acc_top2: 0.9965 - 14ms/step\n",
+"step 400/938 - loss: 1.4765 - acc_top1: 0.9825 - acc_top2: 0.9958 - 13ms/step\n",
+"step 600/938 - loss: 1.4624 - acc_top1: 0.9823 - acc_top2: 0.9953 - 13ms/step\n",
+"step 800/938 - loss: 1.4768 - acc_top1: 0.9829 - acc_top2: 0.9955 - 13ms/step\n",
+"step 938/938 - loss: 1.4612 - acc_top1: 0.9836 - acc_top2: 0.9956 - 13ms/step\n",
"Epoch 2/2\n",
-"step 100/938 - loss: 1.4863 - acc_top1: 0.9695 - acc_top2: 0.9897 - 13ms/step\n",
-"step 200/938 - loss: 1.4883 - acc_top1: 0.9707 - acc_top2: 0.9912 - 13ms/step\n",
-"step 300/938 - loss: 1.4695 - acc_top1: 0.9720 - acc_top2: 0.9910 - 13ms/step\n",
-"step 400/938 - loss: 1.4628 - acc_top1: 0.9720 - acc_top2: 0.9915 - 13ms/step\n",
-"step 500/938 - loss: 1.5079 - acc_top1: 0.9727 - acc_top2: 0.9918 - 13ms/step\n",
-"step 600/938 - loss: 1.4803 - acc_top1: 0.9727 - acc_top2: 0.9919 - 13ms/step\n",
-"step 700/938 - loss: 1.4612 - acc_top1: 0.9732 - acc_top2: 0.9923 - 13ms/step\n",
-"step 800/938 - loss: 1.4755 - acc_top1: 0.9732 - acc_top2: 0.9923 - 13ms/step\n",
-"step 900/938 - loss: 1.4698 - acc_top1: 0.9732 - acc_top2: 0.9922 - 13ms/step\n",
-"step 938/938 - loss: 1.4764 - acc_top1: 0.9734 - acc_top2: 0.9923 - 13ms/step\n",
-"save checkpoint at /Users/chenlong/online_repo/book/paddle2.0_docs/image_classification/mnist_checkpoint/1\n",
-"save checkpoint at /Users/chenlong/online_repo/book/paddle2.0_docs/image_classification/mnist_checkpoint/final\n"
+"step 200/938 - loss: 1.4705 - acc_top1: 0.9834 - acc_top2: 0.9959 - 13ms/step\n",
+"step 400/938 - loss: 1.4620 - acc_top1: 0.9833 - acc_top2: 0.9960 - 13ms/step\n",
+"step 600/938 - loss: 1.4613 - acc_top1: 0.9830 - acc_top2: 0.9960 - 13ms/step\n",
+"step 800/938 - loss: 1.4763 - acc_top1: 0.9831 - acc_top2: 0.9960 - 13ms/step\n",
+"step 938/938 - loss: 1.4924 - acc_top1: 0.9834 - acc_top2: 0.9959 - 13ms/step\n"
]
}
],
...
...
@@ -354,8 +342,8 @@
"model.fit(train_dataset,\n",
" epochs=2,\n",
" batch_size=64,\n",
-"    log_freq=100,\n",
-"    save_dir='mnist_checkpoint')"
+"    log_freq=200\n",
+"    )"
]
},
{
...
...
@@ -367,7 +355,7 @@
},
{
"cell_type": "code",
-"execution_count": 23,
+"execution_count": 17,
"metadata": {},
"outputs": [
{
...
...
@@ -375,38 +363,30 @@
"output_type": "stream",
"text": [
"Eval begin...\n",
-"step 10/157 - loss: 1.5238 - acc_top1: 0.9750 - acc_top2: 0.9938 - 7ms/step\n",
-"step 20/157 - loss: 1.5143 - acc_top1: 0.9727 - acc_top2: 0.9922 - 7ms/step\n",
-"step 30/157 - loss: 1.5290 - acc_top1: 0.9698 - acc_top2: 0.9932 - 7ms/step\n",
-"step 40/157 - loss: 1.4624 - acc_top1: 0.9684 - acc_top2: 0.9930 - 7ms/step\n",
-"step 50/157 - loss: 1.4771 - acc_top1: 0.9697 - acc_top2: 0.9925 - 7ms/step\n",
-"step 60/157 - loss: 1.5066 - acc_top1: 0.9701 - acc_top2: 0.9922 - 6ms/step\n",
-"step 70/157 - loss: 1.4804 - acc_top1: 0.9699 - acc_top2: 0.9920 - 6ms/step\n",
-"step 80/157 - loss: 1.4718 - acc_top1: 0.9707 - acc_top2: 0.9930 - 6ms/step\n",
-"step 90/157 - loss: 1.4874 - acc_top1: 0.9726 - acc_top2: 0.9934 - 6ms/step\n",
-"step 100/157 - loss: 1.4612 - acc_top1: 0.9736 - acc_top2: 0.9936 - 6ms/step\n",
-"step 110/157 - loss: 1.4612 - acc_top1: 0.9746 - acc_top2: 0.9938 - 6ms/step\n",
-"step 120/157 - loss: 1.4763 - acc_top1: 0.9763 - acc_top2: 0.9941 - 6ms/step\n",
-"step 130/157 - loss: 1.4786 - acc_top1: 0.9764 - acc_top2: 0.9935 - 6ms/step\n",
-"step 140/157 - loss: 1.4612 - acc_top1: 0.9775 - acc_top2: 0.9939 - 6ms/step\n",
-"step 150/157 - loss: 1.4894 - acc_top1: 0.9785 - acc_top2: 0.9943 - 6ms/step\n",
-"step 157/157 - loss: 1.4612 - acc_top1: 0.9777 - acc_top2: 0.9941 - 6ms/step\n",
+"step 20/157 - loss: 1.5246 - acc_top1: 0.9773 - acc_top2: 0.9969 - 6ms/step\n",
+"step 40/157 - loss: 1.4622 - acc_top1: 0.9758 - acc_top2: 0.9961 - 6ms/step\n",
+"step 60/157 - loss: 1.5241 - acc_top1: 0.9763 - acc_top2: 0.9951 - 6ms/step\n",
+"step 80/157 - loss: 1.4612 - acc_top1: 0.9787 - acc_top2: 0.9959 - 6ms/step\n",
+"step 100/157 - loss: 1.4612 - acc_top1: 0.9823 - acc_top2: 0.9967 - 5ms/step\n",
+"step 120/157 - loss: 1.4612 - acc_top1: 0.9835 - acc_top2: 0.9966 - 5ms/step\n",
+"step 140/157 - loss: 1.4612 - acc_top1: 0.9844 - acc_top2: 0.9969 - 5ms/step\n",
+"step 157/157 - loss: 1.4612 - acc_top1: 0.9838 - acc_top2: 0.9966 - 5ms/step\n",
"Eval samples: 10000\n"
]
},
{
"data": {
"text/plain": [
-"{'loss': [1.4611504], 'acc_top1': 0.9777, 'acc_top2': 0.9941}"
+"{'loss': [1.4611504], 'acc_top1': 0.9838, 'acc_top2': 0.9966}"
]
},
-"execution_count": 23,
+"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
-"model.evaluate(test_dataset, batch_size=64)"
+"model.evaluate(test_dataset, log_freq=20, batch_size=64)"
]
},
{
...
...
paddle2.0_docs/image_search/image_search.ipynb
...
...
@@ -14,7 +14,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Brief introduction\n",
+"## Brief introduction\n",
"\n",
"Image search is a deep learning application with a wide range of use cases. Today, whether for retrieving engineering drawings or for finding similar images on the internet, deep learning algorithms can retrieve images similar to a given one with good results.\n",
"\n",
...
...
@@ -25,7 +25,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Environment setup\n",
+"## Environment setup\n",
"\n",
"This example is based on version 2.0 of the PaddlePaddle open-source framework."
]
...
...
@@ -68,7 +68,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Dataset\n",
+"## Dataset\n",
"\n",
"This example uses the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. This classic dataset consists of 50000 training images and 10000 test images, each an RGB image of width and height 32. `paddle.dataset.cifar` makes it easy to download the data, normalize it into the `(0, 1.0)` range, and provide an iterator for sequential access. We store the training and test data in two `numpy` arrays for the later training and evaluation."
]
...
...
@@ -159,7 +159,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Data exploration\n",
+"## Data exploration\n",
"\n",
"Next we pick some images at random from the training data and have a look at them."
]
...
...
@@ -221,7 +221,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Build the training data\n",
+"## Build the training data\n",
"\n",
"The training samples for an image retrieval model differ from those of a common classification task: each sample is not of the form `(image, class)` but of the form (image0, image1, similary_or_not). That is, each training sample consists of two images, and its `label` is a flag (0 or 1) indicating whether the two images are similar.\n",
"\n",
...
...
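The sample format described above, (image0, image1, similary_or_not), can be sketched framework-free. Here indices stand in for images, and the label is 1 exactly when the two picks share a class (a toy sketch, not the notebook's sampling code):

```python
import random

def make_pair(labels, rng):
    """Build one (index_a, index_b, similar_or_not) training sample:
    the label is 1 when both images belong to the same class, else 0."""
    a = rng.randrange(len(labels))
    b = rng.randrange(len(labels))
    return a, b, 1 if labels[a] == labels[b] else 0

rng = random.Random(7)
labels = [0, 0, 1, 2, 1, 0]          # made-up class labels
pairs = [make_pair(labels, rng) for _ in range(4)]
print(pairs)
```

A real pipeline would balance positive and negative pairs rather than sampling uniformly, but the sample shape is the point here.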
@@ -358,7 +358,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# A network that converts images into high-dimensional vector representations\n",
+"## A network that converts images into high-dimensional vector representations\n",
"\n",
"Our goal is to first convert images into representations in a high-dimensional space, and then compute image similarity on those representations.\n",
"The network structure below converts an image of shape `(3, 32, 32)` into a vector of shape `(8,)`. Some materials also call this vector an `Embedding`; note the distinction from word vectors in natural language processing.\n",
...
...
@@ -532,7 +532,7 @@
"id": "v2izWWI9PutF"
},
"source": [
-"# Model prediction \n",
+"## Model prediction \n",
"\n",
"After the training described above, we can use the network to compute the high-dimensional vector representation (embedding) of any image. By computing the similarity between this representation and those of the other images in the library, we can sort by similarity: the earlier an image ranks, the more similar it is.\n",
"\n",
...
...
@@ -614,7 +614,7 @@
"outputId": "3f46eb46-fa8e-4b9c-e11f-f9bbb1abf2d5"
},
"source": [
-"# The end\n",
+"## The end\n",
"\n",
"In the results shown above, the remaining images in each row are those ranked by similarity to the first image of that row. You can also adjust the network structure and hyperparameters to get better results."
]
...
...
@@ -642,7 +642,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.7"
+"version": "3.7.3"
}
},
"nbformat": 4,
paddle2.0_docs/imdb_bow_classification/imdb_bow_classification.ipynb
...
...
@@ -16,7 +16,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Environment setup\n",
+"## Environment setup\n",
"\n",
"This example is based on version 2.0 of the PaddlePaddle open-source framework."
]
...
...
@@ -46,7 +46,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Load the data\n",
+"## Load the data\n",
"\n",
"We use `paddle.dataset` to download the data, build the dictionary, and prepare the data reader. In PaddlePaddle 2.0, padding is the recommended way to align the variable-length data within a batch, so we also add a special `<pad>` word to the dictionary, used later to fill the shorter sentences in a batch."
]
...
...
@@ -115,7 +115,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Parameter settings\n",
+"## Parameter settings\n",
"\n",
"Here we set the vocabulary size, the `embedding` size, the batch_size, and so on."
]
...
...
@@ -179,7 +179,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Align the data with padding\n",
+"## Align the data with padding\n",
"\n",
"In text data, every sentence has a different length. To simplify the subsequent neural network computation, a common approach is to unify all the data in the dataset to the same length: longer samples are truncated, and shorter ones are filled with the special word `<pad>`. The code that follows processes the dataset in this way."
]
...
...
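The truncate-or-pad rule described above is easy to state as a small helper (a sketch; `PAD_ID = 0` is an assumption, the notebook takes the id of `<pad>` from its dictionary):

```python
def pad_or_truncate(seq, max_len, pad_id):
    """Unify sequence length: truncate longer sequences, fill shorter
    ones with the id of the special <pad> word."""
    if len(seq) >= max_len:
        return seq[:max_len]
    return seq + [pad_id] * (max_len - len(seq))

PAD_ID = 0  # assumed id of <pad>
print(pad_or_truncate([5, 8, 2], 5, PAD_ID))           # padded to length 5
print(pad_or_truncate([5, 8, 2, 9, 4, 1], 5, PAD_ID))  # truncated to length 5
```

Applied over a whole batch, this yields a rectangular array that the network can consume directly.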
@@ -230,7 +230,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Build the network\n",
+"## Build the network\n",
"\n",
"In this example we use a BOW network that ignores word order: after looking up each word's embedding, we simply take the average as the representation of a sentence, then apply a linear transform with `Linear`. To prevent overfitting, we also use `Dropout`."
]
...
...
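The order-free sentence representation described above, the average of the word embeddings, can be sketched in plain python (the embedding numbers are made up; the notebook uses a trainable embedding layer followed by `Linear`):

```python
def bow_sentence_vector(word_ids, embedding_table):
    """Average the embedding vectors of a sentence's words, giving a
    fixed-size bag-of-words representation that ignores word order."""
    dim = len(embedding_table[0])
    total = [0.0] * dim
    for wid in word_ids:
        for i, v in enumerate(embedding_table[wid]):
            total[i] += v
    return [t / len(word_ids) for t in total]

# A 4-word vocabulary with 2-dimensional embeddings (made-up numbers).
table = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(bow_sentence_vector([1, 2, 3], table))
```

Because only the mean survives, "good not bad" and "bad not good" map to the same vector; that is the trade-off a BOW model accepts for simplicity.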
@@ -260,7 +260,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Start training the model\n"
+"## Start training the model\n"
]
},
{
...
...
@@ -341,7 +341,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The End\n",
+"## The End\n",
"\n",
"As you can see, on this dataset two training epochs reach an accuracy of about 86%. You can also adjust the network structure and hyperparameters to get better results."
]
...
...
@@ -369,7 +369,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.7"
+"version": "3.7.3"
}
},
"nbformat": 4,
...
...
paddle2.0_docs/n_gram_model/n_gram_model.ipynb
This diff is collapsed.
paddle2.0_docs/seq2seq_with_attention/seq2seq_with_attention.ipynb
...
...
@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Environment setup\n",
+"## Environment setup\n",
"\n",
"This example tutorial is based on PaddlePaddle 2.0-beta."
]
...
...
@@ -45,7 +45,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Download the dataset\n",
+"## Download the dataset\n",
"\n",
"We will use the Chinese-English sentence pairs provided by [http://www.manythings.org/anki/](http://www.manythings.org/anki/) as the dataset for this task. It contains 23610 bilingual Chinese-English sentence pairs."
]
...
...
@@ -101,7 +101,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Build a data structure for the bilingual sentence pairs\n",
+"## Build a data structure for the bilingual sentence pairs\n",
"\n",
"Next we process the downloaded text file of bilingual sentence pairs and read them into python data structures. The following processing is done.\n",
"\n",
...
...
@@ -167,7 +167,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Create the vocabularies\n",
+"## Create the vocabularies\n",
"\n",
"Next we create separate Chinese and English vocabularies, which will be used to convert Chinese and English sentences into sequences of word IDs. The vocabularies also include the following three special words:\n",
"- `<pad>`: used to pad shorter sentences.\n",
...
...
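The vocabulary construction described above can be sketched as follows. The hunk names only `<pad>` explicitly; `<bos>` and `<eos>` are assumed here as the other two special words:

```python
def build_vocab(tokenized_sentences, specials=("<pad>", "<bos>", "<eos>")):
    """Build a word -> id table, reserving the first ids for the
    special words (a sketch; the notebook builds separate Chinese
    and English vocabularies in a similar spirit)."""
    vocab = {}
    for tok in specials:
        vocab[tok] = len(vocab)
    for sent in tokenized_sentences:
        for word in sent:
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

v = build_vocab([["i", "love", "you"], ["you", "love", "me"]])
print(v)
```

Putting the special words first keeps their ids stable regardless of the corpus, which matters because `<pad>`'s id is reused when padding batches.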
@@ -218,7 +218,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Create the padded dataset\n",
+"## Create the padded dataset\n",
"\n",
"Next, based on the vocabularies, we create the actual training dataset, organized as numpy arrays.\n",
"- All sentences are padded with `<pad>` to the same length.\n",
...
...
@@ -269,7 +269,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Create the network\n",
+"## Create the network\n",
"\n",
"We will create a model with an Encoder-AttentionDecoder architecture to complete the machine translation task.\n",
"First we set some parameters the network structure needs."
...
...
@@ -294,7 +294,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The Encoder part\n",
+"## The Encoder part\n",
"\n",
"In the encoder, we build a network that encodes the source language by following the Embedding lookup with an LSTM. Besides LSTM, PaddlePaddle's RNN family of APIs also provides SimpleRNN and GRU, and supports reversed, bidirectional, and multi-layer RNNs. You can also use the `dropout` parameter to apply `dropout` between the intermediate layers of a multi-layer RNN to prevent overfitting.\n",
"\n",
...
...
@@ -326,7 +326,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The AttentionDecoder part\n",
+"## The AttentionDecoder part\n",
"\n",
"In the decoder, we complete decoding with an LSTM equipped with an attention mechanism.\n",
"\n",
...
...
@@ -400,7 +400,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Train the model\n",
+"## Train the model\n",
"\n",
"Next we start training the model.\n",
"\n",
...
...
@@ -533,7 +533,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# Use the model for machine translation\n",
+"## Use the model for machine translation\n",
"\n",
"Depending on the computing device you use, the training above may take a varying amount of time. (On a Mac laptop, about 15~20 minutes.)\n",
"After finishing the training above, we obtain a model that can translate English into Chinese. Next we use a greedy search to perform actual machine translation with this model. (In a real task, you might need a beam search algorithm to improve the results.)"
...
...
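The greedy search mentioned above keeps only the single best next word at every step, in contrast to beam search, which keeps several candidates. A framework-free sketch with a toy scoring function standing in for the trained decoder:

```python
def greedy_decode(step_fn, bos_id, eos_id, max_len=10):
    """Greedy search: at every step keep only the highest-scoring next
    word, stopping at <eos> or the length limit. A sketch of the
    strategy the text names, not the notebook's decoder code."""
    out = []
    prev = bos_id
    for _ in range(max_len):
        scores = step_fn(prev)  # scores over the vocabulary
        prev = max(range(len(scores)), key=scores.__getitem__)
        if prev == eos_id:
            break
        out.append(prev)
    return out

# Toy "model": from word i, word i + 1 always scores highest (id 4 is <eos>).
def toy_step(prev):
    return [1.0 if j == prev + 1 else 0.0 for j in range(5)]

print(greedy_decode(toy_step, bos_id=0, eos_id=4))  # -> [1, 2, 3]
```

Because greedy search commits to one word per step, an early mistake can never be revisited; that is the weakness beam search mitigates.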
@@ -625,7 +625,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"# The End\n",
+"## The End\n",
"\n",
"You can further improve the machine translation in this example by varying the network structure, adjusting the dataset, and trying different parameters. You can also try using PaddlePaddle for other similar practical tasks."
]
...
...
@@ -647,7 +647,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.7"
+"version": "3.7.3"
}
},
"nbformat": 4,
...
...