OpenDocCN / d2l-zh
Commit bf315f9a
Authored June 28, 2018 by Aston Zhang
Parent: 878d70f6

update data_iter

Showing 9 changed files with 13 additions and 12 deletions (+13 -12)
chapter_deep-learning-basics/linear-regression-scratch.md  +4 -4
chapter_deep-learning-basics/reg-scratch.md                +1 -1
chapter_optimization/adadelta-scratch.md                   +1 -1
chapter_optimization/adagrad-scratch.md                    +1 -1
chapter_optimization/adam-scratch.md                       +1 -1
chapter_optimization/gd-sgd-scratch.md                     +1 -1
chapter_optimization/momentum-scratch.md                   +1 -1
chapter_optimization/rmsprop-scratch.md                    +1 -1
gluonbook/utils.py                                         +2 -1
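Taken together, the .md changes below all make the same one-line edit: call sites of `gb.data_iter` drop the explicit sample-count argument, while `gluonbook/utils.py` changes `data_iter` itself to derive `num_examples` from the data. A sketch of the call-site change (loop bodies elided):

```python
# Before this commit, callers passed the sample count explicitly:
for X, y in gb.data_iter(batch_size, num_examples, features, labels):
    ...
# After this commit, data_iter derives num_examples from the data:
for X, y in gb.data_iter(batch_size, features, labels):
    ...
```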
chapter_deep-learning-basics/linear-regression-scratch.md

@@ -59,7 +59,7 @@ plt.show()
 ## Reading the Data
-When training a model, we need to iterate over the dataset and keep reading mini-batches of data samples. Here we define a function: each call returns the features and labels of a batch-size number of random samples.
+When training a model, we need to iterate over the dataset and keep reading mini-batches of data samples. Here we define a function: each call returns the features and labels of `batch_size` (the batch size) random samples.
 ``` {.python .input n=5}
 batch_size = 10
 ...

@@ -74,7 +74,7 @@ def data_iter(batch_size, features, labels):
         yield features.take(j), labels.take(j)
 ```
-Let us read and print the first mini-batch of data samples. The feature shape of each batch is `(10, 2)`, corresponding to the batch size and the number of inputs; the label shape is the batch size.
+Let us read and print the first mini-batch of data samples. The feature shape of each batch is (10, 2), corresponding to the batch size and the number of inputs; the label shape is the batch size.
 ``` {.python .input n=6}
 for X, y in data_iter(batch_size, features, labels):
 ...

@@ -86,7 +86,7 @@ for X, y in data_iter(batch_size, features, labels):
 ## Initializing Model Parameters
-We randomize the weights as normal random numbers with mean 0 and variance 0.01, and initialize the bias to 0.
+We initialize the weights as normal random numbers with mean 0 and standard deviation 0.01, and initialize the bias to 0.
 ``` {.python .input n=7}
 w = nd.random.normal(scale=0.01, shape=(num_inputs, 1))
 ...

@@ -123,7 +123,7 @@ def squared_loss(y_hat, y):
 ## Defining the Optimization Algorithm
-The following `sgd` function implements the model-update step of the mini-batch stochastic gradient descent algorithm introduced in the previous section.
+The following `sgd` function implements the mini-batch stochastic gradient descent algorithm introduced in the previous section. It optimizes the loss function by repeatedly updating the model parameters.
 ``` {.python .input n=11}
 def sgd(params, lr, batch_size):
 ...
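For context, here is a minimal sketch of the `sgd` function the last hunk documents. Only the signature `def sgd(params, lr, batch_size):` appears in the diff; the body below is the conventional form and should be read as an illustration, not part of the commit:

```python
def sgd(params, lr, batch_size):
    """Mini-batch stochastic gradient descent (updates parameters in place)."""
    for param in params:
        # param.grad holds the gradient of the summed mini-batch loss,
        # so dividing by batch_size makes the step use the average gradient.
        param[:] = param - lr * param.grad / batch_size
```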
chapter_deep-learning-basics/reg-scratch.md

@@ -96,7 +96,7 @@ def fit_and_plot(lambd):
     train_ls = []
     test_ls = []
     for _ in range(num_epochs):
-        for X, y in gb.data_iter(batch_size, n_train, features, labels):
+        for X, y in gb.data_iter(batch_size, features, labels):
             with autograd.record():
                 # Add the L2-norm penalty term.
                 l = loss(net(X, w, b), y) + lambd * l2_penalty(w)
 ...
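The hunk above references `l2_penalty`, whose definition lies outside this diff. A sketch of the conventional form, offered as an assumption since the diff does not show it:

```python
def l2_penalty(w):
    # Half the sum of squared weights; the 1/2 makes the gradient simply w.
    return (w ** 2).sum() / 2
```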
chapter_optimization/adadelta-scratch.md

@@ -93,7 +93,7 @@ def optimize(batch_size, rho, num_epochs, log_interval):
     ls = [loss(net(features, w, b), labels).mean().asnumpy()]
     for epoch in range(1, num_epochs + 1):
         for batch_i, (X, y) in enumerate(
-                gb.data_iter(batch_size, num_examples, features, labels)):
+                gb.data_iter(batch_size, features, labels)):
             with autograd.record():
                 l = loss(net(X, w, b), y)
             l.backward()
 ...
chapter_optimization/adagrad-scratch.md

@@ -100,7 +100,7 @@ def optimize(batch_size, lr, num_epochs, log_interval):
     ls = [loss(net(features, w, b), labels).mean().asnumpy()]
     for epoch in range(1, num_epochs + 1):
         for batch_i, (X, y) in enumerate(
-                gb.data_iter(batch_size, num_examples, features, labels)):
+                gb.data_iter(batch_size, features, labels)):
             with autograd.record():
                 l = loss(net(X, w, b), y)
             l.backward()
 ...
chapter_optimization/adam-scratch.md

@@ -109,7 +109,7 @@ def optimize(batch_size, lr, num_epochs, log_interval):
     t = 0
     for epoch in range(1, num_epochs + 1):
         for batch_i, (X, y) in enumerate(
-                gb.data_iter(batch_size, num_examples, features, labels)):
+                gb.data_iter(batch_size, features, labels)):
             with autograd.record():
                 l = loss(net(X, w, b), y)
             l.backward()
 ...
chapter_optimization/gd-sgd-scratch.md

@@ -172,7 +172,7 @@ def optimize(batch_size, lr, num_epochs, log_interval, decay_epoch):
         if decay_epoch and epoch > decay_epoch:
             lr *= 0.1
         for batch_i, (X, y) in enumerate(
-                gb.data_iter(batch_size, num_examples, features, labels)):
+                gb.data_iter(batch_size, features, labels)):
             with autograd.record():
                 l = loss(net(X, w, b), y)
             # First sum the elements of l to get the mini-batch loss sum,
             # then compute the gradients of the parameters.
 ...
chapter_optimization/momentum-scratch.md

@@ -135,7 +135,7 @@ def optimize(batch_size, lr, mom, num_epochs, log_interval):
         if epoch > 2:
             lr *= 0.1
         for batch_i, (X, y) in enumerate(
-                gb.data_iter(batch_size, num_examples, features, labels)):
+                gb.data_iter(batch_size, features, labels)):
             with autograd.record():
                 l = loss(net(X, w, b), y)
             l.backward()
 ...
chapter_optimization/rmsprop-scratch.md

@@ -89,7 +89,7 @@ def optimize(batch_size, lr, gamma, num_epochs, log_interval):
     ls = [loss(net(features, w, b), labels).mean().asnumpy()]
     for epoch in range(1, num_epochs + 1):
         for batch_i, (X, y) in enumerate(
-                gb.data_iter(batch_size, num_examples, features, labels)):
+                gb.data_iter(batch_size, features, labels)):
             with autograd.record():
                 l = loss(net(X, w, b), y)
             l.backward()
 ...
gluonbook/utils.py

@@ -377,8 +377,9 @@ def train_and_predict_rnn(rnn, is_random_iter, num_epochs, num_steps,
                           ctx, idx_to_char, char_to_idx, get_inputs, is_lstm))

-def data_iter(batch_size, num_examples, features, labels):
+def data_iter(batch_size, features, labels):
     """Iterate through a data set."""
+    num_examples = len(features)
     indices = list(range(num_examples))
     random.shuffle(indices)
     for i in range(0, num_examples, batch_size):
 ...
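Assembling the hunk, the updated `data_iter` reads roughly as below. The `yield` line is taken from the linear-regression hunk earlier in this commit; the line that builds the index batch `j` is truncated in the diff, so its exact form here is an assumption:

```python
import random

from mxnet import nd


def data_iter(batch_size, features, labels):
    """Iterate through a data set."""
    num_examples = len(features)  # now derived from the data itself
    indices = list(range(num_examples))
    random.shuffle(indices)  # samples are read in random order
    for i in range(0, num_examples, batch_size):
        # Assumed reconstruction of the truncated line; the final batch may
        # contain fewer than batch_size examples.
        j = nd.array(indices[i: min(i + batch_size, num_examples)])
        yield features.take(j), labels.take(j)
```

Call sites now pass only the batch size and the data, e.g. `for X, y in gb.data_iter(batch_size, features, labels): ...`, which is exactly the one-line change repeated across the chapters above.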