Commit 9ad21775 authored by: wizardforcel

2021-01-15 22:35:23

Parent: 9e8376bc
Building a neural network architecture as a sequence of predefined modules can be achieved in just a few lines, as shown here:
```py
import torch.nn as nn
model = nn.Sequential(nn.Linear(input_units, hidden_units), \
                      nn.ReLU(), \
                      nn.Linear(hidden_units, output_units), \
                      nn.Sigmoid())
loss_funct = nn.MSELoss()
```
First, the module is imported. Then, the model architecture is defined. `input_units` refers to the number of features the input data contains, `hidden_units` refers to the number of nodes in the hidden layer, and `output_units` refers to the number of nodes in the output layer.
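For instance, here is a minimal sketch of how these variables might be defined before building the model (the value of `hidden_units` is an illustrative assumption, not taken from the original text):

```py
# Illustrative values only; hidden_units = 5 is an assumption for this sketch
input_units = 10   # number of features in the input data
hidden_units = 5   # number of nodes in the hidden layer
output_units = 1   # number of nodes in the output layer
```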
1. Import **torch** as well as the **nn** module from PyTorch:
```py
import torch
import torch.nn as nn
```
Note
2. Define the number of features of the input data as `10` (**input_units**) and the number of nodes of the output layer as `1` (**output_units**):
```py
input_units = 10
output_units = 1
```
3. Using the **Sequential()** container, define a single-layer network architecture and store it in a variable named **model**. Make sure to define one layer, followed by a **Sigmoid** activation function:
```py
model = nn.Sequential(nn.Linear(input_units, output_units), \
                      nn.Sigmoid())
```
4. Print your model to verify that it was created accordingly:
```py
print(model)
```
Running the code shown in the preceding snippet will display the following output:
```py
Sequential(
  (0): Linear(in_features=10, out_features=1, bias=True)
  (1): Sigmoid()
)
```
5. Define the loss function as the MSE and store it in a variable named **loss_funct**:
```py
loss_funct = nn.MSELoss()
```
6. Print your loss function to verify that it was created accordingly:
```py
print(loss_funct)
```
Running the preceding code snippet will display the following output:
```py
MSELoss()
```
Note
To set the optimizer to be used, the following line of code is enough after the package has been imported:
```py
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
Here, the `model.parameters()` argument refers to the weights and biases of the previously created model, while `lr` refers to the learning rate, which has been set to `0.01`.
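As a quick sanity check (an addition for illustration, not part of the original example), you can list the tensors that `model.parameters()` hands to the optimizer:

```py
# Inspect the tensors the optimizer will update: one weight matrix and one
# bias vector per Linear layer of the previously created model
for name, param in model.named_parameters():
    print(name, param.shape, param.requires_grad)
```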
The `#` symbol in the following code snippet denotes a code comment, and the triple quotes (`"""`) mark the start and end of a multi-line comment. Comments have been added to the code to help explain specific bits of logic.
```py
for i in range(100):
    # Call to the model to perform a prediction
    y_pred = model(x)
    # Calculation of loss function based on y_pred and y
    loss = loss_funct(y_pred, y)
    # Zero the gradients so that previous ones don't accumulate
    optimizer.zero_grad()
    # Calculate the gradients of the loss function
    loss.backward()
    """
    Call to the optimizer to perform an update
    of the parameters
    """
    optimizer.step()
```
In each iteration, the model is called to obtain a prediction (`y_pred`). This prediction and the ground-truth values (`y`) are fed to the loss function to determine how well the model approximates the ground truth.
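To see why the call to `optimizer.zero_grad()` is needed, consider this minimal sketch (an illustrative addition): by default, PyTorch adds new gradients onto any existing ones, so they must be reset at each iteration:

```py
import torch

# By default, PyTorch accumulates gradients across backward() calls
w = torch.ones(1, requires_grad=True)
(w * 2).sum().backward()
print(w.grad)  # tensor([2.])
(w * 2).sum().backward()
print(w.grad)  # tensor([4.]) -- the second gradient accumulated onto the first
```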
1. Import **torch**, the **optim** package from PyTorch, and **matplotlib**:
```py
import torch
import torch.optim as optim
import matplotlib.pyplot as plt
```
2. Create dummy input data (`x`) of random values and dummy target data (`y`) that only contains zeros and ones. Tensor `x` should have a size of (**20, 10**), while the size of `y` should be (**20, 1**):
```py
x = torch.randn(20, 10)
y = torch.randint(0, 2, (20, 1)).type(torch.FloatTensor)
```
3. Define the optimization algorithm as the Adam optimizer. Set the learning rate equal to `0.01`:
```py
optimizer = optim.Adam(model.parameters(), lr=0.01)
```
4. Run the optimization for 20 iterations, saving the value of the loss in a variable. Every five iterations, print the loss value:
```py
losses = []
for i in range(20):
    y_pred = model(x)
    loss = loss_funct(y_pred, y)
    losses.append(loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if i % 5 == 0:
        print(i, loss.item())
```
The output should look as follows:
```py
0 0.25244325399398804
5 0.23448510468006134
10 0.21932794153690338
15 0.20741790533065796
```
The preceding output displays the epoch number as well as the value of the loss function, which, as can be seen, is decreasing. This means that the training process is minimizing the loss function, which in turn means that the model is able to learn the relationship between the input features and the target.
5. Make a line plot to display the value of the loss function in each epoch:
```py
plt.plot(range(0, 20), losses)
plt.show()
```
The output should be a line plot showing the value of the loss decreasing across the 20 iterations.