# PyTorch: `nn`

A third-order polynomial, trained to predict `y = sin(x)` on the interval `-pi` to `pi` by minimizing squared Euclidean distance.

This implementation uses the `nn` package from PyTorch to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be too low-level for defining complex neural networks; this is where the `nn` package can help. The `nn` package defines a set of Modules, which you can think of as neural network layers that produce output from input and may hold some trainable weights.

```py
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
```

**Total running time of the script:** (0 minutes 0.000 seconds)

[Download Python source code: `polynomial_nn.py`](https://pytorch.org/tutorials/_downloads/b4767df4367deade63dc8a0d3712c1d4/polynomial_nn.py)

[Download Jupyter notebook: `polynomial_nn.ipynb`](https://pytorch.org/tutorials/_downloads/7bc167d8b8308ae65a717d7461d838fa/polynomial_nn.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
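As an optional sanity check, not part of the tutorial script above, you could evaluate the fitted polynomial at a fresh input and compare it against `math.sin`. This is only a sketch: it assumes the `model` and `p` tensors from the script are in scope and that the training loop has already run, and the name `x_test` is introduced here purely for illustration.

```py
import math
import torch

# Assumes `model` and `p` from the script above are defined and trained.
with torch.no_grad():
    x_test = torch.tensor([0.5])             # hypothetical test input
    xx_test = x_test.unsqueeze(-1).pow(p)    # shape (1, 3): (x, x^2, x^3)
    y_hat = model(xx_test)                   # shape (1,) after Flatten
    print(f"model: {y_hat.item():.4f}  sin(0.5): {math.sin(0.5):.4f}")
```

The two printed values should be close if training converged, since the cubic polynomial is being fit to `sin(x)` over `[-pi, pi]`.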