# PyTorch: Tensors and autograd

A third-order polynomial, trained to predict `y = sin(x)` from `-pi` to `pi` by minimizing the squared Euclidean distance.

This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients.

A PyTorch Tensor represents a node in a computational graph. If `x` is a Tensor with `x.requires_grad=True`, then `x.grad` is another Tensor holding the gradient of `x` with respect to some scalar value.

```py
import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a zero-dimensional Tensor;
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```

[Download Python source code: `polynomial_autograd.py`](https://pytorch.org/tutorials/_downloads/2956e289de4f5fdd59114171805b23d2/polynomial_autograd.py)

[Download Jupyter notebook: `polynomial_autograd.ipynb`](https://pytorch.org/tutorials/_downloads/e1d4d0ca7bd75ea2fff8032fcb79076e/polynomial_autograd.ipynb)
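As a closing note, here is a minimal, self-contained sketch of the autograd behavior described at the top of this tutorial: a Tensor created with `requires_grad=True` receives its gradient in `.grad` after `backward()`. The input values below are illustrative and not part of the original tutorial; the sketch simply compares autograd's result against the analytic derivative.

```py
import torch

# For y = sum(x^3), the gradient dy/dx is 3 x^2 elementwise.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 3).sum()
y.backward()

print(x.grad)               # tensor([ 3., 12., 27.])
print(3 * x.detach() ** 2)  # analytic gradient, same values
```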