# PyTorch: Defining New Autograd Functions

> Original:
>
> Proofread by: DrDavidS

Here we fit a third-order polynomial to predict `y = sin(x)` from `-pi` to `pi`, trained by minimizing the squared Euclidean distance.

Instead of writing the polynomial as `y = a + bx + cx^2 + dx^3`, we write it as `y = a + b P_3(c + dx)`, where `P_3(x) = 1/2 (5x^3 - 3x)` is the [Legendre polynomial](https://en.wikipedia.org/wiki/Legendre_polynomials) of degree three.

This implementation computes the forward pass using PyTorch tensor operations and uses PyTorch autograd to compute gradients. In this implementation we write our own custom autograd function to perform `P'_3(x)`. By definition, `P'_3(x) = 3/2 (5x^2 - 1)`:

```py
import torch
import math


class LegendrePolynomial3(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return 0.5 * (5 * input ** 3 - 3 * input)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        return grad_output * 1.5 * (5 * input ** 2 - 1)


dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For this example, we need
# 4 weights: y = a + b * P3(c + d * x), these weights need to be initialized
# not too far from the correct result to ensure convergence.
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True)
c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True)

learning_rate = 5e-6
for t in range(2000):
    # To apply our Function, we use Function.apply method. We alias this as 'P3'.
    P3 = LegendrePolynomial3.apply

    # Forward pass: compute predicted y using operations; we compute
    # P3 using our custom autograd operation.
    y_pred = a + b * P3(c + d * x)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)')
```

**Total running time of the script:** (0 minutes 0.000 seconds)

[Download Python source code: `polynomial_custom_function.py`](https://pytorch.org/tutorials/_downloads/b7ec15fd7bec1ca3f921104cfb6a54ed/polynomial_custom_function.py)

[Download Jupyter notebook: `polynomial_custom_function.ipynb`](https://pytorch.org/tutorials/_downloads/0a64809624bf2f3eb497d30d5303a9a0/polynomial_custom_function.ipynb)
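As a side note not covered by the original tutorial, a minimal sketch of how one might sanity-check the hand-written `backward` above is `torch.autograd.gradcheck`, which compares the analytical gradient against numerical finite differences and expects double-precision inputs (the variable name `x_check` is introduced here only for illustration):

```py
import torch

# Assumes the LegendrePolynomial3 class defined above is in scope.
# gradcheck perturbs the input numerically and compares the result with the
# gradient returned by our backward(); double precision keeps the finite
# differences accurate enough for the default tolerances.
x_check = torch.randn(32, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(LegendrePolynomial3.apply, (x_check,), eps=1e-6, atol=1e-4))
# Prints True when the analytical and numerical gradients agree.
```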