Commit 7877c989 authored by 绝不原创的飞龙

2024-02-05 13:43:01

Parent: 4c00d2f7
......@@ -3,28 +3,34 @@
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: 张量并行 - torch.distributed.tensor.parallel
- en: 原文:[https://pytorch.org/docs/stable/distributed.tensor.parallel.html](https://pytorch.org/docs/stable/distributed.tensor.parallel.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/distributed.tensor.parallel.html](https://pytorch.org/docs/stable/distributed.tensor.parallel.html)
- en: 'Tensor Parallelism(TP) is built on top of the PyTorch DistributedTensor ([DTensor](https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md))
and provides different parallelism styles: Colwise and Rowwise Parallelism.'
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: 张量并行(TP)建立在PyTorch分布式张量([DTensor](https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md))之上,并提供不同的并行化样式:列并行和行并行。
- en: Warning
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: 警告
- en: Tensor Parallelism APIs are experimental and subject to change.
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: 张量并行API是实验性的,可能会发生变化。
- en: 'The entrypoint to parallelize your `nn.Module` using Tensor Parallelism is:'
id: totrans-5
prefs: []
type: TYPE_NORMAL
zh: 使用张量并行并行化您的`nn.Module`的入口点是:
- en: '[PRE0]'
id: totrans-6
prefs: []
......@@ -35,38 +41,45 @@
id: totrans-7
prefs: []
type: TYPE_NORMAL
zh: 通过根据用户指定的计划并行化模块或子模块来应用PyTorch中的张量并行。
- en: We parallelize module or sub_modules based on a parallelize_plan. The parallelize_plan
contains `ParallelStyle`, which indicates how user wants the module or sub_module
to be parallelized.
id: totrans-8
prefs: []
type: TYPE_NORMAL
zh: 我们根据并行化计划并行化模块或子模块。并行化计划包含`ParallelStyle`,指示用户希望如何并行化模块或子模块。
- en: User can also specify different parallel style per module fully qualified name
(FQN).
id: totrans-9
prefs: []
type: TYPE_NORMAL
zh: 用户还可以根据模块的完全限定名称(FQN)指定不同的并行样式。
- en: Note that `parallelize_module` only accepts a 1-D `DeviceMesh`, if you have
a 2-D or N-D `DeviceMesh`, slice the DeviceMesh to a 1-D sub DeviceMesh first
then pass to this API(i.e. `device_mesh["tp"]`)
id: totrans-10
prefs: []
type: TYPE_NORMAL
zh: 请注意,`parallelize_module`仅接受1-D的`DeviceMesh`,如果您有2-D或N-D的`DeviceMesh`,请先将DeviceMesh切片为1-D子DeviceMesh,然后将其传递给此API(即`device_mesh["tp"]`)
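To illustrate the note above, a minimal sketch of slicing a 2-D `DeviceMesh` down to the 1-D tensor-parallel sub-mesh before calling this API (the mesh shape and the `"dp"`/`"tp"` dimension names are assumptions for illustration):

```python
# Hedged sketch: slice a 2-D DeviceMesh to the 1-D "tp" sub-mesh that
# parallelize_module accepts. Assumes 8 ranks arranged as 2 (dp) x 4 (tp);
# adjust mesh_shape to your world size.
from torch.distributed.device_mesh import init_device_mesh

mesh_2d = init_device_mesh("cuda", mesh_shape=(2, 4), mesh_dim_names=("dp", "tp"))
tp_mesh = mesh_2d["tp"]  # 1-D sub-mesh to pass to parallelize_module
```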
- en: Parameters
id: totrans-11
prefs: []
type: TYPE_NORMAL
zh: 参数
- en: '**module** (`nn.Module`) Module to be parallelized.'
id: totrans-12
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**module**(`nn.Module`)- 要并行化的模块。'
- en: '**device_mesh** (`DeviceMesh`) Object which describes the mesh topology of
devices for the DTensor.'
id: totrans-13
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**device_mesh**(`DeviceMesh`)- 描述DTensor设备网格拓扑的对象。'
- en: '**parallelize_plan** (Union[`ParallelStyle`, Dict[str, `ParallelStyle`]])
The plan used to parallelize the module. It can be either a `ParallelStyle` object
which contains how we prepare input/output for Tensor Parallelism or it can be
......@@ -75,6 +88,8 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**parallelize_plan**(Union[`ParallelStyle`, Dict[str, `ParallelStyle`]])-
用于并行化模块的计划。可以是一个包含我们如何为张量并行准备输入/输出的`ParallelStyle`对象,也可以是模块FQN及其对应的`ParallelStyle`对象的字典。'
- en: '**tp_mesh_dim** ([*int*](https://docs.python.org/3/library/functions.html#int
"(in Python v3.12)")*,* *deprecated*) The dimension of `device_mesh` where we
perform Tensor Parallelism on, this field is deprecated and will be removed in
......@@ -83,26 +98,33 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**tp_mesh_dim**([*int*](https://docs.python.org/3/library/functions.html#int
"(in Python v3.12)")*,* *已弃用*)- 在其中执行张量并行的`device_mesh`的维度,此字段已弃用,并将在将来删除。如果您有一个2-D或N-D的`DeviceMesh`,请考虑传递`device_mesh["tp"]`'
- en: Returns
id: totrans-16
prefs: []
type: TYPE_NORMAL
zh: 返回
- en: A `nn.Module` object parallelized.
id: totrans-17
prefs: []
type: TYPE_NORMAL
zh: 一个已并行化的`nn.Module`对象。
- en: Return type
id: totrans-18
prefs: []
type: TYPE_NORMAL
zh: 返回类型
- en: '[*Module*](generated/torch.nn.Module.html#torch.nn.Module "torch.nn.modules.module.Module")'
id: totrans-19
prefs: []
type: TYPE_NORMAL
zh: '[*Module*](generated/torch.nn.Module.html#torch.nn.Module "torch.nn.modules.module.Module")'
- en: 'Example::'
id: totrans-20
prefs: []
type: TYPE_NORMAL
zh: '示例::'
- en: '[PRE1]'
id: totrans-21
prefs: []
......@@ -112,16 +134,19 @@
id: totrans-22
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: For complex module architecture like Attention, MLP layers, we recommend composing
different ParallelStyles together (i.e. `ColwiseParallel` and `RowwiseParallel`)
and pass as a parallelize_plan, to achieve the desired sharding computation.
id: totrans-23
prefs: []
type: TYPE_NORMAL
zh: 对于像Attention、MLP层这样的复杂模块架构,我们建议将不同的ParallelStyles组合在一起(即`ColwiseParallel`和`RowwiseParallel`),并将其作为并行化计划传递,以实现所需的分片计算。
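As a hedged sketch of the composition recommended above, a dict-style parallelize_plan for a hypothetical two-layer MLP whose sub-modules are named `w1` and `w2` (these FQNs, and the module itself, are assumptions for illustration):

```python
# Hedged sketch: compose ParallelStyles in a dict keyed by module FQN.
# Colwise on the first projection and Rowwise on the second yields the
# usual sharded-MLP computation; "w1"/"w2" are hypothetical names.
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

plan = {"w1": ColwiseParallel(), "w2": RowwiseParallel()}
# mlp = parallelize_module(mlp, tp_mesh, plan)  # tp_mesh: a 1-D DeviceMesh
```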
- en: 'Tensor Parallelism supports the following parallel styles:'
id: totrans-24
prefs: []
type: TYPE_NORMAL
zh: 张量并行支持以下并行样式:
- en: '[PRE2]'
id: totrans-25
prefs: []
......@@ -133,10 +158,12 @@
id: totrans-26
prefs: []
type: TYPE_NORMAL
zh: 以列方式对兼容的nn.Module进行分区。目前支持nn.Linear和nn.Embedding。用户可以将其与RowwiseParallel组合在一起,以实现更复杂模块的分片(即MLP、Attention)
- en: Keyword Arguments
id: totrans-27
prefs: []
type: TYPE_NORMAL
zh: 关键字参数
- en: '**input_layouts** (*Placement**,* *optional*) The DTensor layout of input
tensor for the nn.Module, this is used to annotate the input tensor to become
a DTensor. If not specified, we assume the input tensor to be replicated.'
......@@ -144,6 +171,7 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**input_layouts**(*Placement**,* *可选*)- 用于nn.Module的输入张量的DTensor布局,用于注释输入张量以成为DTensor。如果未指定,则我们假定输入张量是复制的。'
- en: '**output_layouts** (*Placement**,* *optional*) The DTensor layout of the
output for the nn.Module, this is used to ensure the output of the nn.Module with
the user desired layout. If not specified, the output tensor is sharded on the
......@@ -152,6 +180,7 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**output_layouts**(*Placement**,* *可选*)- 用于nn.Module输出的DTensor布局,用于确保nn.Module的输出具有用户期望的布局。如果未指定,则输出张量在最后一个维度上进行分片。'
- en: '**use_local_output** ([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")*,* *optional*) Whether to use local [`torch.Tensor`](tensors.html#torch.Tensor
"torch.Tensor") instead of `DTensor` for the module output, default: True.'
......@@ -159,18 +188,24 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**use_local_output**([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")*,* *可选*)- 是否使用本地[`torch.Tensor`](tensors.html#torch.Tensor
"torch.Tensor")而不是`DTensor`作为模块输出,默认值为True。'
- en: Returns
id: totrans-31
prefs: []
type: TYPE_NORMAL
zh: 返回
- en: A `ParallelStyle` object that represents Colwise sharding of the nn.Module.
id: totrans-32
prefs: []
type: TYPE_NORMAL
zh: 表示nn.Module的Colwise分片的`ParallelStyle`对象。
- en: 'Example::'
id: totrans-33
prefs: []
type: TYPE_NORMAL
zh: '示例::'
- en: '[PRE3]'
id: totrans-34
prefs: []
......@@ -180,6 +215,7 @@
id: totrans-35
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: By default `ColwiseParallel` output is sharded on the last dimension if the
`output_layouts` not specified, if there’re operators that require specific tensor
shape (i.e. before the paired `RowwiseParallel`), keep in mind that if the output
......@@ -187,6 +223,7 @@
id: totrans-36
prefs: []
type: TYPE_NORMAL
zh: 默认情况下,如果未指定`output_layouts`,则`ColwiseParallel`输出在最后一个维度上进行分片,如果有需要特定张量形状的运算符(即在配对的`RowwiseParallel`之前),请记住,如果输出被分片,运算符可能需要调整为分片大小。
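Where a full (unsharded) tensor is required at that point, one option consistent with the note above is to override the default via `output_layouts` (a sketch, assuming the DTensor placement types from `torch.distributed._tensor`):

```python
# Hedged sketch: ask ColwiseParallel to return a replicated (full) output
# instead of the default sharding on the last dimension.
from torch.distributed._tensor import Replicate
from torch.distributed.tensor.parallel import ColwiseParallel

style = ColwiseParallel(output_layouts=Replicate())
```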
- en: '[PRE4]'
id: totrans-37
prefs: []
......
This diff has been collapsed.
This diff has been collapsed.
- en: torch.compiler
id: totrans-0
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: torch.compiler
- en: 原文:[https://pytorch.org/docs/stable/torch.compiler.html](https://pytorch.org/docs/stable/torch.compiler.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/torch.compiler.html](https://pytorch.org/docs/stable/torch.compiler.html)
- en: '`torch.compiler` is a namespace through which some of the internal compiler
methods are surfaced for user consumption. The main function and the feature in
this namespace is `torch.compile`.'
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: '`torch.compiler` 是一个命名空间,通过该命名空间,一些内部编译器方法被公开供用户使用。该命名空间中的主要函数和特性是 `torch.compile`。'
- en: '`torch.compile` is a PyTorch function introduced in PyTorch 2.x that aims to
solve the problem of accurate graph capturing in PyTorch and ultimately enable
software engineers to run their PyTorch programs faster. `torch.compile` is written
in Python and it marks the transition of PyTorch from C++ to Python.'
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: '`torch.compile` 是 PyTorch 2.x 中引入的一个函数,旨在解决 PyTorch 中准确捕获图的问题,最终使软件工程师能够更快地运行他们的
PyTorch 程序。`torch.compile` 是用 Python 编写的,标志着 PyTorch 从 C++ 转向 Python。'
- en: '`torch.compile` leverages the following underlying technologies:'
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: '`torch.compile` 利用以下基础技术:'
- en: '**TorchDynamo (torch._dynamo)** is an internal API that uses a CPython feature
called the Frame Evaluation API to safely capture PyTorch graphs. Methods that
are available externally for PyTorch users are surfaced through the `torch.compiler`
namespace.'
id: totrans-5
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**TorchDynamo (torch._dynamo)** 是一个内部 API,它使用名为 Frame Evaluation API 的 CPython
特性来安全地捕获 PyTorch 图。可供 PyTorch 用户使用的方法通过 `torch.compiler` 命名空间公开。'
- en: '**TorchInductor** is the default `torch.compile` deep learning compiler that
generates fast code for multiple accelerators and backends. You need to use a
backend compiler to make speedups through `torch.compile` possible. For NVIDIA
and AMD GPUs, it leverages OpenAI Triton as the key building block.'
id: totrans-6
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**TorchInductor** 是默认的 `torch.compile` 深度学习编译器,可以为多个加速器和后端生成快速代码。您需要使用后端编译器才能通过
`torch.compile` 实现加速。对于 NVIDIA 和 AMD GPU,它利用 OpenAI Triton 作为关键构建模块。'
- en: '**AOT Autograd** captures not only the user-level code, but also backpropagation,
which results in capturing the backwards pass “ahead-of-time”. This enables acceleration
of both forwards and backwards pass using TorchInductor.'
id: totrans-7
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**AOT Autograd** 不仅捕获用户级代码,还捕获反向传播,从而实现“提前”捕获反向传播。这使得可以使用 TorchInductor 同时加速前向和反向传播。'
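All three components above are driven from the single public entrypoint; a minimal, self-contained sketch:

```python
# Minimal sketch: TorchDynamo captures the graph, AOT Autograd handles the
# backward pass, and TorchInductor (the default backend) generates code.
import torch

def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

compiled_f = torch.compile(f)
print(compiled_f(torch.randn(8)))
```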
- en: Note
id: totrans-8
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: In some cases, the terms `torch.compile`, TorchDynamo, `torch.compiler` might
be used interchangeably in this documentation.
id: totrans-9
prefs: []
type: TYPE_NORMAL
zh: 在某些情况下,本文档中可能会互换使用术语 `torch.compile`、TorchDynamo、`torch.compiler`。
- en: As mentioned above, to run your workflows faster, `torch.compile` through TorchDynamo
requires a backend that converts the captured graphs into a fast machine code.
Different backends can result in various optimization gains. The default backend
is called TorchInductor, also known as *inductor*, TorchDynamo has a list of supported
backends developed by our partners, which can be seen by running `torch.compiler.list_backends()`,
each with its own optional dependencies.
id: totrans-10
prefs: []
type: TYPE_NORMAL
zh: 如上所述,为了更快地运行您的工作流程,通过 TorchDynamo 的 `torch.compile` 需要一个将捕获的图转换为快速机器代码的后端。不同的后端可能会导致不同的优化收益。默认后端称为
TorchInductor,也称为 *inductor*,TorchDynamo 有一个由我们的合作伙伴开发的支持后端列表,可以通过运行 `torch.compiler.list_backends()`
查看,每个后端都有其可选依赖项。
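For instance, the registered backends can be listed directly (the output varies with the installed optional dependencies):

```python
# Sketch: inspect which TorchDynamo backends are available in this environment.
import torch

print(torch.compiler.list_backends())
```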
- en: 'Some of the most commonly used backends include:'
id: totrans-11
prefs: []
type: TYPE_NORMAL
zh: 一些最常用的后端包括:
- en: '**Training & inference backends**'
id: totrans-12
prefs: []
type: TYPE_NORMAL
zh: '**训练和推理后端**'
- en: '| Backend | Description |'
id: totrans-13
prefs: []
type: TYPE_TB
zh: '| 后端 | 描述 |'
- en: '| --- | --- |'
id: totrans-14
prefs: []
type: TYPE_TB
zh: '| --- | --- |'
- en: '| `torch.compile(m, backend="inductor")` | Uses the TorchInductor backend.
[Read more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)
|'
id: totrans-15
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="inductor")` | 使用 TorchInductor 后端。[阅读更多](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)
|'
- en: '| `torch.compile(m, backend="cudagraphs")` | CUDA graphs with AOT Autograd.
[Read more](https://github.com/pytorch/torchdynamo/pull/757) |'
id: totrans-16
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="cudagraphs")` | 使用 CUDA 图形与 AOT Autograd。[阅读更多](https://github.com/pytorch/torchdynamo/pull/757)
|'
- en: '| `torch.compile(m, backend="ipex")` | Uses IPEX on CPU. [Read more](https://github.com/intel/intel-extension-for-pytorch)
|'
id: totrans-17
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="ipex")` | CPU 上使用 IPEX。[阅读更多](https://github.com/intel/intel-extension-for-pytorch)
|'
- en: '| `torch.compile(m, backend="onnxrt")` | Uses ONNX Runtime for training on
CPU/GPU. [Read more](onnx_dynamo_onnxruntime_backend.html) |'
id: totrans-18
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="onnxrt")` | 使用 ONNX Runtime CPU/GPU 上进行训练。[阅读更多](onnx_dynamo_onnxruntime_backend.html)
|'
- en: '**Inference-only backends**'
id: totrans-19
prefs: []
type: TYPE_NORMAL
zh: '**仅推理后端**'
- en: '| Backend | Description |'
id: totrans-20
prefs: []
type: TYPE_TB
zh: '| 后端 | 描述 |'
- en: '| --- | --- |'
id: totrans-21
prefs: []
type: TYPE_TB
zh: '| --- | --- |'
- en: '| `torch.compile(m, backend="tensorrt")` | Uses Torch-TensorRT for inference
optimizations. Requires `import torch_tensorrt` in the calling script to register
backend. [Read more](https://github.com/pytorch/TensorRT) |'
id: totrans-22
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="tensorrt")` | 使用 Torch-TensorRT 进行推理优化。需要在调用脚本中导入
`torch_tensorrt` 来注册后端。[阅读更多](https://github.com/pytorch/TensorRT) |'
- en: '| `torch.compile(m, backend="ipex")` | Uses IPEX for inference on CPU. [Read
more](https://github.com/intel/intel-extension-for-pytorch) |'
id: totrans-23
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="ipex")` | CPU 上使用 IPEX 进行推理。[阅读更多](https://github.com/intel/intel-extension-for-pytorch)
|'
- en: '| `torch.compile(m, backend="tvm")` | Uses Apache TVM for inference optimizations.
[Read more](https://tvm.apache.org/) |'
id: totrans-24
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="tvm")` | 使用 Apache TVM 进行推理优化。[阅读更多](https://tvm.apache.org/)
|'
- en: '| `torch.compile(m, backend="openvino")` | Uses OpenVINO for inference optimizations.
[Read more](https://docs.openvino.ai/2023.1/pytorch_2_0_torch_compile.html) |'
id: totrans-25
prefs: []
type: TYPE_TB
zh: '| `torch.compile(m, backend="openvino")` | 使用 OpenVINO 进行推理优化。[阅读更多](https://docs.openvino.ai/2023.1/pytorch_2_0_torch_compile.html)
|'
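As a usage note for both tables, the backend is selected purely by its string name; a sketch with the always-available default:

```python
# Sketch: any backend from the tables above is chosen by name. "inductor"
# is the default; third-party backends (tensorrt, ipex, tvm, openvino)
# additionally require their optional dependencies to be installed.
import torch

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model, backend="inductor")
print(compiled(torch.randn(2, 4)).shape)
```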
- en: Read More
id: totrans-26
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 阅读更多
- en: Getting Started for PyTorch Users
id: totrans-27
prefs: []
type: TYPE_NORMAL
zh: PyTorch 用户入门
- en: '[Getting Started](torch.compiler_get_started.html)'
id: totrans-28
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[入门指南](torch.compiler_get_started.html)'
- en: '[torch.compiler API reference](torch.compiler_api.html)'
id: totrans-29
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[torch.compiler API 参考](torch.compiler_api.html)'
- en: '[TorchDynamo APIs for fine-grained tracing](torch.compiler_fine_grain_apis.html)'
id: totrans-30
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[TorchDynamo 用于细粒度跟踪的 API](torch.compiler_fine_grain_apis.html)'
- en: '[AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models](torch.compiler_aot_inductor.html)'
id: totrans-31
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[AOTInductor:Torch.Export 导出模型的提前编译](torch.compiler_aot_inductor.html)'
- en: '[TorchInductor GPU Profiling](torch.compiler_inductor_profiling.html)'
id: totrans-32
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[TorchInductor GPU 性能分析](torch.compiler_inductor_profiling.html)'
- en: '[Profiling to understand torch.compile performance](torch.compiler_profiling_torch_compile.html)'
id: totrans-33
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[分析以了解 torch.compile 性能](torch.compiler_profiling_torch_compile.html)'
- en: '[Frequently Asked Questions](torch.compiler_faq.html)'
id: totrans-34
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[常见问题解答](torch.compiler_faq.html)'
- en: '[PyTorch 2.0 Troubleshooting](torch.compiler_troubleshooting.html)'
id: totrans-35
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[PyTorch 2.0 故障排除](torch.compiler_troubleshooting.html)'
- en: '[PyTorch 2.0 Performance Dashboard](torch.compiler_performance_dashboard.html)'
id: totrans-36
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[PyTorch 2.0 性能仪表板](torch.compiler_performance_dashboard.html)'
- en: Deep Dive for PyTorch Developers
id: totrans-37
prefs: []
type: TYPE_NORMAL
zh: 面向 PyTorch 开发者的深入解析
- en: '[TorchDynamo Deep Dive](torch.compiler_deepdive.html)'
id: totrans-38
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[TorchDynamo 深入研究](torch.compiler_deepdive.html)'
- en: '[Guards Overview](torch.compiler_guards_overview.html)'
id: totrans-39
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[守卫概述](torch.compiler_guards_overview.html)'
- en: '[Dynamic shapes](torch.compiler_dynamic_shapes.html)'
id: totrans-40
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[动态形状](torch.compiler_dynamic_shapes.html)'
- en: '[PyTorch 2.0 NNModule Support](torch.compiler_nn_module.html)'
id: totrans-41
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[PyTorch 2.0 NNModule 支持](torch.compiler_nn_module.html)'
- en: '[Best Practices for Backends](torch.compiler_best_practices_for_backends.html)'
id: totrans-42
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[后端最佳实践](torch.compiler_best_practices_for_backends.html)'
- en: '[CUDAGraph Trees](torch.compiler_cudagraph_trees.html)'
id: totrans-43
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[CUDAGraph 树](torch.compiler_cudagraph_trees.html)'
- en: '[Fake tensor](torch.compiler_fake_tensor.html)'
id: totrans-44
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[伪张量](torch.compiler_fake_tensor.html)'
- en: HowTo for PyTorch Backend Vendors
id: totrans-45
prefs: []
type: TYPE_NORMAL
zh: PyTorch 后端供应商操作指南
- en: '[Custom Backends](torch.compiler_custom_backends.html)'
id: totrans-46
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[自定义后端](torch.compiler_custom_backends.html)'
- en: '[Writing Graph Transformations on ATen IR](torch.compiler_transformations.html)'
id: totrans-47
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[在 ATen IR 上编写图转换](torch.compiler_transformations.html)'
- en: '[IRs](torch.compiler_ir.html)'
id: totrans-48
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[IRs](torch.compiler_ir.html)'
- en: torch.fft
id: totrans-0
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: torch.fft
- en: 原文:[https://pytorch.org/docs/stable/fft.html](https://pytorch.org/docs/stable/fft.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/fft.html](https://pytorch.org/docs/stable/fft.html)
- en: Discrete Fourier transforms and related functions.
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: 离散傅里叶变换和相关函数。
- en: '## Fast Fourier Transforms'
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: '## 快速傅里叶变换'
- en: '| [`fft`](generated/torch.fft.fft.html#torch.fft.fft "torch.fft.fft") | Computes
the one dimensional discrete Fourier transform of `input`. |'
id: totrans-4
prefs: []
type: TYPE_TB
zh: '| [`fft`](generated/torch.fft.fft.html#torch.fft.fft "torch.fft.fft") | 计算`input`的一维离散傅里叶变换。
|'
- en: '| [`ifft`](generated/torch.fft.ifft.html#torch.fft.ifft "torch.fft.ifft") |
Computes the one dimensional inverse discrete Fourier transform of `input`. |'
id: totrans-5
prefs: []
type: TYPE_TB
zh: '| [`ifft`](generated/torch.fft.ifft.html#torch.fft.ifft "torch.fft.ifft") |
计算`input`的一维逆离散傅里叶变换。 |'
- en: '| [`fft2`](generated/torch.fft.fft2.html#torch.fft.fft2 "torch.fft.fft2") |
Computes the 2 dimensional discrete Fourier transform of `input`. |'
id: totrans-6
prefs: []
type: TYPE_TB
zh: '| [`fft2`](generated/torch.fft.fft2.html#torch.fft.fft2 "torch.fft.fft2") |
计算`input`的二维离散傅里叶变换。 |'
- en: '| [`ifft2`](generated/torch.fft.ifft2.html#torch.fft.ifft2 "torch.fft.ifft2")
| Computes the 2 dimensional inverse discrete Fourier transform of `input`. |'
id: totrans-7
prefs: []
type: TYPE_TB
zh: '| [`ifft2`](generated/torch.fft.ifft2.html#torch.fft.ifft2 "torch.fft.ifft2")
| 计算`input`的二维逆离散傅里叶变换。 |'
- en: '| [`fftn`](generated/torch.fft.fftn.html#torch.fft.fftn "torch.fft.fftn") |
Computes the N dimensional discrete Fourier transform of `input`. |'
id: totrans-8
prefs: []
type: TYPE_TB
zh: '| [`fftn`](generated/torch.fft.fftn.html#torch.fft.fftn "torch.fft.fftn") |
计算`input`的N维离散傅里叶变换。 |'
- en: '| [`ifftn`](generated/torch.fft.ifftn.html#torch.fft.ifftn "torch.fft.ifftn")
| Computes the N dimensional inverse discrete Fourier transform of `input`. |'
id: totrans-9
prefs: []
type: TYPE_TB
zh: '| [`ifftn`](generated/torch.fft.ifftn.html#torch.fft.ifftn "torch.fft.ifftn")
| 计算`input`的N维逆离散傅里叶变换。 |'
- en: '| [`rfft`](generated/torch.fft.rfft.html#torch.fft.rfft "torch.fft.rfft") |
Computes the one dimensional Fourier transform of real-valued `input`. |'
id: totrans-10
prefs: []
type: TYPE_TB
zh: '| [`rfft`](generated/torch.fft.rfft.html#torch.fft.rfft "torch.fft.rfft") |
计算实值`input`的一维傅里叶变换。 |'
- en: '| [`irfft`](generated/torch.fft.irfft.html#torch.fft.irfft "torch.fft.irfft")
| Computes the inverse of [`rfft()`](generated/torch.fft.rfft.html#torch.fft.rfft
"torch.fft.rfft"). |'
id: totrans-11
prefs: []
type: TYPE_TB
zh: '| [`irfft`](generated/torch.fft.irfft.html#torch.fft.irfft "torch.fft.irfft")
| 计算[`rfft()`](generated/torch.fft.rfft.html#torch.fft.rfft "torch.fft.rfft")的逆变换。
|'
- en: '| [`rfft2`](generated/torch.fft.rfft2.html#torch.fft.rfft2 "torch.fft.rfft2")
| Computes the 2-dimensional discrete Fourier transform of real `input`. |'
id: totrans-12
prefs: []
type: TYPE_TB
zh: '| [`rfft2`](generated/torch.fft.rfft2.html#torch.fft.rfft2 "torch.fft.rfft2")
| 计算实数`input`的二维离散傅里叶变换。 |'
- en: '| [`irfft2`](generated/torch.fft.irfft2.html#torch.fft.irfft2 "torch.fft.irfft2")
| Computes the inverse of [`rfft2()`](generated/torch.fft.rfft2.html#torch.fft.rfft2
"torch.fft.rfft2"). |'
id: totrans-13
prefs: []
type: TYPE_TB
zh: '| [`irfft2`](generated/torch.fft.irfft2.html#torch.fft.irfft2 "torch.fft.irfft2")
| 计算[`rfft2()`](generated/torch.fft.rfft2.html#torch.fft.rfft2 "torch.fft.rfft2")的逆变换。
|'
- en: '| [`rfftn`](generated/torch.fft.rfftn.html#torch.fft.rfftn "torch.fft.rfftn")
| Computes the N-dimensional discrete Fourier transform of real `input`. |'
id: totrans-14
prefs: []
type: TYPE_TB
zh: '| [`rfftn`](generated/torch.fft.rfftn.html#torch.fft.rfftn "torch.fft.rfftn")
| 计算实数`input`的N维离散傅里叶变换。 |'
- en: '| [`irfftn`](generated/torch.fft.irfftn.html#torch.fft.irfftn "torch.fft.irfftn")
| Computes the inverse of [`rfftn()`](generated/torch.fft.rfftn.html#torch.fft.rfftn
"torch.fft.rfftn"). |'
id: totrans-15
prefs: []
type: TYPE_TB
zh: '| [`irfftn`](generated/torch.fft.irfftn.html#torch.fft.irfftn "torch.fft.irfftn")
| 计算[`rfftn()`](generated/torch.fft.rfftn.html#torch.fft.rfftn "torch.fft.rfftn")的逆变换。
|'
- en: '| [`hfft`](generated/torch.fft.hfft.html#torch.fft.hfft "torch.fft.hfft") |
Computes the one dimensional discrete Fourier transform of a Hermitian symmetric
`input` signal. |'
id: totrans-16
prefs: []
type: TYPE_TB
zh: '| [`hfft`](generated/torch.fft.hfft.html#torch.fft.hfft "torch.fft.hfft") |
计算埃尔米特对称`input`信号的一维离散傅里叶变换。 |'
- en: '| [`ihfft`](generated/torch.fft.ihfft.html#torch.fft.ihfft "torch.fft.ihfft")
| Computes the inverse of [`hfft()`](generated/torch.fft.hfft.html#torch.fft.hfft
"torch.fft.hfft"). |'
id: totrans-17
prefs: []
type: TYPE_TB
zh: '| [`ihfft`](generated/torch.fft.ihfft.html#torch.fft.ihfft "torch.fft.ihfft")
| 计算[`hfft()`](generated/torch.fft.hfft.html#torch.fft.hfft "torch.fft.hfft")的逆变换。
|'
- en: '| [`hfft2`](generated/torch.fft.hfft2.html#torch.fft.hfft2 "torch.fft.hfft2")
| Computes the 2-dimensional discrete Fourier transform of a Hermitian symmetric
`input` signal. |'
id: totrans-18
prefs: []
type: TYPE_TB
zh: '| [`hfft2`](generated/torch.fft.hfft2.html#torch.fft.hfft2 "torch.fft.hfft2")
| 计算埃尔米特对称`input`信号的二维离散傅里叶变换。 |'
- en: '| [`ihfft2`](generated/torch.fft.ihfft2.html#torch.fft.ihfft2 "torch.fft.ihfft2")
| Computes the 2-dimensional inverse discrete Fourier transform of real `input`.
|'
id: totrans-19
prefs: []
type: TYPE_TB
zh: '| [`ihfft2`](generated/torch.fft.ihfft2.html#torch.fft.ihfft2 "torch.fft.ihfft2")
| 计算实数`input`的二维逆离散傅里叶变换。 |'
- en: '| [`hfftn`](generated/torch.fft.hfftn.html#torch.fft.hfftn "torch.fft.hfftn")
| Computes the n-dimensional discrete Fourier transform of a Hermitian symmetric
`input` signal. |'
id: totrans-20
prefs: []
type: TYPE_TB
zh: '| [`hfftn`](generated/torch.fft.hfftn.html#torch.fft.hfftn "torch.fft.hfftn")
| 计算埃尔米特对称`input`信号的n维离散傅里叶变换。 |'
- en: '| [`ihfftn`](generated/torch.fft.ihfftn.html#torch.fft.ihfftn "torch.fft.ihfftn")
| Computes the N-dimensional inverse discrete Fourier transform of real `input`.
|'
id: totrans-21
prefs: []
type: TYPE_TB
zh: '| [`ihfftn`](generated/torch.fft.ihfftn.html#torch.fft.ihfftn "torch.fft.ihfftn")
| 计算实数`input`的N维逆离散傅里叶变换。 |'
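A short round-trip sketch of the transforms tabulated above, including the real-input variant that stores only the non-redundant half of the spectrum:

```python
# Sketch: fft/ifft and rfft/irfft round-trips on a 1-D signal.
import torch

x = torch.randn(8)
X = torch.fft.fft(x)
assert torch.allclose(torch.fft.ifft(X).real, x, atol=1e-6)

Xr = torch.fft.rfft(x)                    # n//2 + 1 = 5 complex bins for n = 8
assert torch.allclose(torch.fft.irfft(Xr, n=8), x, atol=1e-6)
```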
- en: Helper Functions
id: totrans-22
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 辅助函数
- en: '| [`fftfreq`](generated/torch.fft.fftfreq.html#torch.fft.fftfreq "torch.fft.fftfreq")
| Computes the discrete Fourier Transform sample frequencies for a signal of size
`n`. |'
id: totrans-23
prefs: []
type: TYPE_TB
zh: '| [`fftfreq`](generated/torch.fft.fftfreq.html#torch.fft.fftfreq "torch.fft.fftfreq")
| 计算大小为`n`的信号的离散傅里叶变换采样频率。 |'
- en: '| [`rfftfreq`](generated/torch.fft.rfftfreq.html#torch.fft.rfftfreq "torch.fft.rfftfreq")
| Computes the sample frequencies for [`rfft()`](generated/torch.fft.rfft.html#torch.fft.rfft
"torch.fft.rfft") with a signal of size `n`. |'
id: totrans-24
prefs: []
type: TYPE_TB
zh: '| [`rfftfreq`](generated/torch.fft.rfftfreq.html#torch.fft.rfftfreq "torch.fft.rfftfreq")
| 计算具有大小`n`的信号的[`rfft()`](generated/torch.fft.rfft.html#torch.fft.rfft "torch.fft.rfft")的采样频率。
|'
- en: '| [`fftshift`](generated/torch.fft.fftshift.html#torch.fft.fftshift "torch.fft.fftshift")
| Reorders n-dimensional FFT data, as provided by [`fftn()`](generated/torch.fft.fftn.html#torch.fft.fftn
"torch.fft.fftn"), to have negative frequency terms first. |'
id: totrans-25
prefs: []
type: TYPE_TB
zh: '| [`fftshift`](generated/torch.fft.fftshift.html#torch.fft.fftshift "torch.fft.fftshift")
| 重新排列n维FFT数据,如[`fftn()`](generated/torch.fft.fftn.html#torch.fft.fftn "torch.fft.fftn")提供的,以使负频率项优先。
|'
- en: '| [`ifftshift`](generated/torch.fft.ifftshift.html#torch.fft.ifftshift "torch.fft.ifftshift")
| Inverse of [`fftshift()`](generated/torch.fft.fftshift.html#torch.fft.fftshift
"torch.fft.fftshift"). |'
id: totrans-26
prefs: []
type: TYPE_TB
zh: '| [`ifftshift`](generated/torch.fft.ifftshift.html#torch.fft.ifftshift "torch.fft.ifftshift")
| [`fftshift()`](generated/torch.fft.fftshift.html#torch.fft.fftshift "torch.fft.fftshift")
的逆操作。 |'
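A brief sketch of how these helpers fit together:

```python
# Sketch: fftfreq returns bin frequencies in "standard" FFT order;
# fftshift reorders them to be monotonically increasing, and ifftshift
# undoes that reordering.
import torch

freqs = torch.fft.fftfreq(8)              # [0, 0.125, ..., -0.25, -0.125]
shifted = torch.fft.fftshift(freqs)       # negative frequencies first
assert torch.equal(torch.fft.ifftshift(shifted), freqs)
```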
- en: torch.func
id: totrans-0
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: torch.func
- en: 原文:[https://pytorch.org/docs/stable/func.html](https://pytorch.org/docs/stable/func.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/func.html](https://pytorch.org/docs/stable/func.html)
- en: torch.func, previously known as “functorch”, is [JAX-like](https://github.com/google/jax)
composable function transforms for PyTorch.
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: torch.func,以前称为“functorch”,是用于 PyTorch 的[类 JAX](https://github.com/google/jax)可组合函数变换。
- en: Note
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: This library is currently in [beta](https://pytorch.org/blog/pytorch-feature-classification-changes/#beta).
What this means is that the features generally work (unless otherwise documented)
and we (the PyTorch team) are committed to bringing this library forward. However,
the APIs may change under user feedback and we don’t have full coverage over PyTorch
operations.
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: 该库目前处于[测试版](https://pytorch.org/blog/pytorch-feature-classification-changes/#beta)。这意味着功能通常可用(除非另有说明),我们(PyTorch团队)致力于推进该库。但是,API可能会根据用户反馈进行更改,我们对PyTorch操作的覆盖范围不完整。
- en: If you have suggestions on the API or use-cases you’d like to be covered, please
open a GitHub issue or reach out. We’d love to hear about how you’re using the
library.
id: totrans-5
prefs: []
type: TYPE_NORMAL
zh: 如果您对API或希望覆盖的用例有建议,请提交GitHub issue或与我们联系。我们很乐意了解您是如何使用该库的。
- en: What are composable function transforms?[](#what-are-composable-function-transforms
"Permalink to this heading")
id: totrans-6
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 什么是可组合的函数变换?[](#what-are-composable-function-transforms "跳转到此标题")
- en: A “function transform” is a higher-order function that accepts a numerical function
and returns a new function that computes a different quantity.
id: totrans-7
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: “函数变换”是一个高阶函数,接受一个数值函数并返回一个计算不同量的新函数。
- en: '[`torch.func`](func.api.html#module-torch.func "torch.func") has auto-differentiation
transforms (`grad(f)` returns a function that computes the gradient of `f`), a
vectorization/batching transform (`vmap(f)` returns a function that computes `f`
over batches of inputs), and others.'
id: totrans-8
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[`torch.func`](func.api.html#module-torch.func "torch.func")具有自动微分变换(`grad(f)`返回一个计算`f`梯度的函数),矢量化/批处理变换(`vmap(f)`返回一个计算输入批次上的`f`的函数)等。'
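A minimal sketch of the two transforms just named:

```python
# Sketch: grad(f) returns a gradient-computing function; vmap(f) maps f
# over a leading batch dimension.
import torch
from torch.func import grad, vmap

f = lambda x: (x ** 3).sum()
print(grad(f)(torch.tensor(2.0)))                # 3 * 2**2 = 12.0
print(vmap(torch.sin)(torch.randn(4, 3)).shape)  # torch.Size([4, 3])
```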
- en: These function transforms can compose with each other arbitrarily. For example,
composing `vmap(grad(f))` computes a quantity called per-sample-gradients that
stock PyTorch cannot efficiently compute today.
id: totrans-9
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 这些函数变换可以任意组合。例如,组合`vmap(grad(f))`可以计算所谓的每样本梯度(per-sample-gradients),而原生PyTorch目前无法高效地计算这一量。
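A sketch of that per-sample-gradient composition (the linear loss here is a stand-in chosen for illustration):

```python
# Sketch: vmap(grad(loss)) computes one gradient per sample in the batch.
import torch
from torch.func import grad, vmap

def loss(w, x):
    return (x @ w).sum()

w = torch.randn(3)
xs = torch.randn(5, 3)                          # batch of 5 samples
per_sample = vmap(grad(loss), in_dims=(None, 0))(w, xs)
print(per_sample.shape)                         # torch.Size([5, 3])
```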
- en: Why composable function transforms?[](#why-composable-function-transforms "Permalink
to this heading")
id: totrans-10
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 为什么使用可组合的函数变换?[](#why-composable-function-transforms "跳转到此标题")
- en: 'There are a number of use cases that are tricky to do in PyTorch today:'
id: totrans-11
prefs: []
type: TYPE_NORMAL
zh: 目前在PyTorch中有一些棘手的用例:
- en: computing per-sample-gradients (or other per-sample quantities)
id: totrans-12
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 计算每样本梯度(或其他每样本量)
- en: running ensembles of models on a single machine
id: totrans-13
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 在单台机器上运行模型集合
- en: efficiently batching together tasks in the inner-loop of MAML
id: totrans-14
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 在MAML的内循环中高效批处理任务
- en: efficiently computing Jacobians and Hessians
id: totrans-15
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 高效计算雅可比矩阵和海森矩阵
- en: efficiently computing batched Jacobians and Hessians
id: totrans-16
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 高效计算批量雅可比矩阵和海森矩阵
- en: Composing [`vmap()`](generated/torch.func.vmap.html#torch.func.vmap "torch.func.vmap"),
[`grad()`](generated/torch.func.grad.html#torch.func.grad "torch.func.grad"),
and [`vjp()`](generated/torch.func.vjp.html#torch.func.vjp "torch.func.vjp") transforms
allows us to express the above without designing a separate subsystem for each.
This idea of composable function transforms comes from the [JAX framework](https://github.com/google/jax).
id: totrans-17
prefs: []
type: TYPE_NORMAL
zh: 组合[`vmap()`](generated/torch.func.vmap.html#torch.func.vmap "torch.func.vmap")、[`grad()`](generated/torch.func.grad.html#torch.func.grad
"torch.func.grad")和[`vjp()`](generated/torch.func.vjp.html#torch.func.vjp "torch.func.vjp")变换使我们能够表达上述内容,而无需为每个设计单独的子系统。这种可组合函数变换的想法来自[JAX框架](https://github.com/google/jax)。
- en: Read More
id: totrans-18
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 阅读更多
- en: '[torch.func Whirlwind Tour](func.whirlwind_tour.html)'
id: totrans-19
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[torch.func快速浏览](func.whirlwind_tour.html)'
- en: '[What is torch.func?](func.whirlwind_tour.html#what-is-torch-func)'
id: totrans-20
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[什么是torch.func?](func.whirlwind_tour.html#what-is-torch-func)'
- en: '[Why composable function transforms?](func.whirlwind_tour.html#why-composable-function-transforms)'
id: totrans-21
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[为什么使用可组合的函数变换?](func.whirlwind_tour.html#why-composable-function-transforms)'
- en: '[What are the transforms?](func.whirlwind_tour.html#what-are-the-transforms)'
id: totrans-22
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[什么是变换?](func.whirlwind_tour.html#what-are-the-transforms)'
- en: '[torch.func API Reference](func.api.html)'
id: totrans-23
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[torch.func API参考](func.api.html)'
- en: '[Function Transforms](func.api.html#function-transforms)'
id: totrans-24
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[函数变换](func.api.html#function-transforms)'
- en: '[Utilities for working with torch.nn.Modules](func.api.html#utilities-for-working-with-torch-nn-modules)'
id: totrans-25
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[与torch.nn.Modules一起工作的实用程序](func.api.html#utilities-for-working-with-torch-nn-modules)'
- en: '[UX Limitations](func.ux_limitations.html)'
id: totrans-26
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[用户体验限制](func.ux_limitations.html)'
- en: '[General limitations](func.ux_limitations.html#general-limitations)'
id: totrans-27
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[一般限制](func.ux_limitations.html#general-limitations)'
- en: '[torch.autograd APIs](func.ux_limitations.html#torch-autograd-apis)'
id: totrans-28
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[torch.autograd API](func.ux_limitations.html#torch-autograd-apis)'
- en: '[vmap limitations](func.ux_limitations.html#vmap-limitations)'
id: totrans-29
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[vmap限制](func.ux_limitations.html#vmap-limitations)'
- en: '[Randomness](func.ux_limitations.html#randomness)'
id: totrans-30
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[随机性](func.ux_limitations.html#randomness)'
- en: '[Migrating from functorch to torch.func](func.migrating.html)'
id: totrans-31
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '[从functorch迁移到torch.func](func.migrating.html)'
- en: '[function transforms](func.migrating.html#function-transforms)'
id: totrans-32
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[函数变换](func.migrating.html#function-transforms)'
- en: '[NN module utilities](func.migrating.html#nn-module-utilities)'
id: totrans-33
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[NN模块实用程序](func.migrating.html#nn-module-utilities)'
- en: '[functorch.compile](func.migrating.html#functorch-compile)'
id: totrans-34
prefs:
- PREF_IND
- PREF_UL
type: TYPE_NORMAL
zh: '[functorch.compile](func.migrating.html#functorch-compile)'
This diff has been collapsed.