Commit 08918adb, authored by 绝不原创的飞龙

2024-02-05 13:53:43

Parent: 1c7a243f
@@ -3,15 +3,18 @@
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: torch.profiler
- en: 原文:[https://pytorch.org/docs/stable/profiler.html](https://pytorch.org/docs/stable/profiler.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/profiler.html](https://pytorch.org/docs/stable/profiler.html)
- en: '## Overview'
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: '## 概述'
- en: PyTorch Profiler is a tool that allows the collection of performance metrics
during training and inference. Profiler’s context manager API can be used to better
understand what model operators are the most expensive, examine their input shapes
@@ -19,20 +22,24 @@
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: PyTorch Profiler是一个工具,允许在训练和推断过程中收集性能指标。Profiler的上下文管理器API可用于更好地了解哪些模型操作符是最昂贵的,检查它们的输入形状和堆栈跟踪,研究设备内核活动并可视化执行跟踪。
- en: Note
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: An earlier version of the API in [`torch.autograd`](autograd.html#module-torch.autograd
"torch.autograd") module is considered legacy and will be deprecated.
id: totrans-5
prefs: []
type: TYPE_NORMAL
zh: '[`torch.autograd`](autograd.html#module-torch.autograd "torch.autograd")模块中的早期版本被视为遗留版本,并将被弃用。'
- en: API Reference
id: totrans-6
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: API参考
- en: '[PRE0]'
id: totrans-7
prefs: []
@@ -42,10 +49,12 @@
id: totrans-8
prefs: []
type: TYPE_NORMAL
zh: 包装自动求导(autograd)分析的低级别分析器。
- en: Parameters
id: totrans-9
prefs: []
type: TYPE_NORMAL
zh: 参数
- en: '**activities** (*iterable*) list of activity groups (CPU, CUDA) to use in
profiling, supported values: `torch.profiler.ProfilerActivity.CPU`, `torch.profiler.ProfilerActivity.CUDA`.
Default value: ProfilerActivity.CPU and (when available) ProfilerActivity.CUDA.'
@@ -53,12 +62,15 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**activities**(*可迭代对象*)- 要在分析中使用的活动组(CPU、CUDA)列表,支持的值:`torch.profiler.ProfilerActivity.CPU`、`torch.profiler.ProfilerActivity.CUDA`。默认值:ProfilerActivity.CPU和(如果可用)ProfilerActivity.CUDA。'
- en: '**record_shapes** ([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")) save information about operator’s input shapes.'
id: totrans-11
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**record_shapes**([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)"))- 保存有关操作符输入形状的信息。'
- en: '**profile_memory** ([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")) track tensor memory allocation/deallocation (see `export_memory_timeline`
for more details).'
@@ -66,6 +78,8 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**profile_memory**([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)"))- 跟踪张量内存分配/释放(有关更多详细信息,请参阅`export_memory_timeline`)。'
- en: '**with_stack** ([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")) record source information (file and line number) for the
ops.'
@@ -73,6 +87,8 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**with_stack**([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)"))- 记录操作的源信息(文件和行号)。'
- en: '**with_flops** ([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")) use formula to estimate the FLOPS of specific operators
(matrix multiplication and 2D convolution).'
@@ -80,6 +96,8 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**with_flops**([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)"))- 使用公式估计特定操作符的FLOPS(矩阵乘法和2D卷积)。'
- en: '**with_modules** ([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)")) record module hierarchy (including function names) corresponding
to the callstack of the op. e.g. If module A’s forward call’s module B’s forward
@@ -90,20 +108,26 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**with_modules**([*bool*](https://docs.python.org/3/library/functions.html#bool
"(in Python v3.12)"))- 记录与操作调用堆栈对应的模块层次结构(包括函数名称)。例如,如果模块A的forward调用了模块B的forward,其中包含一个aten::add操作,那么aten::add的模块层次结构就是A.B。请注意,此支持目前仅适用于TorchScript模型,而不适用于急切模式模型。'
- en: '**experimental_config** (*_ExperimentalConfig*) A set of experimental options
used by profiler libraries like Kineto. Note, backward compatibility is not guaranteed.'
id: totrans-16
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: '**experimental_config**(*_ExperimentalConfig*)- 由像Kineto这样的分析器库使用的一组实验选项。请注意,不保证向后兼容性。'
- en: Note
id: totrans-17
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: This API is experimental and subject to change in the future.
id: totrans-18
prefs: []
type: TYPE_NORMAL
zh: 此API是实验性的,未来可能会更改。
- en: Enabling shape and stack tracing results in additional overhead. When record_shapes=True
is specified, profiler will temporarily hold references to the tensors; that may
further prevent certain optimizations that depend on the reference count and introduce
@@ -111,6 +135,7 @@
id: totrans-19
prefs: []
type: TYPE_NORMAL
zh: 启用形状和堆栈跟踪会导致额外的开销。当指定record_shapes=True时,分析器将暂时保留对张量的引用;这可能进一步阻止依赖引用计数的某些优化,并引入额外的张量副本。
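As a minimal sketch of the profiler API described above (the `Linear` model and input here are illustrative, not from the original docs), the context manager collects operator-level metrics for whatever runs inside it:

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Hypothetical toy workload; any training or inference code can go inside.
model = torch.nn.Linear(64, 64)
inputs = torch.randn(32, 64)

# CPU-only here; add ProfilerActivity.CUDA to `activities` when a GPU is available.
# record_shapes/profile_memory add the extra overhead described in the note above.
with profile(
    activities=[ProfilerActivity.CPU],
    record_shapes=True,
    profile_memory=True,
) as prof:
    model(inputs)

# Summarize the most expensive operators.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

`prof.events()` can also be inspected afterwards for the unaggregated event list mentioned below.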
- en: '[PRE1]'
id: totrans-20
prefs: []
@@ -121,6 +146,7 @@
id: totrans-21
prefs: []
type: TYPE_NORMAL
zh: 向跟踪文件中添加具有字符串键和字符串值的用户定义的元数据
- en: '[PRE2]'
id: totrans-22
prefs: []
@@ -131,6 +157,7 @@
id: totrans-23
prefs: []
type: TYPE_NORMAL
zh: 向跟踪文件中添加具有字符串键和有效json值的用户定义的元数据
- en: '[PRE3]'
id: totrans-24
prefs: []
@@ -141,6 +168,7 @@
id: totrans-25
prefs: []
type: TYPE_NORMAL
zh: 返回未聚合的分析器事件列表,用于在跟踪回调中使用或在分析完成后使用
- en: '[PRE4]'
id: totrans-26
prefs: []
@@ -150,6 +178,7 @@
id: totrans-27
prefs: []
type: TYPE_NORMAL
zh: 以Chrome JSON格式导出收集的跟踪信息。
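A short sketch of exporting the collected trace in Chrome JSON format (the workload and file name are illustrative):

```python
import os
import tempfile

import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU]) as prof:
    torch.randn(128, 128) @ torch.randn(128, 128)

# The resulting JSON can be opened in chrome://tracing or https://ui.perfetto.dev.
trace_path = os.path.join(tempfile.mkdtemp(), "trace.json")
prof.export_chrome_trace(trace_path)
```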
- en: '[PRE5]'
id: totrans-28
prefs: []
@@ -161,12 +190,14 @@
id: totrans-29
prefs: []
type: TYPE_NORMAL
zh: 从收集的树中导出分析器的内存事件信息,用于给定设备,并导出时间线图。使用`export_memory_timeline`有3个可导出的文件,每个文件由`path`的后缀控制。
- en: For an HTML compatible plot, use the suffix `.html`, and a memory timeline plot
will be embedded as a PNG file in the HTML file.
id: totrans-30
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 要生成HTML兼容的绘图,请使用后缀`.html`,内存时间线图将嵌入到HTML文件中作为PNG文件。
- en: For plot points consisting of `[times, [sizes by category]]`, where `times`
are timestamps and `sizes` are memory usage for each category. The memory timeline
plot will be saved a JSON (`.json`) or gzipped JSON (`.json.gz`) depending on
@@ -175,6 +206,7 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 对于由`[times, [sizes by category]]`组成的绘图点,其中`times`是时间戳,`sizes`是每个类别的内存使用量。内存时间线图将保存为JSON(`.json`)或经过gzip压缩的JSON(`.json.gz`),具体取决于后缀。
- en: For raw memory points, use the suffix `.raw.json.gz`. Each raw memory event
will consist of `(timestamp, action, numbytes, category)`, where `action` is one
of `[PREEXISTING, CREATE, INCREMENT_VERSION, DESTROY]`, and `category` is one
@@ -183,10 +215,13 @@
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 对于原始内存点,请使用后缀`.raw.json.gz`。每个原始内存事件将包括`(timestamp, action, numbytes, category)`,其中`action`是`[PREEXISTING,
CREATE, INCREMENT_VERSION, DESTROY]`之一,`category`是`torch.profiler._memory_profiler.Category`中的枚举之一。
- en: 'Output: Memory timeline written as gzipped JSON, JSON, or HTML.'
id: totrans-33
prefs: []
type: TYPE_NORMAL
zh: 输出:内存时间线以gzipped JSON、JSON或HTML形式编写。
- en: '[PRE6]'
id: totrans-34
prefs: []
@@ -196,10 +231,12 @@
id: totrans-35
prefs: []
type: TYPE_NORMAL
zh: 将堆栈跟踪保存在适合可视化的文件中。
- en: Parameters
id: totrans-36
prefs: []
type: TYPE_NORMAL
zh: 参数
- en: '**path** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in
Python v3.12)")) save stacks file to this location;'
id: totrans-37
(This diff has been collapsed.)
- en: torch.onnx
id: totrans-0
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: torch.onnx
- en: 原文:[https://pytorch.org/docs/stable/onnx.html](https://pytorch.org/docs/stable/onnx.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/onnx.html](https://pytorch.org/docs/stable/onnx.html)
- en: Overview
id: totrans-2
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 概述
- en: '[Open Neural Network eXchange (ONNX)](https://onnx.ai/) is an open standard
format for representing machine learning models. The `torch.onnx` module captures
the computation graph from a native PyTorch [`torch.nn.Module`](generated/torch.nn.Module.html#torch.nn.Module
"torch.nn.Module") model and converts it into an [ONNX graph](https://github.com/onnx/onnx/blob/main/docs/IR.md).'
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: '[Open Neural Network eXchange (ONNX)](https://onnx.ai/) 是表示机器学习模型的开放标准格式。`torch.onnx`
模块从本机 PyTorch [`torch.nn.Module`](generated/torch.nn.Module.html#torch.nn.Module
"torch.nn.Module") 模型中捕获计算图,并将其转换为 [ONNX 图](https://github.com/onnx/onnx/blob/main/docs/IR.md)。'
- en: The exported model can be consumed by any of the many [runtimes that support
ONNX](https://onnx.ai/supported-tools.html#deployModel), including Microsoft’s
[ONNX Runtime](https://www.onnxruntime.ai).
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: 导出的模型可以被支持 ONNX 的许多 [运行时](https://onnx.ai/supported-tools.html#deployModel)
使用,包括微软的 [ONNX Runtime](https://www.onnxruntime.ai)。
- en: '**There are two flavors of ONNX exporter API that you can use, as listed below:**'
id: totrans-5
prefs: []
type: TYPE_NORMAL
zh: '**您可以使用以下两种 ONNX 导出器 API:**'
- en: TorchDynamo-based ONNX Exporter[](#torchdynamo-based-onnx-exporter "Permalink
to this heading")
id: totrans-6
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 基于 TorchDynamo 的 ONNX 导出器[](#torchdynamo-based-onnx-exporter "跳转到此标题")
- en: '*The TorchDynamo-based ONNX exporter is the newest (and Beta) exporter for
PyTorch 2.0 and newer*'
id: totrans-7
prefs: []
type: TYPE_NORMAL
zh: '*基于 TorchDynamo 的 ONNX 导出器是适用于 PyTorch 2.0 及更新版本的最新(Beta)导出器*'
- en: TorchDynamo engine is leveraged to hook into Python’s frame evaluation API and
dynamically rewrite its bytecode into an FX Graph. The resulting FX Graph is then
polished before it is finally translated into an ONNX graph.
id: totrans-8
prefs: []
type: TYPE_NORMAL
zh: TorchDynamo 引擎被用来钩入 Python 的帧评估 API 并动态重写其字节码为 FX 图。然后,生成的 FX 图在最终转换为 ONNX 图之前被优化。
- en: The main advantage of this approach is that the [FX graph](https://pytorch.org/docs/stable/fx.html)
is captured using bytecode analysis that preserves the dynamic nature of the model
instead of using traditional static tracing techniques.
id: totrans-9
prefs: []
type: TYPE_NORMAL
zh: 这种方法的主要优势在于,[FX 图](https://pytorch.org/docs/stable/fx.html) 是通过保留模型的动态特性而不是使用传统的静态追踪技术来捕获的。
- en: '[Learn more about the TorchDynamo-based ONNX Exporter](onnx_dynamo.html)'
id: totrans-10
prefs: []
type: TYPE_NORMAL
zh: '[了解更多关于基于 TorchDynamo 的 ONNX 导出器](onnx_dynamo.html)'
- en: TorchScript-based ONNX Exporter[](#torchscript-based-onnx-exporter "Permalink
to this heading")
id: totrans-11
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 基于 TorchScript 的 ONNX 导出器[](#torchscript-based-onnx-exporter "跳转到此标题")
- en: '*The TorchScript-based ONNX exporter is available since PyTorch 1.2.0*'
id: totrans-12
prefs: []
type: TYPE_NORMAL
zh: '*基于 TorchScript 的 ONNX 导出器自 PyTorch 1.2.0 起可用*'
- en: '[TorchScript](https://pytorch.org/docs/stable/jit.html) is leveraged to trace
(through [`torch.jit.trace()`](generated/torch.jit.trace.html#torch.jit.trace
"torch.jit.trace")) the model and capture a static computation graph.'
id: totrans-13
prefs: []
type: TYPE_NORMAL
zh: '[TorchScript](https://pytorch.org/docs/stable/jit.html) 被利用来追踪(通过 [`torch.jit.trace()`](generated/torch.jit.trace.html#torch.jit.trace
"torch.jit.trace"))模型并捕获静态计算图。'
- en: 'As a consequence, the resulting graph has a couple limitations:'
id: totrans-14
prefs: []
type: TYPE_NORMAL
zh: 因此,生成的图有一些限制:
- en: It does not record any control-flow, like if-statements or loops;
id: totrans-15
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 它不记录任何控制流,比如 if 语句或循环;
- en: Does not handle nuances between `training` and `eval` mode;
id: totrans-16
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 不处理 `training` 和 `eval` 模式之间的细微差别;
- en: Does not truly handle dynamic inputs
id: totrans-17
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 不真正处理动态输入
- en: As an attempt to support the static tracing limitations, the exporter also supports
TorchScript scripting (through [`torch.jit.script()`](generated/torch.jit.script.html#torch.jit.script
"torch.jit.script")), which adds support for data-dependent control-flow, for
example. However, TorchScript itself is a subset of the Python language, so not
all features in Python are supported, such as in-place operations.
id: totrans-18
prefs: []
type: TYPE_NORMAL
zh: 为了弥补静态追踪的这些限制,导出器还支持 TorchScript 脚本化(通过 [`torch.jit.script()`](generated/torch.jit.script.html#torch.jit.script
"torch.jit.script")),例如增加了对数据相关控制流的支持。然而,TorchScript 本身是 Python 语言的一个子集,因此并不支持
Python 中的所有功能,比如原地操作。
- en: '[Learn more about the TorchScript-based ONNX Exporter](onnx_torchscript.html)'
id: totrans-19
prefs: []
type: TYPE_NORMAL
zh: '[了解更多关于基于 TorchScript 的 ONNX 导出器](onnx_torchscript.html)'
- en: Contributing / Developing
id: totrans-20
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 贡献 / 开发
- en: The ONNX exporter is a community project and we welcome contributions. We follow
the [PyTorch guidelines for contributions](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md),
but you might also be interested in reading our [development wiki](https://github.com/pytorch/pytorch/wiki/PyTorch-ONNX-exporter).
id: totrans-21
prefs: []
type: TYPE_NORMAL
zh: ONNX 导出器是一个社区项目,我们欢迎贡献。我们遵循 [PyTorch 的贡献指南](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md),但您可能也对阅读我们的
[开发维基](https://github.com/pytorch/pytorch/wiki/PyTorch-ONNX-exporter)感兴趣。
(This diff has been collapsed.)
- en: Complex Numbers
id: totrans-0
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: 复数
- en: 原文:[https://pytorch.org/docs/stable/complex_numbers.html](https://pytorch.org/docs/stable/complex_numbers.html)
id: totrans-1
prefs:
- PREF_BQ
type: TYPE_NORMAL
zh: 原文:[https://pytorch.org/docs/stable/complex_numbers.html](https://pytorch.org/docs/stable/complex_numbers.html)
- en: Complex numbers are numbers that can be expressed in the form $a + bj$, where
a and b are real numbers, and *j* is called the imaginary unit, which satisfies
the equation $j^2 = -1$. Complex numbers frequently occur in mathematics and engineering,
especially in topics like signal processing. Traditionally many users and libraries
(e.g., TorchAudio) have handled complex numbers by representing the data in float
tensors with shape $(..., 2)$ where the last dimension contains the real and imaginary
values.
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: 复数是可以用形式$a + bj$表示的数,其中a和b是实数,*j*称为虚数单位,满足方程$j^2 = -1$。复数在数学和工程中经常出现,特别是在信号处理等主题中。传统上,许多用户和库(例如TorchAudio)通过使用形状为$(..., 2)$的浮点张量来处理复数,其中最后一个维度包含实部和虚部值。
- en: Tensors of complex dtypes provide a more natural user experience while working
with complex numbers. Operations on complex tensors (e.g., [`torch.mv()`](generated/torch.mv.html#torch.mv
"torch.mv"), [`torch.matmul()`](generated/torch.matmul.html#torch.matmul "torch.matmul"))
@@ -23,145 +29,223 @@
mimicking them. Operations involving complex numbers in PyTorch are optimized
to use vectorized assembly instructions and specialized kernels (e.g. LAPACK,
cuBlas).
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: 复数dtype的张量在处理复数时提供更自然的用户体验。对复数张量的操作(例如[`torch.mv()`](generated/torch.mv.html#torch.mv
"torch.mv")、[`torch.matmul()`](generated/torch.matmul.html#torch.matmul "torch.matmul"))可能比在模拟它们的浮点张量上的操作更快速、更节省内存。PyTorch中涉及复数的操作经过优化,使用矢量化汇编指令和专门的内核(例如LAPACK、cuBlas)。
- en: Note
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: Spectral operations in the [torch.fft module](https://pytorch.org/docs/stable/fft.html#torch-fft)
support native complex tensors.
id: totrans-5
prefs: []
type: TYPE_NORMAL
zh: 在[torch.fft模块](https://pytorch.org/docs/stable/fft.html#torch-fft)中的频谱操作支持本机复数张量。
- en: Warning
id: totrans-6
prefs: []
type: TYPE_NORMAL
zh: 警告
- en: Complex tensors is a beta feature and subject to change.
id: totrans-7
prefs: []
type: TYPE_NORMAL
zh: 复数张量是一个测试功能,可能会发生变化。
- en: Creating Complex Tensors
id: totrans-8
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 创建复数张量
- en: 'We support two complex dtypes: torch.cfloat and torch.cdouble'
id: totrans-9
prefs: []
type: TYPE_NORMAL
zh: 我们支持两种复数dtype:torch.cfloat和torch.cdouble
- en: '[PRE0]'
id: totrans-10
prefs: []
type: TYPE_PRE
zh: '[PRE0]'
- en: Note
id: totrans-11
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: The default dtype for complex tensors is determined by the default floating
point dtype. If the default floating point dtype is torch.float64 then complex
numbers are inferred to have a dtype of torch.complex128, otherwise they are assumed
to have a dtype of torch.complex64.
id: totrans-12
prefs: []
type: TYPE_NORMAL
zh: 复数张量的默认dtype由默认浮点dtype确定。如果默认浮点dtype是torch.float64,则推断复数的dtype为torch.complex128,否则假定为torch.complex64。
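A minimal sketch of the dtype inference rule just described (values are illustrative):

```python
import torch

# Python complex literals infer torch.complex64 when the default floating
# point dtype is torch.float32 (the default), torch.complex128 when it is
# torch.float64.
x = torch.tensor([1.0 + 2.0j])
assert x.dtype == torch.complex64

# torch.cfloat and torch.cdouble are aliases of complex64 and complex128.
y = torch.zeros(3, dtype=torch.cdouble)
assert y.dtype == torch.complex128
```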
- en: All factory functions apart from [`torch.linspace()`](generated/torch.linspace.html#torch.linspace
"torch.linspace"), [`torch.logspace()`](generated/torch.logspace.html#torch.logspace
"torch.logspace"), and [`torch.arange()`](generated/torch.arange.html#torch.arange
"torch.arange") are supported for complex tensors.
id: totrans-13
prefs: []
type: TYPE_NORMAL
zh: 除了[`torch.linspace()`](generated/torch.linspace.html#torch.linspace "torch.linspace")、[`torch.logspace()`](generated/torch.logspace.html#torch.logspace
"torch.logspace")和[`torch.arange()`](generated/torch.arange.html#torch.arange
"torch.arange")之外的所有工厂函数都支持复数张量。
- en: Transition from the old representation[](#transition-from-the-old-representation
"Permalink to this heading")
id: totrans-14
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 从旧的表示形式过渡[](#transition-from-the-old-representation "跳转到此标题")
- en: Users who currently worked around the lack of complex tensors with real tensors
of shape $(..., 2)$ can easily switch to using complex tensors in their code using
[`torch.view_as_complex()`](generated/torch.view_as_complex.html#torch.view_as_complex
"torch.view_as_complex") and [`torch.view_as_real()`](generated/torch.view_as_real.html#torch.view_as_real
"torch.view_as_real"). Note that these functions don’t perform any copy and return
a view of the input tensor.
id: totrans-15
prefs: []
type: TYPE_NORMAL
zh: 目前通过使用形状为$(..., 2)$的实数张量来绕过缺少复数张量的用户,可以使用[`torch.view_as_complex()`](generated/torch.view_as_complex.html#torch.view_as_complex
"torch.view_as_complex")和[`torch.view_as_real()`](generated/torch.view_as_real.html#torch.view_as_real
"torch.view_as_real")轻松地切换到在代码中使用复数张量。请注意,这些函数不执行任何复制操作,而是返回输入张量的视图。
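The round-trip between the two representations can be sketched like this (the values are illustrative); because both functions return views, writes through one representation are visible through the other:

```python
import torch

# Old-style representation: a real tensor whose last dimension holds (real, imag).
pairs = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

z = torch.view_as_complex(pairs)   # complex tensor of shape (2,)
back = torch.view_as_real(z)       # shape (2, 2) again

# No copies are made: all three tensors share the same storage.
z.real[0] = 10.0
assert pairs[0, 0].item() == 10.0
```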
- en: '[PRE1]'
id: totrans-16
prefs: []
type: TYPE_PRE
zh: '[PRE1]'
- en: Accessing real and imag
id: totrans-17
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 访问real和imag
- en: The real and imaginary values of a complex tensor can be accessed using the
`real` and `imag` attributes.
id: totrans-18
prefs: []
type: TYPE_NORMAL
zh: 可以使用`real`和`imag`访问复数张量的实部和虚部值。
- en: Note
id: totrans-19
prefs: []
type: TYPE_NORMAL
zh: 注意
- en: Accessing real and imag attributes doesn’t allocate any memory, and in-place
updates on the real and imag tensors will update the original complex tensor.
Also, the returned real and imag tensors are not contiguous.
id: totrans-20
prefs: []
type: TYPE_NORMAL
zh: 访问real和imag属性不会分配任何内存,并且对real和imag张量的原位更新将更新原始复数张量。此外,返回的real和imag张量不是连续的。
- en: '[PRE2]'
id: totrans-21
prefs: []
type: TYPE_PRE
zh: '[PRE2]'
- en: Angle and abs
id: totrans-22
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 角度和绝对值
- en: The angle and absolute values of a complex tensor can be computed using [`torch.angle()`](generated/torch.angle.html#torch.angle
"torch.angle") and [`torch.abs()`](generated/torch.abs.html#torch.abs "torch.abs").
id: totrans-23
prefs: []
type: TYPE_NORMAL
zh: 可以使用[`torch.angle()`](generated/torch.angle.html#torch.angle "torch.angle")和[`torch.abs()`](generated/torch.abs.html#torch.abs
"torch.abs")计算复数张量的角度和绝对值。
- en: '[PRE3]'
id: totrans-24
prefs: []
type: TYPE_PRE
zh: '[PRE3]'
- en: Linear Algebra
id: totrans-25
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 线性代数
- en: Many linear algebra operations, like [`torch.matmul()`](generated/torch.matmul.html#torch.matmul
"torch.matmul"), [`torch.linalg.svd()`](generated/torch.linalg.svd.html#torch.linalg.svd
"torch.linalg.svd"), [`torch.linalg.solve()`](generated/torch.linalg.solve.html#torch.linalg.solve
"torch.linalg.solve") etc., support complex numbers. If you’d like to request
an operation we don’t currently support, please [search](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+complex)
if an issue has already been filed and if not, [file one](https://github.com/pytorch/pytorch/issues/new/choose).
id: totrans-26
prefs: []
type: TYPE_NORMAL
zh: 许多线性代数操作,如[`torch.matmul()`](generated/torch.matmul.html#torch.matmul "torch.matmul")、[`torch.linalg.svd()`](generated/torch.linalg.svd.html#torch.linalg.svd
"torch.linalg.svd")、[`torch.linalg.solve()`](generated/torch.linalg.solve.html#torch.linalg.solve
"torch.linalg.solve")等,支持复数。如果您想请求我们目前不支持的操作,请[搜索](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+complex)是否已经提交了问题,如果没有,请[提交一个](https://github.com/pytorch/pytorch/issues/new/choose)。
- en: Serialization
id: totrans-27
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 序列化
- en: Complex tensors can be serialized, allowing data to be saved as complex values.
id: totrans-28
prefs: []
type: TYPE_NORMAL
zh: 复数张量可以被序列化,允许数据保存为复数值。
- en: '[PRE4]'
id: totrans-29
prefs: []
type: TYPE_PRE
zh: '[PRE4]'
- en: Autograd
id: totrans-30
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 自动求导
- en: PyTorch supports autograd for complex tensors. The gradient computed is the
Conjugate Wirtinger derivative, the negative of which is precisely the direction
of steepest descent used in Gradient Descent algorithm. Thus, all the existing
optimizers work out of the box with complex parameters. For more details, check
out the note [Autograd for Complex Numbers](notes/autograd.html#complex-autograd-doc).
id: totrans-31
prefs: []
type: TYPE_NORMAL
zh: PyTorch支持复杂张量的自动求导。计算的梯度是共轭Wirtinger导数,其负值恰好是梯度下降算法中使用的最陡下降方向。因此,所有现有的优化器都可以直接与复杂参数一起使用。更多详情,请查看说明[复数的自动求导](notes/autograd.html#complex-autograd-doc)。
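A small sketch of autograd with a complex parameter (the loss function is illustrative; the loss passed to `backward()` must be real-valued):

```python
import torch

# A complex leaf tensor tracked by autograd.
z = torch.tensor(1.0 + 1.0j, requires_grad=True)

loss = (z * z.conj()).real   # |z|^2, a real-valued loss
loss.backward()
print(z.grad)                # the conjugate Wirtinger derivative of the loss

# Existing optimizers accept complex parameters out of the box.
opt = torch.optim.SGD([z], lr=0.1)
opt.step()                   # steepest descent on |z|^2 shrinks |z|
```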
- en: 'We do not fully support the following subsystems:'
id: totrans-32
prefs: []
type: TYPE_NORMAL
zh: 我们不完全支持以下子系统:
- en: Quantization
id: totrans-33
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 量化
- en: JIT
id: totrans-34
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 即时编译
- en: Sparse Tensors
id: totrans-35
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 稀疏张量
- en: Distributed
id: totrans-36
prefs:
- PREF_UL
type: TYPE_NORMAL
zh: 分布式
- en: If any of these would help your use case, please [search](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+complex)
if an issue has already been filed and if not, [file one](https://github.com/pytorch/pytorch/issues/new/choose).
id: totrans-37
prefs: []
type: TYPE_NORMAL
zh: 如果其中任何一个对您的用例有帮助,请[搜索](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+complex)是否已经提交了问题,如果没有,请[提交一个](https://github.com/pytorch/pytorch/issues/new/choose)。