OpenDocCN / pytorch-doc-zh
Commit d0822658
Authored on Feb 05, 2024 (2024-02-05 13:30:38) by 绝不原创的飞龙
Parent: 537658ff
Showing 10 changed files with 5748 additions and 116 deletions (+5748 −116)
totrans/doc22_028.yaml  +24 −0
totrans/doc22_029.yaml  +6 −0
totrans/doc22_030.yaml  +2 −0
totrans/doc22_031.yaml  +1715 −37
totrans/doc22_032.yaml  +828 −36
totrans/doc22_033.yaml  +438 −38
totrans/doc22_034.yaml  +2040 −5
totrans/doc22_035.yaml  +178 −0
totrans/doc22_036.yaml  +115 −0
totrans/doc22_037.yaml  +402 −0
totrans/doc22_028.yaml
@@ -3,29 +3,35 @@
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
  zh: C++
- en: 原文:[https://pytorch.org/docs/stable/cpp_index.html](https://pytorch.org/docs/stable/cpp_index.html)
  id: totrans-1
  prefs:
  - PREF_BQ
  type: TYPE_NORMAL
  zh: 原文:[https://pytorch.org/docs/stable/cpp_index.html](https://pytorch.org/docs/stable/cpp_index.html)
- en: Note
  id: totrans-2
  prefs: []
  type: TYPE_NORMAL
  zh: 注意
- en: If you are looking for the PyTorch C++ API docs, directly go [here](https://pytorch.org/cppdocs/).
  id: totrans-3
  prefs: []
  type: TYPE_NORMAL
  zh: 如果您正在寻找PyTorch C++ API文档,请直接前往[此处](https://pytorch.org/cppdocs/)。
- en: 'PyTorch provides several features for working with C++, and it’s best to choose
    from them based on your needs. At a high level, the following support is available:'
  id: totrans-4
  prefs: []
  type: TYPE_NORMAL
  zh: PyTorch提供了几个用于处理C++的功能,最好根据您的需求选择其中之一。在高层次上,以下支持可用:
- en: TorchScript C++ API
  id: totrans-5
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
  zh: TorchScript C++ API
- en: '[TorchScript](https://pytorch.org/docs/stable/jit.html) allows PyTorch models
    defined in Python to be serialized and then loaded and run in C++ capturing the
    model code via compilation or tracing its execution. You can learn more in the
...
...
@@ -37,27 +43,34 @@
  id: totrans-6
  prefs: []
  type: TYPE_NORMAL
  zh: '[TorchScript](https://pytorch.org/docs/stable/jit.html)允许将在Python中定义的PyTorch模型序列化,然后在C++中加载和运行,通过编译或跟踪其执行来捕获模型代码。您可以在[在C++中加载TorchScript模型](https://pytorch.org/tutorials/advanced/cpp_export.html)教程中了解更多信息。这意味着您可以尽可能在Python中定义模型,但随后通过TorchScript导出它们,以在生产或嵌入式环境中进行无Python执行。TorchScript
    C++ API用于与这些模型和TorchScript执行引擎进行交互,包括:'
- en: Loading serialized TorchScript models saved from Python
  id: totrans-7
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: 加载从Python保存的序列化TorchScript模型
- en: Doing simple model modifications if needed (e.g. pulling out submodules)
  id: totrans-8
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: 如果需要,进行简单的模型修改(例如,提取子模块)
- en: Constructing the input and doing preprocessing using C++ Tensor API
  id: totrans-9
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: 使用C++ Tensor API构建输入并进行预处理
- en: 'Extending PyTorch and TorchScript with C++ Extensions[](#extending-pytorch-and-torchscript-with-c-extensions
    "Permalink to this heading")'
  id: totrans-10
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
  zh: '使用C++扩展扩展PyTorch和TorchScript[](#extending-pytorch-and-torchscript-with-c-extensions
    "跳转到此标题的永久链接")'
- en: TorchScript can be augmented with user-supplied code through custom operators
    and custom classes. Once registered with TorchScript, these operators and classes
    can be invoked in TorchScript code run from Python or from C++ as part of a serialized
...
...
@@ -70,28 +83,33 @@
  id: totrans-11
  prefs: []
  type: TYPE_NORMAL
  zh: TorchScript可以通过自定义运算符和自定义类增强用户提供的代码。一旦在TorchScript中注册了这些运算符和类,这些运算符和类可以在Python中运行的TorchScript代码中被调用,或者作为序列化的TorchScript模型的一部分在C++中被调用。[使用自定义C++运算符扩展TorchScript](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html)教程介绍了如何将TorchScript与OpenCV进行接口。除了使用自定义运算符包装函数调用外,C++类和结构体还可以通过类似于pybind11的接口绑定到TorchScript中,这在[使用自定义C++类扩展TorchScript](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html)教程中有解释。
- en: Tensor and Autograd in C++
  id: totrans-12
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
  zh: 在C++中的Tensor和Autograd
- en: 'Most of the tensor and autograd operations in PyTorch Python API are also available
    in the C++ API. These include:'
  id: totrans-13
  prefs: []
  type: TYPE_NORMAL
  zh: PyTorch Python API中的大多数张量和自动求导操作也可在C++ API中使用。包括:
- en: '`torch::Tensor` methods such as `add` / `reshape` / `clone`. For the full list
    of methods available, please see: [https://pytorch.org/cppdocs/api/classat_1_1_tensor.html](https://pytorch.org/cppdocs/api/classat_1_1_tensor.html)'
  id: totrans-14
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '`torch::Tensor`方法,如`add` / `reshape` / `clone`。有关可用方法的完整列表,请参阅:[https://pytorch.org/cppdocs/api/classat_1_1_tensor.html](https://pytorch.org/cppdocs/api/classat_1_1_tensor.html)'
- en: 'C++ tensor indexing API that looks and behaves the same as the Python API.
    For details on its usage, please see: [https://pytorch.org/cppdocs/notes/tensor_indexing.html](https://pytorch.org/cppdocs/notes/tensor_indexing.html)'
  id: totrans-15
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: C++张量索引API,看起来和行为与Python API相同。有关其用法的详细信息,请参阅:[https://pytorch.org/cppdocs/notes/tensor_indexing.html](https://pytorch.org/cppdocs/notes/tensor_indexing.html)
- en: 'The tensor autograd APIs and the `torch::autograd` package that are crucial
    for building dynamic neural networks in C++ frontend. For more details, please
    see: [https://pytorch.org/tutorials/advanced/cpp_autograd.html](https://pytorch.org/tutorials/advanced/cpp_autograd.html)'
...
...
@@ -99,11 +117,13 @@
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: 在C++前端构建动态神经网络至关重要的张量自动求导API和`torch::autograd`包。有关更多详细信息,请参阅:[https://pytorch.org/tutorials/advanced/cpp_autograd.html](https://pytorch.org/tutorials/advanced/cpp_autograd.html)
- en: Authoring Models in C++
  id: totrans-17
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
  zh: 在C++中编写模型
- en: The “author in TorchScript, infer in C++” workflow requires model authoring
    to be done in TorchScript. However, there might be cases where the model has to
    be authored in C++ (e.g. in workflows where a Python component is undesirable).
...
...
@@ -113,17 +133,21 @@
  id: totrans-18
  prefs: []
  type: TYPE_NORMAL
  zh: '“在TorchScript中编写,使用C++进行推断”的工作流程要求在TorchScript中进行模型编写。但是,可能存在必须在C++中编写模型的情况(例如,在不希望使用Python组件的工作流程中)。为了满足这种用例,我们提供了在C++中完全编写和训练神经网络模型的完整功能,其中包括`torch::nn`
    / `torch::nn::functional` / `torch::optim`等熟悉的组件,这些组件与Python API非常相似。'
- en: 'For an overview of the PyTorch C++ model authoring and training API, please
    see: [https://pytorch.org/cppdocs/frontend.html](https://pytorch.org/cppdocs/frontend.html)'
  id: totrans-19
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: 有关PyTorch C++模型编写和训练API的概述,请参阅:[https://pytorch.org/cppdocs/frontend.html](https://pytorch.org/cppdocs/frontend.html)
- en: 'For a detailed tutorial on how to use the API, please see: [https://pytorch.org/tutorials/advanced/cpp_frontend.html](https://pytorch.org/tutorials/advanced/cpp_frontend.html)'
  id: totrans-20
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: 有关如何使用API的详细教程,请参阅:[https://pytorch.org/tutorials/advanced/cpp_frontend.html](https://pytorch.org/tutorials/advanced/cpp_frontend.html)
- en: 'Docs for components such as `torch::nn` / `torch::nn::functional` / `torch::optim`
    can be found at: [https://pytorch.org/cppdocs/api/library_root.html](https://pytorch.org/cppdocs/api/library_root.html)'
  id: totrans-21
...
...
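The workflow described in doc22_028.yaml above (author in Python, serialize with TorchScript, load from C++ via `torch::jit::load`) can be sketched from the Python side. This is a minimal illustration using the public `torch.jit` API; the `Scale` module is a hypothetical stand-in, not something from the document:

```python
import os
import tempfile

import torch


class Scale(torch.nn.Module):
    """A tiny stand-in model; any nn.Module works the same way."""

    def __init__(self, factor: float):
        super().__init__()
        self.factor = factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.factor


# Compile the module to TorchScript and serialize it; the resulting
# file is what the TorchScript C++ API would load on the other side.
scripted = torch.jit.script(Scale(2.0))
path = os.path.join(tempfile.mkdtemp(), "scale.pt")
scripted.save(path)

# Python-side check of the round trip; in production this load
# would happen in C++ for Python-free execution.
reloaded = torch.jit.load(path)
x = torch.ones(3)
assert torch.equal(reloaded(x), torch.full((3,), 2.0))
```

The same serialized file works unchanged in both runtimes, which is what lets the model be authored where Python is convenient and deployed where it is not.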
totrans/doc22_029.yaml
- en: torch::deploy has been moved to pytorch/multipy
  id: totrans-0
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
  zh: torch::deploy 已经迁移到 pytorch/multipy
- en: 原文:[https://pytorch.org/docs/stable/deploy.html](https://pytorch.org/docs/stable/deploy.html)
  id: totrans-1
  prefs:
  - PREF_BQ
  type: TYPE_NORMAL
  zh: 原文:[https://pytorch.org/docs/stable/deploy.html](https://pytorch.org/docs/stable/deploy.html)
- en: '`torch::deploy` has been moved to its new home at [https://github.com/pytorch/multipy](https://github.com/pytorch/multipy).'
  id: totrans-2
  prefs: []
  type: TYPE_NORMAL
  zh: '`torch::deploy` 已经迁移到了它的新家 [https://github.com/pytorch/multipy](https://github.com/pytorch/multipy)。'
totrans/doc22_030.yaml
- en: Python API
  id: totrans-0
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
  zh: Python API
totrans/doc22_031.yaml
This source diff could not be displayed because it is too large. You can view the blob instead.
totrans/doc22_032.yaml
This diff is collapsed. Click to expand it.
totrans/doc22_033.yaml
This diff is collapsed. Click to expand it.
totrans/doc22_034.yaml
This diff is collapsed. Click to expand it.
totrans/doc22_035.yaml
This diff is collapsed. Click to expand it.
totrans/doc22_036.yaml
- en: Tensor Views
  id: totrans-0
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
  zh: Tensor Views
- en: 原文:[https://pytorch.org/docs/stable/tensor_view.html](https://pytorch.org/docs/stable/tensor_view.html)
  id: totrans-1
  prefs:
  - PREF_BQ
  type: TYPE_NORMAL
  zh: 原文:[https://pytorch.org/docs/stable/tensor_view.html](https://pytorch.org/docs/stable/tensor_view.html)
- en: PyTorch allows a tensor to be a `View` of an existing tensor. View tensor shares
    the same underlying data with its base tensor. Supporting `View` avoids explicit
    data copy, thus allows us to do fast and memory efficient reshaping, slicing and
    element-wise operations.
  id: totrans-2
  prefs: []
  type: TYPE_NORMAL
  zh: PyTorch 允许一个张量是现有张量的 `View`。视图张量与其基本张量共享相同的基础数据。支持 `View` 避免了显式数据复制,因此允许我们进行快速和内存高效的重塑、切片和逐元素操作。
- en: For example, to get a view of an existing tensor `t`, you can call `t.view(...)`.
  id: totrans-3
  prefs: []
  type: TYPE_NORMAL
  zh: 例如,要获取现有张量 `t` 的视图,可以调用 `t.view(...)`。
- en: '[PRE0]'
  id: totrans-4
  prefs: []
  type: TYPE_PRE
  zh: '[PRE0]'
- en: Since views share underlying data with its base tensor, if you edit the data
    in the view, it will be reflected in the base tensor as well.
  id: totrans-5
  prefs: []
  type: TYPE_NORMAL
  zh: 由于视图与其基本张量共享基础数据,如果在视图中编辑数据,将会反映在基本张量中。
- en: Typically a PyTorch op returns a new tensor as output, e.g. [`add()`](generated/torch.Tensor.add.html#torch.Tensor.add
    "torch.Tensor.add"). But in case of view ops, outputs are views of input tensors
    to avoid unnecessary data copy. No data movement occurs when creating a view,
...
...
@@ -30,198 +42,301 @@
    pay additional attention as contiguity might have implicit performance impact.
    [`transpose()`](generated/torch.Tensor.transpose.html#torch.Tensor.transpose "torch.Tensor.transpose")
    is a common example.
  id: totrans-6
  prefs: []
  type: TYPE_NORMAL
  zh: '通常,PyTorch 操作会返回一个新的张量作为输出,例如 [`add()`](generated/torch.Tensor.add.html#torch.Tensor.add
    "torch.Tensor.add")。但是在视图操作中,输出是输入张量的视图,以避免不必要的数据复制。创建视图时不会发生数据移动,视图张量只是改变了解释相同数据的方式。对连续张量进行视图操作可能会产生非连续张量。用户应额外注意,因为连续性可能会对性能产生隐含影响。[`transpose()`](generated/torch.Tensor.transpose.html#torch.Tensor.transpose
    "torch.Tensor.transpose") 是一个常见示例。'
- en: '[PRE1]'
  id: totrans-7
  prefs: []
  type: TYPE_PRE
  zh: '[PRE1]'
- en: 'For reference, here’s a full list of view ops in PyTorch:'
  id: totrans-8
  prefs: []
  type: TYPE_NORMAL
  zh: 作为参考,以下是 PyTorch 中所有视图操作的完整列表:
- en: 'Basic slicing and indexing op, e.g. `tensor[0, 2:, 1:7:2]` returns a view of
    base `tensor`, see note below.'
  id: totrans-9
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '基本的切片和索引操作,例如 `tensor[0, 2:, 1:7:2]` 返回基本 `tensor` 的视图,请参见下面的说明。'
- en: '[`adjoint()`](generated/torch.Tensor.adjoint.html#torch.Tensor.adjoint "torch.Tensor.adjoint")'
  id: totrans-10
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`adjoint()`](generated/torch.Tensor.adjoint.html#torch.Tensor.adjoint "torch.Tensor.adjoint")'
- en: '[`as_strided()`](generated/torch.Tensor.as_strided.html#torch.Tensor.as_strided "torch.Tensor.as_strided")'
  id: totrans-11
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`as_strided()`](generated/torch.Tensor.as_strided.html#torch.Tensor.as_strided "torch.Tensor.as_strided")'
- en: '[`detach()`](generated/torch.Tensor.detach.html#torch.Tensor.detach "torch.Tensor.detach")'
  id: totrans-12
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`detach()`](generated/torch.Tensor.detach.html#torch.Tensor.detach "torch.Tensor.detach")'
- en: '[`diagonal()`](generated/torch.Tensor.diagonal.html#torch.Tensor.diagonal "torch.Tensor.diagonal")'
  id: totrans-13
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`diagonal()`](generated/torch.Tensor.diagonal.html#torch.Tensor.diagonal "torch.Tensor.diagonal")'
- en: '[`expand()`](generated/torch.Tensor.expand.html#torch.Tensor.expand "torch.Tensor.expand")'
  id: totrans-14
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`expand()`](generated/torch.Tensor.expand.html#torch.Tensor.expand "torch.Tensor.expand")'
- en: '[`expand_as()`](generated/torch.Tensor.expand_as.html#torch.Tensor.expand_as "torch.Tensor.expand_as")'
  id: totrans-15
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`expand_as()`](generated/torch.Tensor.expand_as.html#torch.Tensor.expand_as "torch.Tensor.expand_as")'
- en: '[`movedim()`](generated/torch.Tensor.movedim.html#torch.Tensor.movedim "torch.Tensor.movedim")'
  id: totrans-16
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`movedim()`](generated/torch.Tensor.movedim.html#torch.Tensor.movedim "torch.Tensor.movedim")'
- en: '[`narrow()`](generated/torch.Tensor.narrow.html#torch.Tensor.narrow "torch.Tensor.narrow")'
  id: totrans-17
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`narrow()`](generated/torch.Tensor.narrow.html#torch.Tensor.narrow "torch.Tensor.narrow")'
- en: '[`permute()`](generated/torch.Tensor.permute.html#torch.Tensor.permute "torch.Tensor.permute")'
  id: totrans-18
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`permute()`](generated/torch.Tensor.permute.html#torch.Tensor.permute "torch.Tensor.permute")'
- en: '[`select()`](generated/torch.Tensor.select.html#torch.Tensor.select "torch.Tensor.select")'
  id: totrans-19
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`select()`](generated/torch.Tensor.select.html#torch.Tensor.select "torch.Tensor.select")'
- en: '[`squeeze()`](generated/torch.Tensor.squeeze.html#torch.Tensor.squeeze "torch.Tensor.squeeze")'
  id: totrans-20
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`squeeze()`](generated/torch.Tensor.squeeze.html#torch.Tensor.squeeze "torch.Tensor.squeeze")'
- en: '[`transpose()`](generated/torch.Tensor.transpose.html#torch.Tensor.transpose "torch.Tensor.transpose")'
  id: totrans-21
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`transpose()`](generated/torch.Tensor.transpose.html#torch.Tensor.transpose "torch.Tensor.transpose")'
- en: '[`t()`](generated/torch.Tensor.t.html#torch.Tensor.t "torch.Tensor.t")'
  id: totrans-22
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`t()`](generated/torch.Tensor.t.html#torch.Tensor.t "torch.Tensor.t")'
- en: '[`T`](tensors.html#torch.Tensor.T "torch.Tensor.T")'
  id: totrans-23
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`T`](tensors.html#torch.Tensor.T "torch.Tensor.T")'
- en: '[`H`](tensors.html#torch.Tensor.H "torch.Tensor.H")'
  id: totrans-24
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`H`](tensors.html#torch.Tensor.H "torch.Tensor.H")'
- en: '[`mT`](tensors.html#torch.Tensor.mT "torch.Tensor.mT")'
  id: totrans-25
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`mT`](tensors.html#torch.Tensor.mT "torch.Tensor.mT")'
- en: '[`mH`](tensors.html#torch.Tensor.mH "torch.Tensor.mH")'
  id: totrans-26
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`mH`](tensors.html#torch.Tensor.mH "torch.Tensor.mH")'
- en: '[`real`](generated/torch.Tensor.real.html#torch.Tensor.real "torch.Tensor.real")'
  id: totrans-27
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`real`](generated/torch.Tensor.real.html#torch.Tensor.real "torch.Tensor.real")'
- en: '[`imag`](generated/torch.Tensor.imag.html#torch.Tensor.imag "torch.Tensor.imag")'
  id: totrans-28
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`imag`](generated/torch.Tensor.imag.html#torch.Tensor.imag "torch.Tensor.imag")'
- en: '`view_as_real()`'
  id: totrans-29
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '`view_as_real()`'
- en: '[`unflatten()`](generated/torch.Tensor.unflatten.html#torch.Tensor.unflatten "torch.Tensor.unflatten")'
  id: totrans-30
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`unflatten()`](generated/torch.Tensor.unflatten.html#torch.Tensor.unflatten "torch.Tensor.unflatten")'
- en: '[`unfold()`](generated/torch.Tensor.unfold.html#torch.Tensor.unfold "torch.Tensor.unfold")'
  id: totrans-31
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`unfold()`](generated/torch.Tensor.unfold.html#torch.Tensor.unfold "torch.Tensor.unfold")'
- en: '[`unsqueeze()`](generated/torch.Tensor.unsqueeze.html#torch.Tensor.unsqueeze "torch.Tensor.unsqueeze")'
  id: totrans-32
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`unsqueeze()`](generated/torch.Tensor.unsqueeze.html#torch.Tensor.unsqueeze "torch.Tensor.unsqueeze")'
- en: '[`view()`](generated/torch.Tensor.view.html#torch.Tensor.view "torch.Tensor.view")'
  id: totrans-33
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`view()`](generated/torch.Tensor.view.html#torch.Tensor.view "torch.Tensor.view")'
- en: '[`view_as()`](generated/torch.Tensor.view_as.html#torch.Tensor.view_as "torch.Tensor.view_as")'
  id: totrans-34
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`view_as()`](generated/torch.Tensor.view_as.html#torch.Tensor.view_as "torch.Tensor.view_as")'
- en: '[`unbind()`](generated/torch.Tensor.unbind.html#torch.Tensor.unbind "torch.Tensor.unbind")'
  id: totrans-35
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`unbind()`](generated/torch.Tensor.unbind.html#torch.Tensor.unbind "torch.Tensor.unbind")'
- en: '[`split()`](generated/torch.Tensor.split.html#torch.Tensor.split "torch.Tensor.split")'
  id: totrans-36
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`split()`](generated/torch.Tensor.split.html#torch.Tensor.split "torch.Tensor.split")'
- en: '[`hsplit()`](generated/torch.Tensor.hsplit.html#torch.Tensor.hsplit "torch.Tensor.hsplit")'
  id: totrans-37
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`hsplit()`](generated/torch.Tensor.hsplit.html#torch.Tensor.hsplit "torch.Tensor.hsplit")'
- en: '[`vsplit()`](generated/torch.Tensor.vsplit.html#torch.Tensor.vsplit "torch.Tensor.vsplit")'
  id: totrans-38
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`vsplit()`](generated/torch.Tensor.vsplit.html#torch.Tensor.vsplit "torch.Tensor.vsplit")'
- en: '[`tensor_split()`](generated/torch.Tensor.tensor_split.html#torch.Tensor.tensor_split "torch.Tensor.tensor_split")'
  id: totrans-39
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`tensor_split()`](generated/torch.Tensor.tensor_split.html#torch.Tensor.tensor_split "torch.Tensor.tensor_split")'
- en: '`split_with_sizes()`'
  id: totrans-40
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '`split_with_sizes()`'
- en: '[`swapaxes()`](generated/torch.Tensor.swapaxes.html#torch.Tensor.swapaxes "torch.Tensor.swapaxes")'
  id: totrans-41
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`swapaxes()`](generated/torch.Tensor.swapaxes.html#torch.Tensor.swapaxes "torch.Tensor.swapaxes")'
- en: '[`swapdims()`](generated/torch.Tensor.swapdims.html#torch.Tensor.swapdims "torch.Tensor.swapdims")'
  id: totrans-42
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`swapdims()`](generated/torch.Tensor.swapdims.html#torch.Tensor.swapdims "torch.Tensor.swapdims")'
- en: '[`chunk()`](generated/torch.Tensor.chunk.html#torch.Tensor.chunk "torch.Tensor.chunk")'
  id: totrans-43
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`chunk()`](generated/torch.Tensor.chunk.html#torch.Tensor.chunk "torch.Tensor.chunk")'
- en: '[`indices()`](generated/torch.Tensor.indices.html#torch.Tensor.indices "torch.Tensor.indices")
    (sparse tensor only)'
  id: totrans-44
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`indices()`](generated/torch.Tensor.indices.html#torch.Tensor.indices "torch.Tensor.indices")(仅适用于稀疏张量)'
- en: '[`values()`](generated/torch.Tensor.values.html#torch.Tensor.values "torch.Tensor.values")
    (sparse tensor only)'
  id: totrans-45
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`values()`](generated/torch.Tensor.values.html#torch.Tensor.values "torch.Tensor.values")(仅适用于稀疏张量)'
- en: Note
  id: totrans-46
  prefs: []
  type: TYPE_NORMAL
  zh: 注意
- en: When accessing the contents of a tensor via indexing, PyTorch follows Numpy
    behaviors that basic indexing returns views, while advanced indexing returns a
    copy. Assignment via either basic or advanced indexing is in-place. See more examples
    in [Numpy indexing documentation](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html).
  id: totrans-47
  prefs: []
  type: TYPE_NORMAL
  zh: 当通过索引访问张量的内容时,PyTorch遵循Numpy的行为,基本索引返回视图,而高级索引返回副本。通过基本或高级索引进行赋值是原地的。在[Numpy索引文档](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html)中查看更多示例。
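The indexing note above (basic indexing returns views, advanced indexing returns copies, assignment through either is in-place) can be verified directly with the torch Python API; a minimal sketch:

```python
import torch

t = torch.arange(10)

basic = t[2:7:2]           # basic slicing -> a view of t
basic[0] = 99
assert t[2] == 99           # the edit is visible through the base tensor

advanced = t[torch.tensor([1, 3, 5])]  # advanced indexing -> a copy
advanced[0] = -1
assert t[1] == 1            # the base tensor is untouched

t[[1, 3]] = 7               # assignment via advanced indexing is in-place
assert t[1] == 7 and t[3] == 7
```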
- en: 'It’s also worth mentioning a few ops with special behaviors:'
  id: totrans-48
  prefs: []
  type: TYPE_NORMAL
  zh: 还值得一提的是一些具有特殊行为的操作:
- en: '[`reshape()`](generated/torch.Tensor.reshape.html#torch.Tensor.reshape "torch.Tensor.reshape"),
    [`reshape_as()`](generated/torch.Tensor.reshape_as.html#torch.Tensor.reshape_as
    "torch.Tensor.reshape_as") and [`flatten()`](generated/torch.Tensor.flatten.html#torch.Tensor.flatten
    "torch.Tensor.flatten") can return either a view or new tensor, user code shouldn’t
    rely on whether it’s view or not.'
  id: totrans-49
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`reshape()`](generated/torch.Tensor.reshape.html#torch.Tensor.reshape "torch.Tensor.reshape")、[`reshape_as()`](generated/torch.Tensor.reshape_as.html#torch.Tensor.reshape_as
    "torch.Tensor.reshape_as")和[`flatten()`](generated/torch.Tensor.flatten.html#torch.Tensor.flatten
    "torch.Tensor.flatten")可能返回视图或新张量,用户代码不应该依赖于它是视图还是不是。'
- en: '[`contiguous()`](generated/torch.Tensor.contiguous.html#torch.Tensor.contiguous
    "torch.Tensor.contiguous") returns **itself** if input tensor is already contiguous,
    otherwise it returns a new contiguous tensor by copying data.'
  id: totrans-50
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
  zh: '[`contiguous()`](generated/torch.Tensor.contiguous.html#torch.Tensor.contiguous
    "torch.Tensor.contiguous")如果输入张量已经是连续的,则返回**自身**,否则通过复制数据返回一个新的连续张量。'
- en: For a more detailed walk-through of PyTorch internal implementation, please
    refer to [ezyang’s blogpost about PyTorch Internals](http://blog.ezyang.com/2019/05/pytorch-internals/).
  id: totrans-51
  prefs: []
  type: TYPE_NORMAL
  zh: 有关PyTorch内部实现的更详细介绍,请参考[ezyang关于PyTorch内部的博文](http://blog.ezyang.com/2019/05/pytorch-internals/)。
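The behaviors doc22_036.yaml describes — views sharing storage with their base tensor, `transpose()` producing non-contiguous views, and `contiguous()` copying only when needed — can be sketched with the torch Python API:

```python
import torch

base = torch.zeros(2, 3)
v = base.view(6)                  # reinterpret the same storage as shape (6,)
v[0] = 1.0
assert base[0, 0] == 1.0          # views share data with their base tensor
assert v.data_ptr() == base.data_ptr()

tr = base.transpose(0, 1)         # still a view, but no longer contiguous
assert not tr.is_contiguous()

c = tr.contiguous()               # copies data into a fresh contiguous tensor
assert c.is_contiguous() and c.data_ptr() != tr.data_ptr()
assert base.contiguous() is base  # already-contiguous input returns itself
```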
totrans/doc22_037.yaml
This diff is collapsed. Click to expand it.