Unverified · Commit 3dadc5eb authored by Chen Long, committed by GitHub

Fix docs (#2670)

* fix some docs

* fix links

* fix sample code
Parent 619e0a45
.. _cn_user_guide_broadcasting:

==================
Broadcasting
==================

Like other frameworks, PaddlePaddle (hereafter "Paddle") provides a number of APIs that support broadcasting, which allows tensors with different shapes to be used together in certain operations.
In general, when one tensor has a smaller shape and the other a larger shape, we want to reuse the smaller tensor several times to operate on the larger one; conceptually, the smaller tensor's shape is first expanded to match the larger tensor's shape, and then the operation is performed.
Note that no data of the smaller tensor is copied during this process.

Paddle's broadcasting mechanism follows these rules (see `NumPy broadcasting <https://numpy.org/doc/stable/user/basics.broadcasting.html#module-numpy.doc.broadcasting>`_ ):

1. Each tensor has at least one dimension.
2. Comparing the two tensors' shapes from the trailing dimension forward, at each position the dimension sizes must be equal, or one of them must be 1, or one of them must not exist.

For example:

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 3, 4), np.float32))
    y = paddle.to_tensor(np.ones((2, 3, 4), np.float32))
    # The two tensors have identical shapes, so they are broadcastable.

    x = paddle.to_tensor(np.ones((2, 3, 1, 5), np.float32))
    y = paddle.to_tensor(np.ones((3, 4, 1), np.float32))
    # Compare from the trailing dimension forward:
    # 1st comparison: y's dimension is 1
    # 2nd comparison: x's dimension is 1
    # 3rd comparison: x's and y's dimensions are equal
    # 4th comparison: y's dimension does not exist
    # So x and y are broadcastable.

    # In contrast:
    x = paddle.to_tensor(np.ones((2, 3, 4), np.float32))
    y = paddle.to_tensor(np.ones((2, 3, 6), np.float32))
    # x and y are not broadcastable, because the first comparison fails: 4 is not equal to 6.
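
As a quick check, actually performing an addition on the broadcastable pair above confirms the rules. This is only a small continuation of the example; the resulting shape is explained by the rules in the next paragraph.

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 3, 1, 5), np.float32))
    y = paddle.to_tensor(np.ones((3, 4, 1), np.float32))
    # The shapes are compatible under the rules above, so the addition succeeds.
    z = x + y
    print(z.shape)
    # [2, 3, 4, 5]
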

Now that we know when two tensors are broadcastable, the shape of the tensor produced by a broadcast operation is computed as follows:

1. If the two tensors' shapes have different lengths, prepend 1s to the shape of the tensor with fewer dimensions until both shapes have the same length.
2. Once the shapes have the same length, the size of each result dimension is the larger of the two sizes in that dimension.

For example (a small helper that applies these two rules is sketched after the example):

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 1, 4), np.float32))
    y = paddle.to_tensor(np.ones((3, 1), np.float32))
    z = x + y
    print(z.shape)
    # z's shape: [2, 3, 4]

    x = paddle.to_tensor(np.ones((2, 1, 4), np.float32))
    y = paddle.to_tensor(np.ones((3, 2), np.float32))
    z = x + y
    print(z.shape)
    # InvalidArgumentError: Broadcast dimension mismatch.
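
To make the shape rules above concrete, the following is a minimal, framework-independent sketch in plain Python. The helper name ``broadcast_result_shape`` is illustrative only and is not a Paddle API.

.. code-block:: python

    def broadcast_result_shape(shape_x, shape_y):
        """Illustrative only: apply the two rules above to a pair of shapes."""
        # Rule 1: prepend 1s to the shorter shape until both have the same length.
        n = max(len(shape_x), len(shape_y))
        shape_x = (1,) * (n - len(shape_x)) + tuple(shape_x)
        shape_y = (1,) * (n - len(shape_y)) + tuple(shape_y)

        result = []
        for dx, dy in zip(shape_x, shape_y):
            if dx == dy or dx == 1 or dy == 1:
                # Rule 2: the result dimension is the larger of the two.
                result.append(max(dx, dy))
            else:
                raise ValueError(
                    "Broadcast dimension mismatch: {} vs {}".format(dx, dy))
        return result

    print(broadcast_result_shape((2, 1, 4), (3, 1)))  # [2, 3, 4]
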

In addition, Paddle's elementwise family of APIs adds an axis parameter to the broadcasting mechanism. When a tensor y with a smaller shape is matched against a tensor x with a larger shape, and the length of y's shape is smaller than the length of x's shape, axis specifies the dimension of x at which broadcasting of y starts.
Once axis is set, the dimensions are compared starting at axis and proceeding from front to back. When axis=-1, axis is taken to be rank(x) - rank(y), and trailing dimensions of y with size 1 are ignored.

For example (a rough equivalent written with an explicit reshape is sketched after the example):

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 1, 4), np.float32))
    y = paddle.to_tensor(np.ones((3, 1), np.float32))
    z = paddle.elementwise_add(x, y, axis=1)
    # z's shape: [2, 3, 4]

    x = paddle.to_tensor(np.ones((2, 3, 4, 5), np.float32))
    y = paddle.to_tensor(np.ones((4, 5), np.float32))
    z = paddle.elementwise_add(x, y, axis=1)
    print(z.shape)
    # InvalidArgumentError: Broadcast dimension mismatch.
    # Once axis is specified, broadcasting compares dimensions starting at axis, from front to back.

    x = paddle.to_tensor(np.ones((2, 3, 4, 5), np.float32))
    y = paddle.to_tensor(np.ones((3,), np.float32))
    z = paddle.elementwise_add(x, y, axis=1)
    print(z.shape)
    # z's shape: [2, 3, 4, 5]
    # Dimensions are compared starting at axis=1, from front to back, and then broadcast.
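
The effect of axis in the last case can be pictured as aligning y at the given dimension of x and padding the remaining dimensions with 1, after which ordinary trailing-aligned broadcasting applies. The sketch below only illustrates this intuition for the last example above; it is not the official definition of axis.

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 3, 4, 5), np.float32))
    y = paddle.to_tensor(np.ones((3,), np.float32))

    # Align y at dimension 1 of x and pad the other dimensions with 1, so that
    # ordinary broadcasting gives the same result as elementwise_add(x, y, axis=1).
    z = x + paddle.reshape(y, [1, 3, 1, 1])
    print(z.shape)
    # [2, 3, 4, 5]
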

.. _user_guide_broadcasting:

==================
Broadcasting
==================

PaddlePaddle provides broadcasting semantics in some of its APIs, like other deep learning frameworks, which allows tensors with different shapes to be used in an operation.
In general, broadcasting is the rule for how the smaller tensor is "broadcast" across the larger tensor so that they end up with compatible shapes.
Note that no copies are made while broadcasting.

In PaddlePaddle, two tensors are broadcastable when the following rules hold (ref: `NumPy Broadcasting <https://numpy.org/doc/stable/user/basics.broadcasting.html#module-numpy.doc.broadcasting>`_ ):

1. each tensor has at least one dimension;
2. when comparing their shapes element-wise from the trailing dimension forward, two dimensions are compatible when they are equal, one of them is 1, or one of them does not exist.

For example:

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 3, 4), np.float32))
    y = paddle.to_tensor(np.ones((2, 3, 4), np.float32))
    # Two tensors with the same shape are broadcastable.

    x = paddle.to_tensor(np.ones((2, 3, 1, 5), np.float32))
    y = paddle.to_tensor(np.ones((3, 4, 1), np.float32))
    # Compare from the trailing dimension forward:
    # 1st step: y's dimension is 1
    # 2nd step: x's dimension is 1
    # 3rd step: the two dimensions are equal
    # 4th step: y's dimension does not exist
    # So x and y are broadcastable.

    # In contrast:
    x = paddle.to_tensor(np.ones((2, 3, 4), np.float32))
    y = paddle.to_tensor(np.ones((2, 3, 6), np.float32))
    # x and y are not broadcastable, because in the first step from the tail
    # x's dimension 4 is not equal to y's dimension 6.

Now that we know when two tensors are broadcastable, the shape of the resulting tensor is computed as follows:

1. If the numbers of dimensions of x and y are not equal, prepend 1s to the shape of the tensor with fewer dimensions until both shapes have the same length.
2. Then, for each dimension, the resulting dimension size is the larger of the sizes of x and y along that dimension.

For example (a quick NumPy-based check of result shapes is sketched after the example):

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 1, 4), np.float32))
    y = paddle.to_tensor(np.ones((3, 1), np.float32))
    z = x + y
    print(z.shape)
    # z's shape: [2, 3, 4]

    x = paddle.to_tensor(np.ones((2, 1, 4), np.float32))
    y = paddle.to_tensor(np.ones((3, 2), np.float32))
    z = x + y
    print(z.shape)
    # InvalidArgumentError: Broadcast dimension mismatch.
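
Since these rules mirror NumPy's broadcasting rules, ``numpy.broadcast`` can be used as a quick, Paddle-independent way to predict whether two shapes are compatible and what the result shape will be. This is only a verification aid, not part of Paddle's API.

.. code-block:: python

    import numpy as np

    # Compatible shapes: numpy reports the broadcast result shape.
    print(np.broadcast(np.empty((2, 1, 4)), np.empty((3, 1))).shape)
    # (2, 3, 4)

    # Incompatible shapes raise a ValueError in numpy, analogous to
    # Paddle's "Broadcast dimension mismatch" error.
    try:
        np.broadcast(np.empty((2, 1, 4)), np.empty((3, 2)))
    except ValueError as err:
        print(err)
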

In addition, an axis argument is introduced into PaddlePaddle's broadcasting semantics for the elementwise APIs. When a smaller tensor y is broadcast against a larger tensor x, and y has fewer dimensions than x, axis specifies the dimension of x at which broadcasting starts.
In this case, the comparison of dimensions runs from front to back, starting at axis. When axis=-1, axis is taken to be rank(x) - rank(y), and trailing dimensions of y with size 1 are ignored.

For example (a sketch of the default axis=-1 behavior follows the example):

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 1, 4), np.float32))
    y = paddle.to_tensor(np.ones((3, 1), np.float32))
    z = paddle.elementwise_add(x, y, axis=1)
    # z's shape: [2, 3, 4]

    x = paddle.to_tensor(np.ones((2, 3, 4, 5), np.float32))
    y = paddle.to_tensor(np.ones((4, 5), np.float32))
    z = paddle.elementwise_add(x, y, axis=1)
    print(z.shape)
    # InvalidArgumentError: Broadcast dimension mismatch.
    # When axis is specified, the comparison between dimensions starts at axis.

    x = paddle.to_tensor(np.ones((2, 3, 4, 5), np.float32))
    y = paddle.to_tensor(np.ones((3,), np.float32))
    z = paddle.elementwise_add(x, y, axis=1)
    print(z.shape)
    # z's shape: [2, 3, 4, 5]
    # The comparison starts at axis=1 and proceeds from front to back.
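
The failing case above works when axis is left at its default of -1, because axis then becomes rank(x) - rank(y) = 2 and y is aligned with x's trailing dimensions, exactly as in plain broadcasting. A minimal sketch, assuming the default behavior described above:

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()

    x = paddle.to_tensor(np.ones((2, 3, 4, 5), np.float32))
    y = paddle.to_tensor(np.ones((4, 5), np.float32))

    # With axis=-1, the starting dimension is rank(x) - rank(y) = 2, so
    # y's shape (4, 5) is matched against x's trailing dimensions (4, 5).
    z = paddle.elementwise_add(x, y, axis=-1)
    print(z.shape)
    # [2, 3, 4, 5]
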
......@@ -8,7 +8,8 @@
Let's start with learning the basic concepts of PaddlePaddle:
- `Introduction to Tensor <tensor_introduction.html>`_ : how data is represented in Paddle; an introduction to the Tensor concept,
- `Introduction to Tensor <./tensor_introduction_cn.html>`_ : how data is represented in Paddle; an introduction to the Tensor concept.
- `Broadcasting in Paddle <./broadcasting_cn.html>`_ : an introduction to broadcasting in Paddle.
- `Paddle 2.0beta upgrade guide <./upgrade_guide_cn.html>`_: the main changes in the open-source Paddle framework 2.0beta and how to upgrade.
- `Version migration tool <./migration_cn.html>`_: how to use the paddle1to2 conversion tool.
- `Dynamic graph to static graph <./dygraph_to_static/index_cn.html>`_: how to convert Paddle dynamic graph code to static graph.
......@@ -18,7 +19,8 @@
.. toctree::
:hidden:
tensor_introduction.md
tensor_introduction_cn.md
broadcasting_cn.rst
upgrade_guide_cn.md
migration_cn.rst
dygraph_to_static/index_cn.rst
......
......@@ -10,6 +10,7 @@ Please refer to `PaddlePaddle Github <https://github.com/PaddlePaddle/Paddle>`_
Let's start with studying basic concept of PaddlePaddle:
- `Introduction to Tensor <tensor_introduction_en.html>`_ : Introduction of Tensor, which is the representation of data in Paddle.
- `broadcasting <./broadcasting_en.html>`_ : Introduction of broadcasting.
- `migration tools <./migration_en.html>`_: how to use migration tools to upgrade your code.
- `dynamic to static <./dygraph_to_static/index_en.html>`_: how to convert your model from dynamic graph to static graph.
......@@ -17,5 +18,6 @@ Let's start with studying basic concept of PaddlePaddle:
:hidden:
tensor_introduction_en.md
broadcasting_en.md
migration_en.rst
dygraph_to_static/index_en.rst
......@@ -15,7 +15,7 @@ The paddle1to2 tool can be installed via pip as follows:
.. code:: ipython3
! pip install -U paddle1to2
$ pip install -U paddle1to2
Basic usage
~~~~~~~~~~~
......@@ -24,13 +24,13 @@ Paddle1to2 can be used quickly as follows:
.. code:: ipython3
! paddle1to2 --inpath /path/to/model.py
$ paddle1to2 --inpath /path/to/model.py
This will show on the command line, in the form of a ``diff``, the changes needed to convert model.py from Paddle 1.x to Paddle 2.0beta. If you confirm that these changes are correct, simply run:
.. code:: ipython3
! paddle1to2 --inpath /path/to/model.py --write
$ paddle1to2 --inpath /path/to/model.py --write
This will rewrite model.py in place, applying the changes above to your source file.
Note: by default, the original file is backed up under ~/.paddle1to2/.
......@@ -75,7 +75,7 @@ Paddle1to2 can be used quickly as follows:
.. code:: ipython3
! git clone https://github.com/PaddlePaddle/models
$ git clone https://github.com/PaddlePaddle/models
.. parsed-literal::
......@@ -95,7 +95,7 @@ Paddle1to2 can be used quickly as follows:
.. code:: ipython3
! paddle1to2 -h
$ paddle1to2 -h
.. parsed-literal::
......@@ -132,7 +132,7 @@ An example with paddle1.x
.. code:: ipython3
! head -n 198 models/dygraph/mnist/train.py | tail -n 20
$ head -n 198 models/dygraph/mnist/train.py | tail -n 20
.. code:: ipython3
......@@ -166,14 +166,14 @@ paddle1to2 supports converting a single file; you can convert it directly with the command below
.. code:: ipython3
!paddle1to2 --inpath models/dygraph/mnist/train.py
$ paddle1to2 --inpath models/dygraph/mnist/train.py
Note that for removed arguments and some other special cases, WARNING messages are printed; please check them carefully.
If the output above looks correct, you can modify the file in place as follows:
.. code:: ipython3
!paddle1to2 --inpath models/dygraph/mnist/train.py --write
$ paddle1to2 --inpath models/dygraph/mnist/train.py --write
At this point, the command line shows the following prompt:
......@@ -189,7 +189,7 @@ paddle1to2 supports converting a single file; you can convert it directly with the command below
.. code:: ipython3
! cat report.log
$ cat report.log
Notes
~~~~~
......