PaddlePaddle provides broadcasting semantics in some APIs, like other deep learning frameworks, which allows operating on tensors with different shapes.
In general, broadcasting is the rule that describes how the smaller tensor is “broadcast” across the larger tensor so that they have the same shape.
...
@@ -98,4 +98,4 @@ For example:
z = paddle.elementwise_add(x, y, axis=1)
print(z.shape)
# z's shape is [2, 3, 4, 5].
# The shape comparison starts at axis=1 and proceeds from front to back.
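For reference, here is a minimal, framework-free sketch of the shape rule the example above relies on: starting at the given axis of the larger shape, each dimension of the smaller shape must either match the corresponding dimension or be 1 (in the example, x could have shape [2, 3, 4, 5] and y shape [3, 4]). The helper name `broadcast_output_shape` is purely illustrative and not a Paddle API.

```python
def broadcast_output_shape(x_shape, y_shape, axis):
    """Compute the result shape when y_shape is broadcast against x_shape starting at `axis`."""
    out = list(x_shape)
    for i, y_dim in enumerate(y_shape):
        x_dim = x_shape[axis + i]
        if y_dim != x_dim and y_dim != 1 and x_dim != 1:
            raise ValueError("incompatible shapes at axis %d" % (axis + i))
        out[axis + i] = max(x_dim, y_dim)
    return out

# Matches the example above: x of shape [2, 3, 4, 5], y of shape [3, 4], axis=1.
print(broadcast_output_shape([2, 3, 4, 5], [3, 4], axis=1))  # [2, 3, 4, 5]
```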
...
@@ -6,13 +6,13 @@ This paper introduces the basic concepts of Paddle:
- `Guide to Fluid Programming <./programming_guide/programming_guide_en.html>`_ : introduces the basic concepts and usage of Paddle.
- `LoD-Tensor User Guide <lod_tensor_en.html>`_ : LoD-Tensor is a high-level feature of Paddle. It adds sequence information on top of the tensor and supports processing variable-length data.
...
@@ -109,7 +108,7 @@ Please follow the steps below to install:
`mkdir -p /paddle/build && cd /paddle/build`
7. Use the following command to install the dependencies:
For Python2: pip install protobuf
...
@@ -119,7 +118,7 @@ Please follow the steps below to install:
> Install protobuf 3.1.0
`yum install patchelf`
> Installing patchelf. PatchELF is a small and useful program for modifying the dynamic linker and RPATH of ELF executables.
...
@@ -145,7 +144,7 @@ Please follow the steps below to install:
10. After compiling successfully, go to the `/paddle/build/python/dist` directory and find the generated `.whl` package: `cd /paddle/build/python/dist`
11. Install the compiled `.whl` package on the current machine or target machine:
For Python2: pip install -U (whl package name)
For Python3: pip3.5 install -U (whl package name)
...
@@ -154,7 +153,7 @@ Please follow the steps below to install:
Congratulations, you have now successfully installed PaddlePaddle using Docker; you only need to enter the Docker container to run PaddlePaddle. For more Docker usage, please refer to the [official Docker documentation](https://docs.docker.com/).
> Note: To reduce the image size, `vim` is not installed in the PaddlePaddle Docker image by default. You can edit code in the container after running `yum install -y vim` inside the container.
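Once the wheel is installed, a quick way to confirm that the build works is to import Paddle inside the container and run its built-in installation check. This is only a minimal sketch, assuming a Fluid-era build where `paddle.fluid.install_check.run_check()` is available (newer releases expose `paddle.utils.run_check()` instead):

```python
# Minimal post-install sanity check (assumes a Fluid-era PaddlePaddle build).
import paddle.fluid as fluid

fluid.install_check.run_check()  # prints a success message if the installation is usable
```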
<a name="ct_source"></a>
### **Local compilation**
...
@@ -215,7 +214,7 @@ Congratulations, now that you have successfully installed PaddlePaddle using Doc
* Here is the installation method for `patchELF`. Other dependencies can be installed using `yum install` or `pip install`/`pip3 install` followed by the name and version:
`yum install patchelf`
> Users who cannot install it via yum can refer to the patchelf GitHub [official documentation](https://gist.github.com/ruario/80fefd174b3395d34c14).
7. Place the cloned PaddlePaddle source in the `Paddle` folder under the current directory and enter the `Paddle` directory:
include_sublayers (bool, optional): Whether to include the sublayers. If True, the returned list includes the sublayers' weights. Default is True.
stride (tuple|int): The stride size. It can be a single integer or a tuple containing two integers, representing the strides of the convolution along the height and width. If it is a single integer, the height and width are equal to the integer. Default is 1.
groups (int, optional): The group number of the convolution layer. When groups=n, the input and the convolution kernels are divided equally into n groups; the first group of kernels is convolved with the first group of inputs, the second group of kernels with the second group of inputs, and so on until the nth group. Default is 1.
regularization (WeightDecayRegularizer, optional): The strategy of regularization. There are two methods: :ref:`api_fluid_regularizer_L1Decay` and :ref:`api_fluid_regularizer_L2Decay` . If a parameter has already set a regularizer using :ref:`api_fluid_ParamAttr` , the regularization setting here in the optimizer will be ignored for this parameter. Otherwise, the regularization setting here in the optimizer will take effect. Default is None, meaning there is no regularization.
grad_clip (GradientClipBase, optional): Gradient clipping strategy, an instance of a class derived from ``GradientClipBase`` . There are three clipping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default is None, meaning there is no gradient clipping.
dilation (tuple|int): The dilation size. It can be a single integer or a tuple containing two integers, representing the height and width of the dilation of the convolution kernel elements. If it is a single integer, the height and width of the dilation are equal to the integer. Default is 1.
stop_gradient (bool, optional): A boolean indicating whether gradient flow should be stopped. Default is True, which means gradients are not calculated.
force_cpu (bool, optional): Whether to force storing the output tensor in CPU memory. If force_cpu is False, the output tensor will be stored in the memory of the running device; otherwise, it will be stored in CPU memory. Default is False.
data_format (str, optional): Specify the input data format, the output data format will be consistent with the input, which can be "NCHW" or "NHWC". N is batch size, C is channels, H is height, and W is width. Default is "NCHW".
dim (int, optional): A dimension along which to operate. Default is 0.
is_sparse (bool, optional): Whether to use sparse updating. For more information, please refer to :ref:`api_guide_sparse_update_en` . If it is True, sparse updating will be used.
place (fluid.CPUPlace()|fluid.CUDAPlace(N)|None): This parameter represents which device the executor runs on, and N means the GPU's id. When this parameter is None, PaddlePaddle will set the default device according to its installation version. If Paddle is CPU version, the default device would be set to CPUPlace(). If Paddle is GPU version, the default device would be set to CUDAPlace(0). Default is None.
num_filters (int): The number of convolution kernels, which is also the number of output channels.
"""
"""
common_args_cn="""
x (Tensor) - The input `Tensor`. Supported data types: float32, float64, int32, int64.
y (Tensor) - The input `Tensor`. Supported data types: float32, float64, int32, int64.
name (str, optional) - The name of the operation (optional; default is None). For more information, please refer to :ref:`api_guide_Name`.