Commit de2bbb47, authored by chenlong

Merge branch 'develop' of https://github.com/PaddlePaddle/FluidDoc into develop

@@ -10,7 +10,7 @@ python gen_doc.py --module_name "" --module_prefix "" --output fluid --output_na
 python gen_module_index.py fluid fluid

 # tensor
-for module in math random stat linalg
+for module in math random stat linalg search
 do
   python gen_doc.py --module_name ${module} --module_prefix ${module} --output ${module} --output_name tensor --to_multiple_files True --output_dir tensor
   python gen_module_index.py tensor.${module} ${module}
...
@@ -4,7 +4,7 @@ import glob
 import os

 if __name__ == '__main__':
-    file_object = open('index_en.rst', 'w')
+    with open('index_en.rst', 'w') as file_object:
     file_object.write('''=============
 API Reference
@@ -25,16 +25,16 @@ API Reference
         else:
             pattern = target_dir + '/*.rst'
             file_names.extend(glob.glob(pattern))
     for file_name in sorted(file_names):
-        with open(file_name, 'r')as f:
+        with open(file_name, 'r') as f:
             for i in range(2):
                 line = f.readline().strip()
                 if line.find('paddle.') != -1:
-                    file_object.write(' '+file_name + "\n")
+                    file_object.write(' ' + file_name + "\n")
                     file_names.remove(file_name)
-    file_object.write(' '+'fluid.rst' + "\n")
+    file_object.write(' ' + 'fluid.rst' + "\n")
     for file_name in sorted(file_names):
-        if file_name not in ['index_en.rst', 'fluid.rst']:
-            file_object.write(' '+file_name + "\n")
+        if file_name not in ['index_en.rst']:
+            file_object.write(' ' + file_name + "\n")
@@ -6,15 +6,29 @@ API Reference
    :maxdepth: 1

    ../api_guides/index_en.rst
-   paddle.rst
    dataset.rst
-   tensor.rst
-   nn.rst
-   imperative.rst
    declarative.rst
-   optimizer.rst
-   metric.rst
    framework.rst
+   imperative.rst
    io.rst
-   utils.rst
-   incubate.rst
+   metric.rst
+   nn.rst
+   optimizer.rst
+   tensor.rst
+   fluid.rst
+   backward.rst
+   clip.rst
+   data/data_reader.rst
+   data/dataset.rst
+   dygraph.rst
+   executor.rst
+   initializer.rst
+   layers.rst
+   metrics.rst
+   nets.rst
+   paddle.rst
+   profiler.rst
+   regularizer.rst
+   transpiler.rst
+   unique_name.rst
...
+.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
+    !DO NOT EDIT THIS FILE MANUALLY!
 .. _api_nn_softmax:

 softmax
--------------------------------
-:doc_source: paddle.fluid.layers.softmax
+-------
+
+.. autofunction:: paddle.nn.functional.softmax
+    :noindex:
@@ -2,6 +2,6 @@
 cumsum
 -------------------------------

-:doc_source: paddle.fluid.layers.cumsum
+:doc_source: paddle.tensor.cumsum
@@ -93,6 +93,7 @@ paddle.tensor
    tensor/scatter.rst
    tensor/scatter_nd.rst
    tensor/scatter_nd_add.rst
+   tensor/search.rst
    tensor/shape.rst
    tensor/shard_index.rst
    tensor/shuffle.rst
...
@@ -2,6 +2,6 @@
 cumsum
 -------------------------------

-:doc_source: paddle.fluid.layers.cumsum
+:doc_source: paddle.tensor.cumsum
@@ -46,7 +46,7 @@ Conv2D
 参数:
     - **num_channels** (int) - 输入图像的通道数。
-    - **num_fliters** (int) - 滤波器的个数,和输出特征图个数相同。
+    - **num_filters** (int) - 滤波器的个数,和输出特征图个数相同。
     - **filter_size** (int|tuple) - 滤波器大小。如果 ``filter_size`` 是一个元组,则必须包含两个整型数,分别表示滤波器高度和宽度。否则,表示滤波器高度和宽度均为 ``filter_size`` 。
     - **stride** (int|tuple, 可选) - 步长大小。如果 ``stride`` 为元组,则必须包含两个整型数,分别表示垂直和水平滑动步长。否则,表示垂直和水平滑动步长均为 ``stride`` 。默认值:1。
     - **padding** (int|tuple, 可选) - 填充大小。如果 ``padding`` 为元组,则必须包含两个整型数,分别表示竖直和水平边界填充大小。否则,表示竖直和水平边界填充大小均为 ``padding`` 。默认值:0。
...
@@ -39,7 +39,7 @@ Executor支持单GPU、多GPU以及CPU运行。
     train_program = fluid.Program()
     startup_program = fluid.Program()
     with fluid.program_guard(train_program, startup_program):
-        data = fluid.layers.data(name='X', shape=[1], dtype='float32')
+        data = fluid.data(name='X', shape=[None, 1], dtype='float32')
         hidden = fluid.layers.fc(input=data, size=10)
         loss = fluid.layers.mean(hidden)
         fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)
@@ -130,7 +130,7 @@ Executor支持单GPU、多GPU以及CPU运行。
     place = fluid.CPUPlace() # fluid.CUDAPlace(0)
     exe = fluid.Executor(place)
-    data = fluid.layers.data(name='X', shape=[1], dtype='float32')
+    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
     hidden = fluid.layers.fc(input=data, size=10)
     loss = fluid.layers.mean(hidden)
     adam = fluid.optimizer.Adam()
@@ -175,8 +175,8 @@ train_from_dataset可以非常容易扩展到大规模分布式在线和离线
     place = fluid.CPUPlace() # 通过设置place = fluid.CUDAPlace(0)使用GPU
     exe = fluid.Executor(place)
-    x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
-    y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)
+    x = fluid.data(name="x", shape=[None, 10, 10], dtype="int64")
+    y = fluid.data(name="y", shape=[None, 1], dtype="int64", lod_level=1)
     dataset = fluid.DatasetFactory().create_dataset()
     dataset.set_use_var([x, y])
     dataset.set_thread(1)
@@ -210,12 +210,13 @@ train_from_dataset可以非常容易扩展到大规模分布式在线和离线
     import paddle.fluid as fluid
     place = fluid.CPUPlace() # 使用GPU时可设置place = fluid.CUDAPlace(0)
     exe = fluid.Executor(place)
-    x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
-    y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)
+    x = fluid.data(name="x", shape=[None, 10, 10], dtype="int64")
+    y = fluid.data(name="y", shape=[None, 1], dtype="int64", lod_level=1)
     dataset = fluid.DatasetFactory().create_dataset()
     dataset.set_use_var([x, y])
     dataset.set_thread(1)
     filelist = [] # 您可以设置您自己的filelist,如filelist = ["dataA.txt"]
     dataset.set_filelist(filelist)
     exe.run(fluid.default_startup_program())
-    exe.infer_from_dataset(program=fluid.default_main_program(),dataset=dataset)
+    exe.infer_from_dataset(program=fluid.default_main_program(),
+                           dataset=dataset)
@@ -107,3 +107,21 @@ Note。
    io_cn.rst
    utils_cn.rst
    incubate_cn.rst
+   fluid_cn.rst
+   backward_cn.rst
+   clip_cn.rst
+   data_cn/data_reader_cn.rst
+   data_cn/dataset_cn.rst
+   dataset_cn.rst
+   dygraph_cn.rst
+   executor_cn.rst
+   initializer_cn.rst
+   io_cn.rst
+   layers_cn.rst
+   metrics_cn.rst
+   nets_cn.rst
+   optimizer_cn.rst
+   profiler_cn.rst
+   regularizer_cn.rst
+   transpiler_cn.rst
+   unique_name_cn.rst
@@ -11,7 +11,7 @@ MSRAInitializer
 该接口实现MSRA方式的权重初始化(a.k.a. Kaiming初始化)

 该接口为权重初始化函数,方法来自Kaiming He,Xiangyu Zhang,Shaoqing Ren 和 Jian Sun所写的论文: `Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification <https://arxiv.org/abs/1502.01852>`_ 。这是一个鲁棒性特别强的初始化方法,并且适应了非线性激活函数(rectifier nonlinearities)。
-可以选择使用均匀分布或者正分布初始化权重;
+可以选择使用均匀分布或者正态分布初始化权重;
 在均匀分布中,范围为[-x,x],其中:

 .. math::
...
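The MSRA hunk above documents Kaiming initialization with a uniform range of [-x, x] where x depends on the fan-in. A rough NumPy sketch of that semantics follows; the function name and the convention of taking fan-in from the first dimension are illustrative, not Paddle's API:

```python
import numpy as np

def msra_uniform(shape, seed=0):
    # MSRA / Kaiming uniform: W ~ U[-x, x] with x = sqrt(6 / fan_in).
    # Treating shape[0] as fan_in is an illustrative simplification.
    fan_in = shape[0]
    x = np.sqrt(6.0 / fan_in)
    rng = np.random.default_rng(seed)
    return rng.uniform(-x, x, size=shape)

w = msra_uniform((100, 50))
print(w.shape, float(np.abs(w).max()) <= np.sqrt(6.0 / 100))
```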
@@ -3,24 +3,24 @@
 concat
 -------------------------------

-.. py:function:: paddle.fluid.layers.concat(input,axis=0,name=None)
+.. py:function:: paddle.fluid.layers.concat(input, axis=0, name=None)

-:alias_main: paddle.concat
-:alias: paddle.concat,paddle.tensor.concat,paddle.tensor.manipulation.concat
-:old_api: paddle.fluid.layers.concat

-该OP对输入沿 ``axis`` 轴进行联结。
+该OP对输入沿 ``axis`` 轴进行联结,返回一个新的Tensor。

 参数:
-    - **input** (list) - 输入是待联结的多维 ``Tensor`` 组成的 ``list`` ,支持的数据类型为:float32、float64、int32、int64。
+    - **input** (list|tuple|Tensor) - 待联结的Tensor list,Tensor tuple或者Tensor,支持的数据类型为:bool、float16、float32、float64、int32、int64。 ``input`` 中所有Tensor的数据类型必须一致。
-    - **axis** (int|Variable,可选) - 整数或者形状为[1]的 ``Tensor``,数据类型为 ``int32``。指定对输入Tensor进行运算的轴, ``axis`` 的有效范围是[-R, R),R是输入 ``input`` 中 ``Tensor`` 的维度, ``axis`` 为负值时与 :math:`axis + R` 等价。默认值为0。
+    - **axis** (int|Tensor,可选) - 指定对输入Tensor进行运算的轴,可以是整数或者形状为[1]的Tensor,数据类型为int32或者int64。 ``axis`` 的有效范围是[-R, R),R是输入 ``input`` 中Tensor的维度, ``axis`` 为负值时与 :math:`axis + R` 等价。默认值为0。
     - **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。

-返回:联结后的 ``Tensor`` ,数据类型和 ``input`` 相同。
+返回:联结后的 ``Tensor`` ,数据类型和 ``input`` 中的Tensor相同。

-返回类型:Variable
+抛出异常:
+    - ``TypeError``: - 当输入 ``input`` 的类型不是list、tuple或者Tensor的时候。
+    - ``TypeError``: - 当输入 ``input`` 的数据类型不是 bool,float16, float32, float64, int32, int64时。
+    - ``TypeError``: - 当 ``axis`` 的类型不是int或者Tensor时。当 ``axis`` 是Tensor的时候其数据类型不是int32或者int64时。
+    - ``TypeError``: - 当输入 ``input`` 中的Tensor存在数据类型不一致时。

 **代码示例**:
@@ -29,18 +29,18 @@ concat
     import paddle.fluid as fluid
     import numpy as np

-    in1 = np.array([[1,2,3],
-                    [4,5,6]])
-    in2 = np.array([[11,12,13],
-                    [14,15,16]])
-    in3 = np.array([[21,22],
-                    [23,24]])
+    in1 = np.array([[1, 2, 3],
+                    [4, 5, 6]])
+    in2 = np.array([[11, 12, 13],
+                    [14, 15, 16]])
+    in3 = np.array([[21, 22],
+                    [23, 24]])
     with fluid.dygraph.guard():
         x1 = fluid.dygraph.to_variable(in1)
         x2 = fluid.dygraph.to_variable(in2)
         x3 = fluid.dygraph.to_variable(in3)
-        out1 = fluid.layers.concat(input=[x1,x2,x3], axis=-1)
-        out2 = fluid.layers.concat(input=[x1,x2], axis=0)
+        out1 = fluid.layers.concat(input=[x1, x2, x3], axis=-1)
+        out2 = fluid.layers.concat(input=[x1, x2], axis=0)
         print(out1.numpy())
         # [[ 1  2  3 11 12 13 21 22]
         #  [ 4  5  6 14 15 16 23 24]]
...
@@ -5,11 +5,6 @@ cumsum
 .. py:function:: paddle.fluid.layers.cumsum(x,axis=None,exclusive=None,reverse=None)

-:alias_main: paddle.cumsum
-:alias: paddle.cumsum,paddle.tensor.cumsum,paddle.tensor.math.cumsum
-:old_api: paddle.fluid.layers.cumsum

 沿给定轴(axis)的元素的累加和。默认结果的第一个元素和输入的第一个元素一致。如果exclusive为True,结果的第一个元素则为0。
...
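The cumsum doc above describes inclusive, exclusive, and reverse variants of the running sum. A small NumPy sketch of those semantics (the function and its keyword names here are illustrative, not the Paddle implementation):

```python
import numpy as np

def cumsum(x, axis=-1, exclusive=False, reverse=False):
    # Inclusive cumulative sum by default; exclusive shifts the result so
    # the first element is 0; reverse accumulates from the other end.
    if reverse:
        x = np.flip(x, axis=axis)
    out = np.cumsum(x, axis=axis)
    if exclusive:
        out = out - x  # subtracting x turns the inclusive sum into the exclusive one
    if reverse:
        out = np.flip(out, axis=axis)
    return out

print(cumsum(np.array([1, 2, 3, 4])))                  # [ 1  3  6 10]
print(cumsum(np.array([1, 2, 3, 4]), exclusive=True))  # [0 1 3 6]
```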
@@ -3,25 +3,20 @@
 eye
 -------------------------------

-.. py:function:: paddle.fluid.layers.eye(num_rows, num_columns=None, batch_shape=None, dtype='float32')
+.. py:function:: paddle.fluid.layers.eye(num_rows, num_columns=None, batch_shape=None, dtype='float32', name=None)

-:alias_main: paddle.eye
-:alias: paddle.eye,paddle.tensor.eye,paddle.tensor.creation.eye
-:update_api: paddle.fluid.layers.eye

-该OP用来构建二维张量,或一个批次的二维张量。
+该OP用来构建二维Tensor,或一个批次的二维Tensor。

 参数:
-    - **num_rows** (int) - 该批次二维张量的行数,数据类型为非负int32。
+    - **num_rows** (int) - 该批次二维Tensor的行数,数据类型为非负int32。
-    - **num_columns** (int, 可选) - 该批次二维张量的列数,数据类型为非负int32。若为None,则默认等于num_rows。
+    - **num_columns** (int, 可选) - 该批次二维Tensor的列数,数据类型为非负int32。若为None,则默认等于num_rows。
-    - **batch_shape** (list(int), 可选) - 如若提供,则返回向量的主批次维度将为batch_shape。
+    - **batch_shape** (list(int), 可选) - 如若提供,则返回Tensor的主批次维度将为batch_shape。
-    - **dtype** (np.dtype|core.VarDesc.VarType|str,可选) - 返回张量的数据类型,可为int32,int64,float16,float32,float64,默认数据类型为float32。
+    - **dtype** (np.dtype|core.VarDesc.VarType|str,可选) - 返回Tensor的数据类型,可为int32,int64,float16,float32,float64,默认数据类型为float32。
+    - **name** (str) – 该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` ,默认值为None。

-返回:shape为batch_shape + [num_rows, num_columns]的张量
+返回: ``shape`` 为batch_shape + [num_rows, num_columns]的Tensor

-返回类型:Variable(Tensor|LoDTensor)数据类型为int32,int64,float16,float32,float64的Tensor或者LoDTensor。

 抛出异常:
     - ``TypeError``: - 如果 ``dtype`` 的类型不是float16, float32, float64, int32, int64其中之一。
...
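The eye doc above describes an identity matrix that, given `batch_shape`, is replicated into shape `batch_shape + [num_rows, num_columns]`. A NumPy sketch of that shape contract (names are illustrative; this is not Paddle's implementation):

```python
import numpy as np

def batched_eye(num_rows, num_columns=None, batch_shape=None, dtype='float32'):
    # Identity matrix, optionally tiled out along leading batch dimensions.
    if num_columns is None:
        num_columns = num_rows
    out = np.eye(num_rows, num_columns, dtype=dtype)
    if batch_shape:
        out = np.broadcast_to(out, tuple(batch_shape) + out.shape).copy()
    return out

print(batched_eye(3, batch_shape=[2]).shape)  # (2, 3, 3)
```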
@@ -44,6 +44,6 @@ fill_constant
     positive_2 = fluid.layers.fill_constant([1], "int32", 2)
     data3 = fluid.layers.fill_constant(shape=[1, positive_2], dtype='float32', value=1.5) # data3=[1.5, 1.5]

-    # attr shape is an Variable Tensor.
+    # attr shape is a Variable Tensor.
     shape = fluid.layers.fill_constant([1,2], "int32", 2)     # shape=[2,2]
     data4 = fluid.layers.fill_constant(shape=shape, dtype='bool', value=True) # data4=[[True,True],[True,True]]
@@ -3,22 +3,27 @@
 linspace
 -------------------------------

-.. py:function:: paddle.fluid.layers.linspace(start, stop, num, dtype)
+.. py:function:: paddle.fluid.layers.linspace(start, stop, num, dtype=None, name=None)

-该OP在给定区间内返回固定数目的均匀间隔的值。
+该OP返回一个Tensor,Tensor的值为在区间start和stop上均匀间隔的num个值,输出Tensor的长度为num。

-**注意:该OP不进行梯度计算**

 参数:
-    - **start** (float|Variable) – start是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
+    - **start** (float|Tensor) – ``start`` 是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
-    - **stop** (float|Variable) – end是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
+    - **stop** (float|Tensor) – ``end`` 是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
-    - **num** (int|Variable) – num是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
+    - **num** (int|Tensor) – ``num`` 是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
-    - **dtype** (string) – 输出Tensor的数据类型,可以是‘float32’或者是‘float64’。
+    - **dtype** (string, 可选) – 输出Tensor的数据类型,可以是float32或者是float64,如果dtype的数据类型为None,输出Tensor数据类型为float32。
+    - **name** (str, 可选) - 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。

 返回:表示等间隔划分结果的1-D Tensor,该Tensor的shape大小为 :math:`[num]` ,在num为1的情况下,仅返回包含start元素值的Tensor。

-返回类型:Variable
+抛出异常:
+    - ``TypeError`` - 当start或者stop的数据类型不是float32或者float64。
+    - ``TypeError`` - 当num的数据类型不是float32或者float64。
+    - ``TypeError`` - 当dtype的类型不是float32或者float64。

 **代码示例**:
...
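The linspace doc above specifies `num` evenly spaced values over [start, stop], with the `num == 1` case returning just `[start]`. A NumPy sketch of that contract (illustrative, not the Paddle kernel):

```python
import numpy as np

def linspace(start, stop, num, dtype='float32'):
    # num evenly spaced points including both endpoints; num == 1 -> [start].
    if num == 1:
        return np.array([start], dtype=dtype)
    step = (stop - start) / (num - 1)
    return (start + step * np.arange(num)).astype(dtype)

print(linspace(0.0, 10.0, 5))  # [ 0.   2.5  5.   7.5 10. ]
```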
@@ -5,21 +5,18 @@ ones
 .. py:function:: paddle.fluid.layers.ones(shape,dtype,force_cpu=False)

-**ones**
-
-该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为1的Tensor,该OP会将stop_gradient设置为True,即停止梯度更新。
+该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为1的Tensor。

 参数:
-    - **shape** (tuple|list) - 输出Tensor的形状。
+    - **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64。
-    - **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为float16、float32、float64、int32或int64。
+    - **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为bool、float16、float32、float64、int32或int64。
-    - **force_cpu** (bool) – 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。
+    - **force_cpu** (bool, 可选) – 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。

 返回:值全为1的Tensor,数据类型和 ``dtype`` 定义的类型一致。

-返回类型:Variable
+抛出异常:
+    - ``TypeError`` - 当 ``dtype`` 不是bool、float16、float32、float64、int32、int64和None时。
+    - ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时,当 ``shape`` 为Tensor,其数据类型不是int32或者int64时。

 **代码示例**:
...
@@ -5,12 +5,6 @@ softmax
 .. py:function:: paddle.fluid.layers.softmax(input, use_cudnn=False, name=None, axis=-1)

-:alias_main: paddle.nn.functional.softmax
-:alias: paddle.nn.functional.softmax,paddle.nn.functional.activation.softmax
-:old_api: paddle.fluid.layers.softmax

 该OP实现了softmax层。OP的计算过程如下:

 步骤1:输入 ``input`` 的 ``axis`` 维会被置换到最后一维;
...
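The softmax doc above describes a transpose/reshape pipeline; with broadcasting the same result can be computed directly. A NumPy sketch of the semantics (with the usual max-subtraction for numerical stability; this is not the Paddle kernel):

```python
import numpy as np

def softmax(x, axis=-1):
    # Rescale each slice along `axis` so entries lie in [0, 1] and sum to 1.
    x = x - np.max(x, axis=axis, keepdims=True)  # stability shift, result unchanged
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

out = softmax(np.array([[2.0, 3.0, 4.0, 5.0]]))
print(out)  # first entry ~0.0320586, row sums to 1
```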
@@ -3,7 +3,7 @@
 split
 -------------------------------

-.. py:function:: paddle.fluid.layers.split(input,num_or_sections,dim=-1,name=None)
+.. py:function:: paddle.fluid.layers.split(input, num_or_sections, dim=-1, name=None)
@@ -11,18 +11,18 @@ split
 该OP将输入Tensor分割成多个子Tensor。

 参数:
-    - **input** (Variable) - 输入变量,数据类型为float32,float64,int32,int64的多维Tensor或者LoDTensor。
+    - **input** (Tensor) - 输入变量,数据类型为bool,float16,float32,float64,int32,int64的多维Tensor。
     - **num_or_sections** (int|list|tuple) - 如果 ``num_or_sections`` 是一个整数,则表示Tensor平均划分为相同大小子Tensor的数量。如果 ``num_or_sections`` 是一个list或tuple,那么它的长度代表子Tensor的数量,它的元素可以是整数或者形状为[1]的Tensor,依次代表子Tensor需要分割成的维度的大小。list或tuple的长度不能超过输入Tensor待分割的维度的大小。至多有一个元素值为-1,-1表示该值是由 ``input`` 待分割的维度值和 ``num_or_sections`` 的剩余元素推断出来的。
-    - **dim** (int|Variable,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果dim < 0,则划分的维度为rank(input) + dim。默认值为-1。
+    - **dim** (int|Tensor,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果 ``dim < 0`` ,则划分的维度为 ``rank(input) + dim`` 。默认值为-1。
     - **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。

 返回:分割后的Tensor列表。

-返回类型:列表(Variable(Tensor|LoDTensor)),数据类型为int32,int64,float32,float64。

 抛出异常:
-    - :code:`TypeError`:``num_or_sections`` 不是int、list 或 tuple。
-    - :code:`TypeError`:``dim`` 不是 int 或 Variable。
+    - :code:`TypeError`:``input`` 的数据类型不是bool、float16、float32、float64、int32或int64时。
+    - :code:`TypeError`:``num_or_sections`` 不是int、list 或 tuple时。
+    - :code:`TypeError`:``dim`` 不是 int 或 Tensor时。当 ``dim`` 为Tensor,其数据类型不是int32或int64时。

 **代码示例**:
@@ -30,27 +30,31 @@ split
     import paddle.fluid as fluid

-    # 输入是维度为[3, 9, 5]的Tensor:
+    # input is a Tensor which shape is [3, 9, 5]
     input = fluid.data(
         name="input", shape=[3, 9, 5], dtype="float32")

-    # 传入num_or_sections为一个整数
-    x0, x1, x2 = fluid.layers.split(input, num_or_sections=3, dim=1)
-    x0.shape  # [3, 3, 5]
-    x1.shape  # [3, 3, 5]
-    x2.shape  # [3, 3, 5]
+    out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=1)
+    # out0.shape [3, 3, 5]
+    # out1.shape [3, 3, 5]
+    # out2.shape [3, 3, 5]

-    # 传入num_or_sections为一个整数列表
-    x0, x1, x2 = fluid.layers.split(input, num_or_sections=[2, 3, 4], dim=1)
-    x0.shape  # [3, 2, 5]
-    x1.shape  # [3, 3, 5]
-    x2.shape  # [3, 4, 5]
+    out0, out1, out2 = fluid.layers.split(input, num_or_sections=[2, 3, 4], dim=1)
+    # out0.shape [3, 2, 5]
+    # out1.shape [3, 3, 5]
+    # out2.shape [3, 4, 5]

-    # 传入num_or_sections为一个整数列表,其中有一个元素为-1
-    x0, x1, x2 = fluid.layers.split(input, num_or_sections=[2, 3, -1], dim=1)
-    x0.shape  # [3, 2, 5]
-    x1.shape  # [3, 3, 5]
-    x2.shape  # [3, 4, 5]
+    out0, out1, out2 = fluid.layers.split(input, num_or_sections=[2, 3, -1], dim=1)
+    # out0.shape [3, 2, 5]
+    # out1.shape [3, 3, 5]
+    # out2.shape [3, 4, 5]
+
+    # dim is negative, the real dim is (rank(input) + axis) which real
+    # value is 1.
+    out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=-2)
+    # out0.shape [3, 3, 5]
+    # out1.shape [3, 3, 5]
+    # out2.shape [3, 3, 5]
...
@@ -5,21 +5,18 @@ zeros
 .. py:function:: paddle.fluid.layers.zeros(shape,dtype,force_cpu=False)

-**zeros**
-
-该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为0的Tensor,该OP会将stop_gradient设置为True,即停止梯度更新。
+该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为0的Tensor。

 参数:
-    - **shape** (tuple|list) - 输出Tensor的形状。
+    - **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64。
-    - **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为float16、float32、float64、int32或int64。
+    - **dtype** (np.dtype|core.VarDesc.VarType|str) - 输出Tensor的数据类型,数据类型必须为bool、float16、float32、float64、int32或int64。
-    - **force_cpu** (bool) - 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。
+    - **force_cpu** (bool, 可选) - 是否强制将输出Tensor写入CPU内存。如果 ``force_cpu`` 为False,则将输出Tensor写入当前所在运算设备的内存,默认为False。

 返回:值全为0的Tensor,数据类型和 ``dtype`` 定义的类型一致。

-返回类型:Variable
+抛出异常:
+    - ``TypeError`` - 当 ``dtype`` 不是bool、float16、float32、float64、int32、int64时。
+    - ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时。当 ``shape`` 为Tensor,其数据类型不是int32或者int64时。

 **代码示例**:
...
@@ -8,4 +8,5 @@ activation
 .. toctree::
    :maxdepth: 1

+   activation_cn/LeakyReLU_cn.rst
    activation_cn/Sigmoid_cn.rst
+.. _cn_api_nn_LeakyReLU:
+
+LeakyReLU
+-------------------------------
+.. py:class:: paddle.nn.LeakyReLU(alpha=0.01, name=None)
+
+LeakyReLU激活层
+
+.. math::
+
+    Out = max(x, alpha * x)
+
+其中,:math:`x` 为输入的 Tensor
+
+参数
+::::::::::
+    - alpha (float,可选) - :math:`x < 0` 时的斜率。默认值为0.01。
+    - name (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
+
+形状:
+    - input: 任意形状的Tensor。
+    - output: 和input具有相同形状的Tensor。
+
+代码示例
+:::::::::
+
+.. code-block:: python
+
+    import paddle
+    import numpy as np
+
+    paddle.enable_imperative()
+
+    lrelu = paddle.nn.LeakyReLU()
+    x = paddle.imperative.to_variable(np.array([-2, 0, 1], 'float32'))
+    out = lrelu(x)  # [-0.02, 0, 1]
 MSELoss
 -------------------------------

-.. py:function:: paddle.nn.loss.MSELoss(input,label)
+.. py:function:: paddle.nn.loss.MSELoss(reduction='mean')

 该OP用于计算预测值和目标值的均方差误差。
@@ -23,13 +23,15 @@ MSELoss
     Out = \operatorname{sum}((input - label)^2)

 参数:
-    - **input** (Variable) - 预测值,维度为 :math:`[N_1, N_2, ..., N_k, D]` 的多维Tensor,其中最后一维D是类别数目。数据类型为float32或float64。
-    - **label** (Variable) - 目标值,维度为 :math:`[N_1, N_2, ..., N_k, D]` 的多维Tensor,其中最后一维D是类别数目。数据类型为float32或float64。
     - **reduction** (str, 可选) - 约简方式,可以是 'none' | 'mean' | 'sum'。设为'none'时不使用约简,设为'mean'时返回loss的均值,设为'sum'时返回loss的和。

-返回:预测值和目标值的均方差
+形状:
+    - **input** (Tensor) - 预测值,维度为 :math:`[N_1, N_2, ..., N_k]` 的多维Tensor。数据类型为float32或float64。
+    - **label** (Tensor) - 目标值,维度为 :math:`[N_1, N_2, ..., N_k]` 的多维Tensor。数据类型为float32或float64。

-返回类型:变量(Variable)
+返回:变量(Tensor),预测值和目标值的均方差,数值类型与输入相同

 **代码示例**:
@@ -37,32 +39,32 @@ MSELoss
     import numpy as np
     import paddle
-    from paddle import fluid
-    import paddle.fluid.dygraph as dg

+    # static graph mode
+    paddle.enable_static()
     mse_loss = paddle.nn.loss.MSELoss()
-    input = fluid.data(name="input", shape=[1])
-    label = fluid.data(name="label", shape=[1])
-    place = fluid.CPUPlace()
+    input = paddle.data(name="input", shape=[1])
+    label = paddle.data(name="label", shape=[1])
+    place = paddle.CPUPlace()

     input_data = np.array([1.5]).astype("float32")
     label_data = np.array([1.7]).astype("float32")

-    # declarative mode
     output = mse_loss(input,label)
-    exe = fluid.Executor(place)
-    exe.run(fluid.default_startup_program())
+    exe = paddle.static.Executor(place)
+    exe.run(paddle.static.default_startup_program())
     output_data = exe.run(
-        fluid.default_main_program(),
+        paddle.static.default_main_program(),
         feed={"input":input_data, "label":label_data},
         fetch_list=[output],
         return_numpy=True)
     print(output_data)
     # [array([0.04000002], dtype=float32)]

-    # imperative mode
-    with dg.guard(place) as g:
-        input = dg.to_variable(input_data)
-        label = dg.to_variable(label_data)
+    # dynamic graph mode
+    paddle.disable_static()
+    input = paddle.to_variable(input_data)
+    label = paddle.to_variable(label_data)
     output = mse_loss(input, label)
     print(output.numpy())
     # [0.04000002]
@@ -2,6 +2,118 @@
 softmax
 -------------------------------

-:doc_source: paddle.fluid.layers.softmax
+.. py:function:: paddle.nn.functional.softmax(x, axis=-1, name=None)
+
+该OP实现了softmax层。OP的计算过程如下:
+
+步骤1:输入 ``x`` 的 ``axis`` 维会被置换到最后一维;
+
+步骤2:将输入 ``x`` 在逻辑上变换为二维矩阵。二维矩阵第一维(列长度)是输入除最后一维之外的其他维度值的乘积,第二维(行长度)和输入 ``axis`` 维的长度相同;对于矩阵的每一行,softmax操作对其进行重新缩放,使得该行的每个元素在 [0,1] 范围内,并且总和为1;
+
+步骤3:softmax操作执行完成后,执行步骤1和步骤2的逆运算,将二维矩阵恢复至和输入 ``x`` 相同的维度。
+
+上述步骤2中softmax操作计算过程如下:
+
+    - 对于二维矩阵的每一行,计算K维向量(K是输入第 ``axis`` 维的长度)中指定位置的指数值和全部位置指数值的和。
+    - 指定位置指数值与全部位置指数值之和的比值就是softmax操作的输出。
+
+对于二维矩阵中的第i行和第j列有:
+
+.. math::
+
+    Out[i, j] = \frac{exp(X[i, j])}{\sum_j exp(X[i, j])}
+
+- 示例1(矩阵一共有三维。axis = -1,表示沿着最后一维(即第三维)做softmax操作)
+
+.. code-block:: python
+
+    输入
+    x.shape = [2, 3, 4]
+    x.data = [[[2.0, 3.0, 4.0, 5.0],
+               [3.0, 4.0, 5.0, 6.0],
+               [7.0, 8.0, 8.0, 9.0]],
+              [[1.0, 2.0, 3.0, 4.0],
+               [5.0, 6.0, 7.0, 8.0],
+               [6.0, 7.0, 8.0, 9.0]]]
+    axis = -1
+
+    输出
+    out.shape = [2, 3, 4]
+    out.data = [[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+                 [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+                 [0.07232949, 0.19661193, 0.19661193, 0.53444665]],
+                [[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+                 [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+                 [0.0320586 , 0.08714432, 0.23688282, 0.64391426]]]
+
+- 示例2(矩阵一共有三维。axis = 1,表示沿着第二维做softmax操作)
+
+.. code-block:: python
+
+    输入
+    x.shape = [2, 3, 4]
+    x.data = [[[2.0, 3.0, 4.0, 5.0],
+               [3.0, 4.0, 5.0, 6.0],
+               [7.0, 8.0, 8.0, 9.0]],
+              [[1.0, 2.0, 3.0, 4.0],
+               [5.0, 6.0, 7.0, 8.0],
+               [6.0, 7.0, 8.0, 9.0]]]
+    axis = 1
+
+    输出
+    out.shape = [2, 3, 4]
+    out.data = [[[0.00657326, 0.00657326, 0.01714783, 0.01714783],
+                 [0.01786798, 0.01786798, 0.04661262, 0.04661262],
+                 [0.97555875, 0.97555875, 0.93623955, 0.93623955]],
+                [[0.00490169, 0.00490169, 0.00490169, 0.00490169],
+                 [0.26762315, 0.26762315, 0.26762315, 0.26762315],
+                 [0.72747516, 0.72747516, 0.72747516, 0.72747516]]]
+
+参数
+::::::::::
+    - x (Tensor) - 输入的多维 ``Tensor`` ,数据类型为:float32、float64。
+    - axis (int, 可选) - 指定对输入 ``x`` 进行运算的轴。``axis`` 的有效范围是[-D, D),D是输入 ``x`` 的维度, ``axis`` 为负值时与 :math:`axis + D` 等价。默认值为-1。
+    - name (str, 可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
+
+返回
+::::::::::
+    ``Tensor`` ,数据类型和形状同 ``x`` 一致。
+
+代码示例
+::::::::::
+
+.. code-block:: python
+
+    import paddle
+    import paddle.nn.functional as F
+    import numpy as np
+
+    paddle.enable_imperative()
+
+    x = np.array([[[2.0, 3.0, 4.0, 5.0],
+                   [3.0, 4.0, 5.0, 6.0],
+                   [7.0, 8.0, 8.0, 9.0]],
+                  [[1.0, 2.0, 3.0, 4.0],
+                   [5.0, 6.0, 7.0, 8.0],
+                   [6.0, 7.0, 8.0, 9.0]]], 'float32')
+    x = paddle.imperative.to_variable(x)
+    out = F.softmax(x)
+    # [[[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+    #   [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+    #   [0.07232949, 0.19661193, 0.19661193, 0.53444665]],
+    #  [[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+    #   [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
+    #   [0.0320586 , 0.08714432, 0.23688282, 0.64391426]]]
@@ -2,6 +2,6 @@
 cumsum
 -------------------------------

-:doc_source: paddle.fluid.layers.cumsum
+:doc_source: paddle.tensor.cumsum
@@ -8,7 +8,7 @@ argsort
 :alias_main: paddle.argsort
 :alias: paddle.argsort,paddle.tensor.argsort,paddle.tensor.search.argsort

-对输入变量沿给定轴进行排序,输出排序好的数据的相应索引,其维度和输入相同。**默认升序排列,如果需要降序排列设置** ``descending=True`` 。
+对输入变量沿给定轴进行排序,输出排序好的数据的相应索引,其维度和输入相同。默认升序排列,如果需要降序排列设置 ``descending=True`` 。

 参数:
@@ -17,9 +17,8 @@ argsort
     - **descending** (bool,可选) - 指定算法排序的方向。如果设置为True,算法按照降序排序。如果设置为False或者不设置,按照升序排序。默认值为False。
     - **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。

-返回:排序后索引信息(与 ``x`` 维度信息一致),数据类型为int64。
-
-返回类型:Tensor
+返回:Tensor,排序后索引信息(与 ``x`` 维度信息一致),数据类型为int64。

 **代码示例**:
...
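The argsort doc above promises the indices of an ascending sort along a chosen axis, with the same shape as the input and an option to sort descending. NumPy's `argsort` gives the equivalent semantics for comparison (this is not the Paddle implementation; `kind='stable'` is used so tied values keep their input order):

```python
import numpy as np

x = np.array([[5, 8, 9, 5],
              [0, 0, 1, 7]])

# Ascending indices along the last axis; same shape as x.
idx = np.argsort(x, axis=-1, kind='stable')
print(idx)       # [[0 3 1 2] [0 1 2 3]]

# Descending order can be emulated by sorting the negated values.
idx_desc = np.argsort(-x, axis=-1, kind='stable')
print(idx_desc)  # [[2 1 0 3] [3 2 0 1]]
```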
+.. _cn_api_tensor_concat:
+
 concat
 -------------------------------

-**版本升级,文档正在开发中**
+.. py:function:: paddle.tensor.concat(x, axis=0, name=None)
+
+该OP对输入沿 ``axis`` 轴进行联结,返回一个新的Tensor。
+
+参数:
+    - **x** (list|tuple) - 待联结的Tensor list或者Tensor tuple,支持的数据类型为:bool、float16、float32、float64、int32、int64, ``x`` 中所有Tensor的数据类型应该一致。
+    - **axis** (int|Tensor,可选) - 指定对输入 ``x`` 进行运算的轴,可以是整数或者形状为[1]的Tensor,数据类型为int32或者int64。 ``axis`` 的有效范围是[-R, R),R是输入 ``x`` 中Tensor的维度, ``axis`` 为负值时与 :math:`axis + R` 等价。默认值为0。
+    - **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
+
+返回:联结后的Tensor,数据类型和 ``x`` 中的Tensor相同。
+
+抛出异常:
+    - ``TypeError``: - 当输入 ``x`` 的类型不是list或者tuple时。
+    - ``TypeError``: - 当输入 ``x`` 的数据类型不是 bool,float16, float32, float64, int32, int64时。
+    - ``TypeError``: - 当 ``axis`` 的类型不是int或者Tensor时。当 ``axis`` 是Tensor的时候其数据类型不是int32或者int64时。
+    - ``TypeError``: - 当输入 ``x`` 中的Tensor存在数据类型不一致时。
+
+**代码示例**:
+
+.. code-block:: python
+
+    import paddle
+    import numpy as np
+
+    paddle.enable_imperative()  # Now we are in imperative mode
+    in1 = np.array([[1, 2, 3],
+                    [4, 5, 6]])
+    in2 = np.array([[11, 12, 13],
+                    [14, 15, 16]])
+    in3 = np.array([[21, 22],
+                    [23, 24]])
+    x1 = paddle.imperative.to_variable(in1)
+    x2 = paddle.imperative.to_variable(in2)
+    x3 = paddle.imperative.to_variable(in3)
+    zero = paddle.full(shape=[1], dtype='int32', fill_value=0)
+    # When the axis is negative, the real axis is (axis + Rank(x))
+    # As follow, axis is -1, Rank(x) is 2, the real axis is 1
+    out1 = paddle.concat(x=[x1, x2, x3], axis=-1)
+    out2 = paddle.concat(x=[x1, x2], axis=0)
+    out3 = paddle.concat(x=[x1, x2], axis=zero)
+    # out1
+    # [[ 1  2  3 11 12 13 21 22]
+    #  [ 4  5  6 14 15 16 23 24]]
+    # out2 out3
+    # [[ 1  2  3]
+    #  [ 4  5  6]
+    #  [11 12 13]
+    #  [14 15 16]]
...@@ -2,6 +2,53 @@
cumsum
-------------------------------
.. py:function:: paddle.cumsum(x, axis=None, dtype=None, name=None)
沿给定 ``axis`` 计算张量 ``x`` 的累加和。结果的第一个元素和输入的第一个元素相同。
参数:
- **x** (Tensor) - 累加的输入,需要进行累加操作的Tensor。
- **axis** (int,可选) - 指明需要累加的维度。-1代表最后一维。默认:None,将输入展开为一维变量再进行累加计算。
- **dtype** (str,可选) - 输出Tensor的数据类型,支持int32、int64、float32、float64。如果指定了,那么在执行操作之前,输入Tensor将被转换为dtype,这对于防止数据类型溢出非常有用。默认为:None。
- **name** (str,可选)- 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name` 。
返回:累加的结果,即累加器的输出。
返回类型:Tensor
**代码示例**:
.. code-block:: python
import paddle
from paddle.imperative import to_variable
import numpy as np
paddle.enable_imperative()
data_np = np.arange(12).reshape(3, 4)
data = to_variable(data_np)
y = paddle.cumsum(data)
print(y.numpy())
# [ 0 1 3 6 10 15 21 28 36 45 55 66]
y = paddle.cumsum(data, axis=0)
print(y.numpy())
# [[ 0 1 2 3]
# [ 4 6 8 10]
# [12 15 18 21]]
y = paddle.cumsum(data, axis=-1)
print(y.numpy())
# [[ 0 1 3 6]
# [ 4 9 15 22]
# [ 8 17 27 38]]
y = paddle.cumsum(data, dtype='float64')
print(y.dtype)
# VarType.FP64
...@@ -5,17 +5,16 @@ eye
.. py:function:: paddle.tensor.eye(num_rows, num_columns=None, dtype=None, name=None)
该OP用来构建二维Tensor(主对角线元素为1,其他元素为0)。
参数:
- **num_rows** (int) - 生成二维Tensor的行数,数据类型为非负int32。
- **num_columns** (int,可选) - 生成二维Tensor的列数,数据类型为非负int32。若为None,则默认等于num_rows。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选) - 返回Tensor的数据类型,可为float16、float32、float64、int32、int64。若为None, 则默认等于float32。
- **name** (str, 可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回: ``shape`` 为 [num_rows, num_columns]的Tensor。
抛出异常:
- ``TypeError``: - 如果 ``dtype`` 的类型不是float16, float32, float64, int32, int64其中之一。
...@@ -26,8 +25,8 @@ eye
.. code-block:: python
import paddle
paddle.enable_imperative()  # Now we are in imperative mode
data = paddle.eye(3, dtype='int32')
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]]
......
...@@ -5,27 +5,22 @@ full
.. py:function:: paddle.full(shape, fill_value, dtype=None, name=None)
该OP创建形状大小为shape并且数据类型为dtype的Tensor,其中元素值均为 ``fill_value``。
参数:
- **shape** (list|tuple|Tensor) – 指定创建Tensor的形状(shape), 数据类型为int32 或者int64。
- **fill_value** (bool|float|int|Tensor) - 用于初始化输出Tensor的常量数据的值。注意:该参数不可超过输出变量数据类型的表示范围。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若为None,则输出变量的数据类型和输入变量相同,默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:返回一个存储结果的Tensor,数据类型和dtype相同。
抛出异常:
- ``TypeError``: - 如果 ``dtype`` 的类型不是bool, float16, float32, float64, int32, int64其中之一。
- ``TypeError``: - 如果 ``shape`` 的类型不是list或tuple或Tensor。当 ``shape`` 是Tensor的时候,其数据类型不是int32或者int64时。
**代码示例**:
...@@ -38,18 +33,18 @@ full
#[[0]
# [0]]
# attr shape is a list which contains Tensor.
positive_2 = paddle.fill_constant([1], "int32", 2)
data3 = paddle.full(shape=[1, positive_2], dtype='float32', fill_value=1.5)
# [[1.5 1.5]]
# attr shape is a Tensor.
shape = paddle.fill_constant([2], "int32", 2)
data4 = paddle.full(shape=shape, dtype='bool', fill_value=True)
# [[True True]
#  [True True]]
# attr fill_value is a Tensor.
val = paddle.fill_constant([1], "float32", 2.0)
data5 = paddle.full(shape=[2,1], fill_value=val, dtype='float32')
# [[2.0]
......
...@@ -5,24 +5,20 @@ full_like
.. py:function:: paddle.full_like(x, fill_value, dtype=None, name=None)
该OP创建一个和 ``x`` 具有相同的形状并且数据类型为 ``dtype`` 的Tensor,其中元素值均为 ``fill_value`` ,当 ``dtype`` 为None的时候,Tensor数据类型和输入 ``x`` 相同。
参数:
- **x** (Tensor) – 输入Tensor, 输出Tensor和x具有相同的形状,x的数据类型可以是bool,float16,float32,float64,int32,int64。
- **fill_value** (bool|float|int) - 用于初始化输出张量的常量数据的值。注意:该参数不可超过输出变量数据类型的表示范围。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选)- 输出变量的数据类型。若参数为None,则输出变量的数据类型和输入变量相同,默认值为None。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:返回一个存储结果的Tensor,数据类型和dtype相同。
抛出异常:
- ``TypeError``: - 当 ``x`` 的数据类型不是bool、float16、float32、float64、int32、int64其中之一。
- ``TypeError``: - 当 ``dtype`` 不是bool、float16、float32、float64、int32、int64或者None其中之一。
**代码示例**:
......
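full_like 的取值与 dtype 跟随规则与 NumPy 的 ``np.full_like`` 基本对应(此处仅用 NumPy 做语义示意,并非 Paddle 的实现):

```python
import numpy as np

x = np.ones((2, 3), dtype=np.int32)

# dtype=None(不指定)时,输出 dtype 跟随输入 x
y = np.full_like(x, 7)
# 指定 dtype 时按指定类型输出
z = np.full_like(x, 7, dtype=np.float64)
# y.dtype 为 int32,z.dtype 为 float64,形状均与 x 相同,为 (2, 3)
```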
...@@ -5,25 +5,22 @@ index_select
.. py:function:: paddle.index_select(x, index, axis=0, name=None)
该OP沿着指定轴 ``axis`` 对输入 ``x`` 进行索引,取 ``index`` 中指定的相应项,创建并返回一个新的Tensor。这里 ``index`` 是一个 ``1-D`` Tensor。除 ``axis`` 轴外,返回的Tensor其余维度大小和输入 ``x`` 相等, ``axis`` 维度的大小等于 ``index`` 的大小。
**参数**:
- **x** (Tensor)– 输入Tensor。 ``x`` 的数据类型可以是float32,float64,int32,int64。
- **index** (Tensor)– 包含索引下标的一维Tensor。
- **axis** (int, 可选) – 索引轴,若未指定,则默认选取第0维。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
**返回**:
- **Tensor**: 返回一个数据类型同输入的Tensor。
抛出异常:
- ``TypeError`` - 当 ``x`` 或者 ``index`` 的类型不是Tensor。
- ``TypeError`` - 当 ``x`` 的数据类型不是float32、float64、int32、int64其中之一或者 ``index`` 的数据类型不是int32、int64其中之一。
**代码示例**:
......
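index_select 按 ``index`` 沿 ``axis`` 取行/列的语义,与 NumPy 的 ``np.take`` 类似(仅为语义示意,并非 Paddle 的实现):

```python
import numpy as np

x = np.array([[1.,  2.,  3.,  4.],
              [5.,  6.,  7.,  8.],
              [9., 10., 11., 12.]])
index = np.array([0, 1, 1])

out0 = np.take(x, index, axis=0)  # 取第 0、1、1 行,形状 [3, 4]
out1 = np.take(x, index, axis=1)  # 取第 0、1、1 列,形状 [3, 3]
```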
...@@ -6,20 +6,19 @@ linspace
.. py:function:: paddle.linspace(start, stop, num, dtype=None, name=None)
:alias_main: paddle.linspace
:alias: paddle.tensor.linspace, paddle.tensor.creation.linspace
该OP返回一个Tensor,Tensor的值为在区间start和stop上均匀间隔的num个值,输出Tensor的长度为num。
**注意:该OP不进行梯度计算**
参数:
- **start** (float|Tensor) – ``start`` 是区间开始的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **stop** (float|Tensor) – ``stop`` 是区间结束的变量,可以是一个浮点标量,或是一个shape为[1]的Tensor,该Tensor的数据类型可以是float32或者是float64。
- **num** (int|Tensor) – ``num`` 是给定区间内需要划分的区间数,可以是一个整型标量,或是一个shape为[1]的Tensor,该Tensor的数据类型需为int32。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选) – 输出Tensor的数据类型,可以是float32或者是float64。如果dtype为None,默认类型为float32。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:输出结果的数据类型是float32或float64,表示等间隔划分结果的1-D Tensor,该Tensor的shape大小为 :math:`[num]` ,在num为1的情况下,仅返回包含start元素值的Tensor。
......
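linspace 的等间隔划分语义与 NumPy 的 ``np.linspace`` 一致(仅为语义示意,并非 Paddle 的实现):

```python
import numpy as np

out = np.linspace(0.0, 10.0, num=5, dtype='float32')
# [ 0.   2.5  5.   7.5 10. ]

# num 为 1 时仅返回包含 start 的一维结果
single = np.linspace(0.0, 10.0, num=1)
# [0.]
```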
...@@ -5,27 +5,21 @@ ones
.. py:function:: paddle.ones(shape, dtype=None)
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为1的Tensor。
参数:
- **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str, 可选) - 输出Tensor的数据类型,数据类型必须为bool、float16、float32、float64、int32或int64。如果 ``dtype`` 为None,默认数据类型为float32。
- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:值全为1的Tensor,数据类型和 ``dtype`` 定义的类型一致。
抛出异常:
- ``TypeError`` - 当 ``dtype`` 不是bool、float16、float32、float64、int32、int64和None时。
- ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时, 当 ``shape`` 为Tensor时,其数据类型不是int32或者int64。
**代码示例**:
......
...@@ -9,7 +9,7 @@ sort
:alias: paddle.sort,paddle.tensor.sort,paddle.tensor.search.sort
对输入变量沿给定轴进行排序,输出排序好的数据,其维度和输入相同。默认升序排列,如果需要降序排列设置 ``descending=True`` 。
参数:
...@@ -18,9 +18,8 @@ sort
- **descending** (bool,可选) - 指定算法排序的方向。如果设置为True,算法按照降序排序。如果设置为False或者不设置,按照升序排序。默认值为False。
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:Tensor, 排序后的输出(与 ``x`` 维度相同、数据类型相同)。
**代码示例**:
...@@ -41,28 +40,21 @@ sort
out1 = paddle.sort(x=x, axis=-1)
out2 = paddle.sort(x=x, axis=0)
out3 = paddle.sort(x=x, axis=1)
print(out1.numpy())
#[[[5. 5. 8. 9.]
# [0. 0. 1. 7.]
# [2. 4. 6. 9.]]
# [[2. 2. 4. 5.]
# [4. 7. 7. 9.]
# [0. 1. 6. 7.]]]
print(out2.numpy())
#[[[5. 2. 4. 2.]
# [0. 0. 1. 7.]
# [1. 7. 0. 4.]]
# [[5. 8. 9. 5.]
# [4. 7. 7. 9.]
# [6. 9. 2. 6.]]]
print(out3.numpy())
#[[[0. 0. 1. 4.]
# [5. 8. 2. 5.]
# [6. 9. 9. 7.]]
......
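与上例同样的输入,用 NumPy 的 ``np.sort`` 可以复现沿不同轴排序的结果(仅为语义示意,并非 Paddle 的实现;``descending=True`` 可通过反转升序结果来模拟):

```python
import numpy as np

x = np.array([[[5., 8., 9., 5.],
               [0., 0., 1., 7.],
               [6., 9., 2., 4.]],
              [[5., 2., 4., 2.],
               [4., 7., 7., 9.],
               [1., 7., 0., 6.]]])

out1 = np.sort(x, axis=-1)   # 沿最后一维升序
out2 = np.sort(x, axis=0)    # 沿第 0 维升序
out_desc = np.sort(x, axis=-1)[..., ::-1]  # 模拟 descending=True
```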
...@@ -2,39 +2,55 @@
split
-------------------------------
.. py:function:: paddle.tensor.split(x, num_or_sections, axis=0, name=None)
该OP将输入Tensor分割成多个子Tensor。
**参数**:
- **x** (Tensor) - 输入变量,数据类型为bool、float16、float32、float64、int32、int64的多维Tensor。
- **num_or_sections** (int|list|tuple) - 如果 ``num_or_sections`` 是一个整数,则表示Tensor平均划分为相同大小子Tensor的数量。如果 ``num_or_sections`` 是一个list或tuple,那么它的长度代表子Tensor的数量,它的元素可以是整数或者形状为[1]的Tensor,依次代表子Tensor需要分割成的维度的大小。list或tuple的长度不能超过输入Tensor待分割的维度的大小。在list或tuple中,至多有一个元素值为-1,表示该值是由 ``x`` 的维度和其他 ``num_or_sections`` 中元素推断出来的。例如对一个维度为[4,6,6]Tensor的第三维进行分割时,指定 ``num_or_sections=[2,-1,1]`` ,输出的三个Tensor维度分别为:[4,6,2],[4,6,3],[4,6,1]。
- **axis** (int|Tensor,可选) - 整数或者形状为[1]的Tensor,数据类型为int32或int64。表示需要分割的维度。如果 ``axis < 0`` ,则划分的维度为 ``rank(x) + axis`` 。默认值为0。
- **name** (str,可选) – 具体用法请参见 :ref:`api_guide_Name` ,一般无需设置,默认值为None。
返回:分割后的Tensor列表。
抛出异常:
- :code:`TypeError`:``x`` 的数据类型不是float16、float32、float64、int32或int64时。
- :code:`TypeError`:``num_or_sections`` 不是int、list 或 tuple时。
- :code:`TypeError`:``axis`` 不是 int 或 Tensor时。当 ``axis`` 为Tensor,其数据类型不是int32或int64时。
**代码示例**:
.. code-block:: python
import numpy as np
import paddle
paddle.enable_imperative()
# x is a Tensor which shape is [3, 9, 5]
x_np = np.random.random([3, 9, 5]).astype("int32")
x = paddle.imperative.to_variable(x_np)
out0, out1, out2 = paddle.split(x, num_or_sections=3, axis=1)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, 4], axis=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]
out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, -1], axis=1)
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]
# axis is negative, the real axis is (rank(x) + axis) which real
# value is 1.
out0, out1, out2 = paddle.split(x, num_or_sections=3, axis=-2)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
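上面几种切分方式可用 NumPy 的 ``np.split`` 示意(注意:``np.split`` 的列表参数是切分点而非段长,段长需先做前缀和转换;此处仅为语义示意,并非 Paddle 的实现):

```python
import numpy as np

x = np.zeros((3, 9, 5))

# 等分为 3 段(对应 num_or_sections=3, axis=1)
parts_even = np.split(x, 3, axis=1)
# 各段形状均为 (3, 3, 5)

# 按段长 [2, 3, 4] 切分:先把段长转为切分点 [2, 5]
sections = [2, 3, 4]
points = np.cumsum(sections)[:-1]
parts = np.split(x, points, axis=1)
# 各段形状依次为 (3, 2, 5)、(3, 3, 5)、(3, 4, 5)
```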
...@@ -3,31 +3,41 @@
zeros
-------------------------------
.. py:function:: paddle.zeros(shape, dtype=None, name=None)
该OP创建形状为 ``shape`` 、数据类型为 ``dtype`` 且值全为0的Tensor。
参数:
- **shape** (tuple|list|Tensor) - 输出Tensor的形状, ``shape`` 的数据类型为int32或者int64。
- **dtype** (np.dtype|core.VarDesc.VarType|str,可选) - 输出Tensor的数据类型,数据类型必须为bool、float16、float32、float64、int32或int64。若为None,数据类型为float32, 默认为None。
- **name** (str, 可选) - 输出的名字。一般无需设置,默认值为None。该参数供开发人员打印调试信息时使用,具体用法请参见 :ref:`api_guide_Name` 。
返回:值全为0的Tensor,数据类型和 ``dtype`` 定义的类型一致。
抛出异常:
- ``TypeError`` - 当 ``dtype`` 不是bool、float16、float32、float64、int32、int64和None时。
- ``TypeError`` - 当 ``shape`` 不是tuple、list、或者Tensor时, 当 ``shape`` 为Tensor,其数据类型不是int32或者int64时。
**代码示例**:
.. code-block:: python
import paddle
paddle.enable_imperative()  # Now we are in imperative mode
data = paddle.zeros(shape=[3, 2], dtype='float32')
# [[0. 0.]
# [0. 0.]
# [0. 0.]]
data = paddle.zeros(shape=[2, 2])
# [[0. 0.]
# [0. 0.]]
# shape is a Tensor
shape = paddle.fill_constant(shape=[2], dtype='int32', value=2)
data3 = paddle.zeros(shape=shape, dtype='int32')
# [[0 0]
# [0 0]]
...@@ -4,151 +4,152 @@
TensorFlow-Fluid常用接口对应表
###############################
本文档基于TensorFlow v1.15梳理了常用API与PaddlePaddle API对应关系和差异分析。根据文档对应关系,有TensorFlow使用经验的用户,可根据对应关系,快速熟悉PaddlePaddle的接口使用。
.. csv-table::
:header: "序号", "TensorFlow接口", "Fluid接口", "备注"
:widths: 1, 8, 8, 3
"1", "`tf.abs <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/abs>`_", ":ref:`cn_api_fluid_layers_abs`", "功能一致"
"2", "`tf.add <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/add>`_", ":ref:`cn_api_fluid_layers_elementwise_add`", "功能一致"
"3", "`tf.argmax <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/argmax>`_", ":ref:`cn_api_fluid_layers_argmax`", "功能一致"
"4", "`tf.argmin <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/argmin>`_", ":ref:`cn_api_fluid_layers_argmin`", "功能一致"
"5", "`tf.assign <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/assign>`_", ":ref:`cn_api_fluid_layers_assign`", "功能一致"
"6", "`tf.assign_add <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/assign_add>`_", ":ref:`cn_api_fluid_layers_increment`", "功能一致"
"7", "`tf.case <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/case>`_", ":ref:`cn_api_fluid_layers_Switch`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.case.md>`_"
"8", "`tf.cast <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/dtypes/cast>`_", ":ref:`cn_api_fluid_layers_cast`", "功能一致"
"9", "`tf.clip_by_global_norm <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_global_norm>`_", ":ref:`cn_api_fluid_clip_GradientClipByGlobalNorm`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.clip_by_global_norm.md>`_"
"10", "`tf.clip_by_norm <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_norm>`_", ":ref:`cn_api_fluid_layers_clip_by_norm`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.clip_by_norm.md>`_"
"11", "`tf.clip_by_value <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_value>`_", ":ref:`cn_api_fluid_layers_clip`", "功能一致"
"12", "`tf.concat <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/concat>`_", ":ref:`cn_api_fluid_layers_concat`", "功能一致"
"13", "`tf.cond <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/cond>`_", ":ref:`cn_api_fluid_layers_ifElse`", "功能一致"
"14", "`tf.constant <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/constant>`_", ":ref:`cn_api_fluid_layers_fill_constant`", "功能一致"
"15", "`tf.contrib.layers.batch_norm <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/batch_norm>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"16", "`tf.contrib.layers.flatten <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.contrib.layers.flatten.md>`_"
"17", "`tf.contrib.layers.fully_connected <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/fully_connected>`_", ":ref:`cn_api_fluid_layers_fc`", "功能一致"
"18", "`tf.contrib.layers.one_hot_encoding <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/one_hot_encoding>`_", ":ref:`cn_api_fluid_layers_one_hot`", "功能一致"
"19", "`tf.contrib.layers.softmax <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致"
"20", "`tf.contrib.layers.xavier_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/xavier_initializer>`_", ":ref:`cn_api_fluid_initializer_Xavier`", "功能一致"
"21", "`tf.nn.rnn.GRUCell <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/rnn_cell/GRUCell>`_", ":ref:`cn_api_fluid_layers_gru_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn.GRUCell.md>`_"
"22", "`tf.nn.rnn.MultiRNNCell <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/rnn_cell/MultiRNNCell>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.MultiRNNCell.md>`_"
"23", "`tf.nn.rnn.static_rnn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/static_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "功能一致"
"24", "`tf.convert_to_tensor <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/convert_to_tensor>`_", ":ref:`cn_api_fluid_layers_assign`", "功能一致"
"25", "`tf.cos <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/cos>`_", ":ref:`cn_api_fluid_layers_cos`", "功能一致"
"26", "`tf.div <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/div>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"27", "`tf.divide <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/divide>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"28", "`tf.dropout <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dropout.md>`_"
"29", "`tf.equal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/equal>`_", "`运算符== <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"30", "`tf.exp <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/exp>`_", ":ref:`cn_api_fluid_layers_exp`", "功能一致"
"31", "`tf.expand_dims <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/expand_dims>`_", ":ref:`cn_api_fluid_layers_unsqueeze`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.expand_dims.md>`_"
"32", "`tf.fill <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/fill>`_", ":ref:`cn_api_fluid_layers_fill_constant`", "功能一致"
"33", "`tf.floor <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/floor>`_", ":ref:`cn_api_fluid_layers_floor`", "功能一致"
"34", "`tf.gather <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/gather>`_", ":ref:`cn_api_fluid_layers_gather`", "功能一致"
"35", "`tf.greater <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/greater>`_", "`运算符> <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"36", "`tf.greater_equal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/greater_equal>`_", "`运算符>= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"37", "`tf.image.non_max_suppression <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/non_max_suppression>`_", ":ref:`cn_api_fluid_layers_multiclass_nms`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.image.non_max_suppression.md>`_"
"38", "`tf.image.resize_bilinear <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/resize_bilinear>`_", ":ref:`cn_api_fluid_layers_resize_bilinear`", "功能一致"
"39", "`tf.image.resize_images <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/resize_images>`_", ":ref:`cn_api_fluid_layers_image_resize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.image.resize_images.md>`_"
"40", "`tf.image.resize_nearest_neighbor <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/image/resize_nearest_neighbor>`_", ":ref:`cn_api_fluid_layers_resize_nearest`", "功能一致" "40", "`tf.image.resize_nearest_neighbor <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/resize_nearest_neighbor>`_", ":ref:`cn_api_fluid_layers_resize_nearest`", "功能一致"
"41", "`tf.is_finite <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/is_finite>`_", ":ref:`cn_api_fluid_layers_isfinite`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.math.is_finite.md>`_" "41", "`tf.is_finite <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/is_finite>`_", ":ref:`cn_api_fluid_layers_isfinite`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.math.is_finite.md>`_"
"42", "`tf.layers.batch_normalization <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致" "42", "`tf.layers.batch_normalization <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"43", "`tf.layers.conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.conv2d.md>`_" "43", "`tf.layers.conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.conv2d.md>`_"
"44", "`tf.layers.dense <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/dense>`_", ":ref:`cn_api_fluid_layers_fc`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.dense.md>`_" "44", "`tf.layers.dense <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/dense>`_", ":ref:`cn_api_fluid_layers_fc`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.layers.dense.md>`_"
"45", "`tf.layers.dropout <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致" "45", "`tf.layers.dropout <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致"
"46", "`tf.layers.Dropout <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/Dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致" "46", "`tf.layers.Dropout <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/Dropout>`_", ":ref:`cn_api_fluid_layers_dropout`", "功能一致"
"47", "`tf.layers.flatten <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "功能一致" "47", "`tf.layers.flatten <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/layers/flatten>`_", ":ref:`cn_api_fluid_layers_flatten`", "功能一致"
"48", "`tf.less <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/less>`_", "`运算符< <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致" "48", "`tf.less <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/less>`_", "`运算符< <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"49", "`tf.less_equal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/less_equal>`_", "`运算符<= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致" "49", "`tf.less_equal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/less_equal>`_", "`运算符<= <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/compare_op.md>`_", "功能一致"
"50", "`tf.log <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/log>`_", ":ref:`cn_api_fluid_layers_log`", "功能一致" "50", "`tf.log <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/log>`_", ":ref:`cn_api_fluid_layers_log`", "功能一致"
"51", "`tf.logical_and <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/logical_and>`_", ":ref:`cn_api_fluid_layers_logical_and`", "功能一致" "51", "`tf.logical_and <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/logical_and>`_", ":ref:`cn_api_fluid_layers_logical_and`", "功能一致"
"52", "`tf.logical_not <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/logical_not>`_", ":ref:`cn_api_fluid_layers_logical_not`", "功能一致" "52", "`tf.logical_not <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/logical_not>`_", ":ref:`cn_api_fluid_layers_logical_not`", "功能一致"
"53", "`tf.logical_or <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/logical_or>`_", ":ref:`cn_api_fluid_layers_logical_or`", "功能一致" "53", "`tf.logical_or <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/logical_or>`_", ":ref:`cn_api_fluid_layers_logical_or`", "功能一致"
"54", "`tf.losses.mean_squared_error <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/losses/mean_squared_error>`_", ":ref:`cn_api_fluid_layers_square_error_cost`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.mean_and_squared_error.md>`_" "54", "`tf.losses.mean_squared_error <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/mean_squared_error>`_", ":ref:`cn_api_fluid_layers_square_error_cost`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.mean_and_squared_error.md>`_"
"55", "`tf.losses.sigmoid_cross_entropy <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/losses/sigmoid_cross_entropy>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.sigmoid_cross_entropy.md>`_" "55", "`tf.losses.sigmoid_cross_entropy <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/sigmoid_cross_entropy>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.losses.sigmoid_cross_entropy.md>`_"
"56", "`tf.losses.softmax_cross_entropy <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/losses/softmax_cross_entropy>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "功能一致" "56", "`tf.losses.softmax_cross_entropy <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/softmax_cross_entropy>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "功能一致"
"57", "`tf.matmul <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/linalg/matmul>`_", ":ref:`cn_api_fluid_layers_matmul`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.matmul.md>`_" "57", "`tf.matmul <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/linalg/matmul>`_", ":ref:`cn_api_fluid_layers_matmul`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.matmul.md>`_"
"58", "`tf.maximum <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/maximum>`_", ":ref:`cn_api_fluid_layers_elementwise_max`", "功能一致" "58", "`tf.maximum <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/maximum>`_", ":ref:`cn_api_fluid_layers_elementwise_max`", "功能一致"
"59", "`tf.metrics.accuracy <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/metrics/accuracy>`_", ":ref:`cn_api_fluid_layers_accuracy`", "功能一致" "59", "`tf.metrics.accuracy <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/metrics/accuracy>`_", ":ref:`cn_api_fluid_layers_accuracy`", "功能一致"
"60", "`tf.metrics.mean <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/metrics/mean>`_", ":ref:`cn_api_fluid_layers_mean`", "功能一致" "60", "`tf.metrics.mean <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/metrics/mean>`_", ":ref:`cn_api_fluid_layers_mean`", "功能一致"
"61", "`tf.minimum <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/minimum>`_", ":ref:`cn_api_fluid_layers_elementwise_min`", "功能一致" "61", "`tf.minimum <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/minimum>`_", ":ref:`cn_api_fluid_layers_elementwise_min`", "功能一致"
"62", "`tf.multiply <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/multiply>`_", ":ref:`cn_api_fluid_layers_elementwise_mul`", "功能一致" "62", "`tf.multiply <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/multiply>`_", ":ref:`cn_api_fluid_layers_elementwise_mul`", "功能一致"
"63", "`tf.nn.avg_pool <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/avg_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.avg_pool.md>`_" "63", "`tf.nn.avg_pool <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/avg_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.avg_pool.md>`_"
"64", "`tf.nn.batch_normalization <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致" "64", "`tf.nn.batch_normalization <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/batch_normalization>`_", ":ref:`cn_api_fluid_layers_batch_norm`", "功能一致"
"65", "`tf.nn.bidirectional_dynamic_rnn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/bidirectional_dynamic_rnn>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.bidirectional_dynamic_rnn.md>`_" "65", "`tf.nn.bidirectional_dynamic_rnn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/bidirectional_dynamic_rnn>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.bidirectional_dynamic_rnn.md>`_"
"66", "`tf.nn.conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d.md>`_" "66", "`tf.nn.conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d.md>`_"
"67", "`tf.nn.conv2d_transpose <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv2d_transpose>`_", ":ref:`cn_api_fluid_layers_conv2d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d_transpose.md>`_" "67", "`tf.nn.conv2d_transpose <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/conv2d_transpose>`_", ":ref:`cn_api_fluid_layers_conv2d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv2d_transpose.md>`_"
"68", "`tf.nn.conv3d_transpose <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv3d_transpose>`_", ":ref:`cn_api_fluid_layers_conv3d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv3d_transpose.md>`_" "68", "`tf.nn.conv3d_transpose <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/conv3d_transpose>`_", ":ref:`cn_api_fluid_layers_conv3d_transpose`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.conv3d_transpose.md>`_"
"69", "`tf.nn.depthwise_conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/depthwise_conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.depthwise_conv2d.md>`_" "69", "`tf.nn.depthwise_conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/depthwise_conv2d>`_", ":ref:`cn_api_fluid_layers_conv2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.depthwise_conv2d.md>`_"
"70", "`tf.nn.dynamic_rnn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/dynamic_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dynamic_rnn.md>`_" "70", "`tf.nn.dynamic_rnn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/dynamic_rnn>`_", ":ref:`cn_api_fluid_layers_DynamicRNN`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.dynamic_rnn.md>`_"
"71", "`tf.nn.l2_normalize <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/l2_normalize>`_", ":ref:`cn_api_fluid_layers_l2_normalize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.l2_normalize.md>`_" "71", "`tf.nn.l2_normalize <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/l2_normalize>`_", ":ref:`cn_api_fluid_layers_l2_normalize`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.l2_normalize.md>`_"
"72", "`tf.nn.leaky_relu <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/leaky_relu>`_", ":ref:`cn_api_fluid_layers_leaky_relu`", "功能一致" "72", "`tf.nn.leaky_relu <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/leaky_relu>`_", ":ref:`cn_api_fluid_layers_leaky_relu`", "功能一致"
"73", "`tf.nn.lrn <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/local_response_normalization>`_", ":ref:`cn_api_fluid_layers_lrn`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.lrn.md>`_" "73", "`tf.nn.lrn <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/local_response_normalization>`_", ":ref:`cn_api_fluid_layers_lrn`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.lrn.md>`_"
"74", "`tf.nn.max_pool <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/max_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.max_pool.md>`_" "74", "`tf.nn.max_pool <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/max_pool>`_", ":ref:`cn_api_fluid_layers_pool2d`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.max_pool.md>`_"
"75", "`tf.nn.relu <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/relu>`_", ":ref:`cn_api_fluid_layers_relu`", "功能一致" "75", "`tf.nn.relu <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/relu>`_", ":ref:`cn_api_fluid_layers_relu`", "功能一致"
"76", "`tf.nn.relu6 <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/relu6>`_", ":ref:`cn_api_fluid_layers_relu6`", "功能一致" "76", "`tf.nn.relu6 <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/relu6>`_", ":ref:`cn_api_fluid_layers_relu6`", "功能一致"
"77", "`tf.nn.rnn_cell.LSTMCell <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/rnn_cell/LSTMCell>`_", ":ref:`cn_api_fluid_layers_lstm_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.LSTMCell.md>`_" "77", "`tf.nn.rnn_cell.LSTMCell <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/rnn_cell/LSTMCell>`_", ":ref:`cn_api_fluid_layers_lstm_unit`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.rnn_cell.LSTMCell.md>`_"
"78", "`tf.nn.separable_conv2d <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/separable_conv2d>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.separable_conv2d.md>`_" "78", "`tf.nn.separable_conv2d <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/separable_conv2d>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.separable_conv2d.md>`_"
"79", "`tf.nn.sigmoid <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致" "79", "`tf.nn.sigmoid <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致"
"80", "`tf.nn.sigmoid_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "功能一致" "80", "`tf.nn.sigmoid_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_sigmoid_cross_entropy_with_logits`", "功能一致"
"81", "`tf.nn.softmax <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致" "81", "`tf.nn.softmax <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/softmax>`_", ":ref:`cn_api_fluid_layers_softmax`", "功能一致"
"82", "`tf.nn.softmax_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/softmax_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.softmax_cross_entropy_with_logits.md>`_" "82", "`tf.nn.softmax_cross_entropy_with_logits <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/softmax_cross_entropy_with_logits>`_", ":ref:`cn_api_fluid_layers_softmax_with_cross_entropy`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.softmax_cross_entropy_with_logits.md>`_"
"83", "`tf.nn.softplus <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/softplus>`_", ":ref:`cn_api_fluid_layers_softplus`", "功能一致" "83", "`tf.nn.softplus <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/softplus>`_", ":ref:`cn_api_fluid_layers_softplus`", "功能一致"
"84", "`tf.nn.softsign <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/softsign>`_", ":ref:`cn_api_fluid_layers_softsign`", "功能一致" "84", "`tf.nn.softsign <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/softsign>`_", ":ref:`cn_api_fluid_layers_softsign`", "功能一致"
"85", "`tf.nn.tanh <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致" "85", "`tf.nn.tanh <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致"
"86", "`tf.one_hot <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/one_hot>`_", ":ref:`cn_api_fluid_layers_one_hot`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.one_hot.md>`_" "86", "`tf.one_hot <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/one_hot>`_", ":ref:`cn_api_fluid_layers_one_hot`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.one_hot.md>`_"
"87", "`tf.ones <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/ones>`_", ":ref:`cn_api_fluid_layers_ones`", "功能一致" "87", "`tf.ones <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/ones>`_", ":ref:`cn_api_fluid_layers_ones`", "功能一致"
"88", "`tf.intializers.ones <https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/initializers/ones>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致" "88", "`tf.intializers.ones <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/ones>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致"
"89", "`tf.pad <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/pad>`_", ":ref:`cn_api_fluid_layers_pad`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pad.md>`_" "89", "`tf.pad <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/pad>`_", ":ref:`cn_api_fluid_layers_pad`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pad.md>`_"
"90", "`tf.placeholder <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/placeholder>`_", ":ref:`cn_api_fluid_layers_data`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.placeholder.md>`_" "90", "`tf.placeholder <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/placeholder>`_", ":ref:`cn_api_fluid_layers_data`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.placeholder.md>`_"
"91", "`tf.pow <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/pow>`_", ":ref:`cn_api_fluid_layers_pow`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pow.md>`_" "91", "`tf.pow <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/pow>`_", ":ref:`cn_api_fluid_layers_pow`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.pow.md>`_"
"92", "`tf.print <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/print>`_", ":ref:`cn_api_fluid_layers_print`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.print.md>`_" "92", "`tf.print <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/print>`_", ":ref:`cn_api_fluid_layers_print`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.print.md>`_"
"93", "`tf.py_func <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/py_func>`_", ":ref:`cn_api_fluid_layers_py_func`", "功能一致" "93", "`tf.py_func <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/py_func>`_", ":ref:`cn_api_fluid_layers_py_func`", "功能一致"
"94", "`tf.random_normal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/random/normal>`_", ":ref:`cn_api_fluid_layers_gaussian_random`", "功能一致" "94", "`tf.random_normal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/random/normal>`_", ":ref:`cn_api_fluid_layers_gaussian_random`", "功能一致"
"95", "`tf.random_normal_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/initializers/random_normal>`_", ":ref:`cn_api_fluid_initializer_Normal`", "功能一致" "95", "`tf.random_normal_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/random_normal>`_", ":ref:`cn_api_fluid_initializer_Normal`", "功能一致"
"96", "`tf.random_uniform <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/random/uniform>`_", ":ref:`cn_api_fluid_layers_uniform_random`", "功能一致" "96", "`tf.random_uniform <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/random/uniform>`_", ":ref:`cn_api_fluid_layers_uniform_random`", "功能一致"
"97", "`tf.random_uniform_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/initializers/random_uniform>`_", ":ref:`cn_api_fluid_initializer_UniformInitializer`", "功能一致" "97", "`tf.random_uniform_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/random_uniform>`_", ":ref:`cn_api_fluid_initializer_UniformInitializer`", "功能一致"
"98", "`tf.reduce_logsumexp <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_logsumexp>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.reduce_logsumexp.md>`_" "98", "`tf.reduce_logsumexp <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_logsumexp>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.reduce_logsumexp.md>`_"
"99", "`tf.reduce_max <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_max>`_", ":ref:`cn_api_fluid_layers_reduce_max`", "功能一致" "99", "`tf.reduce_max <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_max>`_", ":ref:`cn_api_fluid_layers_reduce_max`", "功能一致"
"100", "`tf.reduce_mean <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_mean>`_", ":ref:`cn_api_fluid_layers_reduce_mean`", "功能一致" "100", "`tf.reduce_mean <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_mean>`_", ":ref:`cn_api_fluid_layers_reduce_mean`", "功能一致"
"101", "`tf.reduce_min <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_min>`_", ":ref:`cn_api_fluid_layers_reduce_min`", "功能一致" "101", "`tf.reduce_min <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_min>`_", ":ref:`cn_api_fluid_layers_reduce_min`", "功能一致"
"102", "`tf.reduce_sum <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/reduce_sum>`_", ":ref:`cn_api_fluid_layers_reduce_sum`", "功能一致" "102", "`tf.reduce_sum <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/reduce_sum>`_", ":ref:`cn_api_fluid_layers_reduce_sum`", "功能一致"
"103", "`tf.reshape <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reshape>`_", ":ref:`cn_api_fluid_layers_reshape`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reshape.md>`_" "103", "`tf.reshape <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reshape>`_", ":ref:`cn_api_fluid_layers_reshape`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reshape.md>`_"
"104", "`tf.reverse <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致" "104", "`tf.reverse <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致"
"105", "`tf.reverse_sequence <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reverse_sequence>`_", ":ref:`cn_api_fluid_layers_sequence_reverse`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reverse_sequence.md>`_" "105", "`tf.reverse_sequence <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reverse_sequence>`_", ":ref:`cn_api_fluid_layers_sequence_reverse`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.reverse_sequence.md>`_"
"106", "`tf.reverse_v2 <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致" "106", "`tf.reverse_v2 <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reverse>`_", ":ref:`cn_api_fluid_layers_reverse`", "功能一致"
"107", "`tf.round <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/round>`_", ":ref:`cn_api_fluid_layers_round`", "功能一致" "107", "`tf.round <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/round>`_", ":ref:`cn_api_fluid_layers_round`", "功能一致"
"108", "`tf.rsqrt <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/rsqrt>`_", ":ref:`cn_api_fluid_layers_rsqrt`", "功能一致" "108", "`tf.rsqrt <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/rsqrt>`_", ":ref:`cn_api_fluid_layers_rsqrt`", "功能一致"
"109", "`tf.scalar_mul <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/scalar_mul>`_", ":ref:`cn_api_fluid_layers_scale`", "功能一致" "109", "`tf.scalar_mul <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/scalar_mul>`_", ":ref:`cn_api_fluid_layers_scale`", "功能一致"
"110", "`tf.scatter_update <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/scatter_update>`_", ":ref:`cn_api_fluid_layers_scatter`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.scatter_update.md>`_" "110", "`tf.scatter_update <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/scatter_update>`_", ":ref:`cn_api_fluid_layers_scatter`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.scatter_update.md>`_"
"111", "`tf.sequence_mask <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/sequence_mask>`_", ":ref:`cn_api_fluid_layers_sequence_mask`", "功能一致" "111", "`tf.sequence_mask <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/sequence_mask>`_", ":ref:`cn_api_fluid_layers_sequence_mask`", "功能一致"
"112", "`tf.shape <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/shape>`_", ":ref:`cn_api_fluid_layers_shape`", "功能一致" "112", "`tf.shape <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/shape>`_", ":ref:`cn_api_fluid_layers_shape`", "功能一致"
"113", "`tf.sigmoid <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致" "113", "`tf.sigmoid <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sigmoid>`_", ":ref:`cn_api_fluid_layers_sigmoid`", "功能一致"
"114", "`tf.sin <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sin>`_", ":ref:`cn_api_fluid_layers_sin`", "功能一致" "114", "`tf.sin <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sin>`_", ":ref:`cn_api_fluid_layers_sin`", "功能一致"
"115", "`tf.slice <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/slice>`_", ":ref:`cn_api_fluid_layers_slice`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.slice.md>`_" "115", "`tf.slice <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/slice>`_", ":ref:`cn_api_fluid_layers_slice`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.slice.md>`_"
"116", "`tf.split <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/split>`_", ":ref:`cn_api_fluid_layers_split`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.split.md>`_" "116", "`tf.split <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/split>`_", ":ref:`cn_api_fluid_layers_split`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.split.md>`_"
"117", "`tf.sqrt <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/sqrt>`_", ":ref:`cn_api_fluid_layers_sqrt`", "功能一致" "117", "`tf.sqrt <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/sqrt>`_", ":ref:`cn_api_fluid_layers_sqrt`", "功能一致"
"118", "`tf.square <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/square>`_", ":ref:`cn_api_fluid_layers_square`", "功能一致" "118", "`tf.square <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/square>`_", ":ref:`cn_api_fluid_layers_square`", "功能一致"
"119", "`tf.squared_difference <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/squared_difference>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.squared_difference.md>`_" "119", "`tf.squared_difference <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/squared_difference>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.squared_difference.md>`_"
"120", "`tf.squeeze <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/squeeze>`_", ":ref:`cn_api_fluid_layers_squeeze`", "功能一致" "120", "`tf.squeeze <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/squeeze>`_", ":ref:`cn_api_fluid_layers_squeeze`", "功能一致"
"121", "`tf.stack <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/stack>`_", ":ref:`cn_api_fluid_layers_stack`", "功能一致" "121", "`tf.stack <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/stack>`_", ":ref:`cn_api_fluid_layers_stack`", "功能一致"
"122", "`tf.stop_gradient <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/stop_gradient>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.stop_gradient.md>`_" "122", "`tf.stop_gradient <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/stop_gradient>`_", "无相应接口", "`Fluid实现 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.stop_gradient.md>`_"
"123", "`tf.subtract <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/subtract>`_", ":ref:`cn_api_fluid_layers_elementwise_sub`", "功能一致" "123", "`tf.subtract <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/subtract>`_", ":ref:`cn_api_fluid_layers_elementwise_sub`", "功能一致"
"124", "`tf.tanh <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致" "124", "`tf.tanh <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/tanh>`_", ":ref:`cn_api_fluid_layers_tanh`", "功能一致"
"125", "`tf.tile <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/tile>`_", ":ref:`cn_api_fluid_layers_expand`", "功能一致" "125", "`tf.tile <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/tile>`_", ":ref:`cn_api_fluid_layers_expand`", "功能一致"
"126", "`tf.top_k <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/top_k>`_", ":ref:`cn_api_fluid_layers_topk`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.top_k.md>`_" "126", "`tf.top_k <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/top_k>`_", ":ref:`cn_api_fluid_layers_topk`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.nn.top_k.md>`_"
"127", "`tf.train.AdagradOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/AdagradOptimizer>`_", ":ref:`cn_api_fluid_optimizer_AdagradOptimizer`", "功能一致" "127", "`tf.train.AdagradOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/AdagradOptimizer>`_", ":ref:`cn_api_fluid_optimizer_AdagradOptimizer`", "功能一致"
"128", "`tf.train.AdamOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/AdamOptimizer>`_", ":ref:`cn_api_fluid_optimizer_Adam`", "功能一致" "128", "`tf.train.AdamOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/AdamOptimizer>`_", ":ref:`cn_api_fluid_optimizer_Adam`", "功能一致"
"129", "`tf.train.exponential_decay <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/exponential_decay>`_", ":ref:`cn_api_fluid_layers_exponential_decay`", "功能一致" "129", "`tf.train.exponential_decay <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/exponential_decay>`_", ":ref:`cn_api_fluid_layers_exponential_decay`", "功能一致"
"130", "`tf.train.GradientDescentOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/GradientDescentOptimizer>`_", ":ref:`cn_api_fluid_optimizer_SGDOptimizer`", "功能一致" "130", "`tf.train.GradientDescentOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/GradientDescentOptimizer>`_", ":ref:`cn_api_fluid_optimizer_SGDOptimizer`", "功能一致"
"131", "`tf.train.MomentumOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/MomentumOptimizer>`_", ":ref:`cn_api_fluid_optimizer_MomentumOptimizer`", "功能一致" "131", "`tf.train.MomentumOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/MomentumOptimizer>`_", ":ref:`cn_api_fluid_optimizer_MomentumOptimizer`", "功能一致"
"132", "`tf.train.polynomial_decay <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/polynomial_decay>`_", ":ref:`cn_api_fluid_layers_polynomial_decay`", "功能一致" "132", "`tf.train.polynomial_decay <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/polynomial_decay>`_", ":ref:`cn_api_fluid_layers_polynomial_decay`", "功能一致"
"133", "`tf.train.RMSPropOptimizer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train/RMSPropOptimizer>`_", ":ref:`cn_api_fluid_optimizer_RMSPropOptimizer`", "功能一致" "133", "`tf.train.RMSPropOptimizer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train/RMSPropOptimizer>`_", ":ref:`cn_api_fluid_optimizer_RMSPropOptimizer`", "功能一致"
"134", "`tf.transpose <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/transpose>`_", ":ref:`cn_api_fluid_layers_transpose`", "功能一致" "134", "`tf.transpose <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/transpose>`_", ":ref:`cn_api_fluid_layers_transpose`", "功能一致"
"135", "`tf.truediv <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/math/truediv>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致" "135", "`tf.truediv <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/math/truediv>`_", ":ref:`cn_api_fluid_layers_elementwise_div`", "功能一致"
"136", "`tf.truncated_normal <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/random/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致" "136", "`tf.truncated_normal <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/random/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致"
"137", "`tf.truncated_normal_initializer <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/initializers/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致" "137", "`tf.truncated_normal_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/initializers/truncated_normal>`_", ":ref:`cn_api_fluid_initializer_TruncatedNormal`", "功能一致"
"138", "`tf.unstack <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/unstack>`_", ":ref:`cn_api_fluid_layers_unstack`", "功能一致" "138", "`tf.unstack <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/unstack>`_", ":ref:`cn_api_fluid_layers_unstack`", "功能一致"
"139", "`tf.Variable <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/Variable>`_", ":ref:`cn_api_fluid_layers_create_parameter`", "功能一致" "139", "`tf.Variable <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Variable>`_", ":ref:`cn_api_fluid_layers_create_parameter`", "功能一致"
"140", "`tf.while_loop <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/while_loop>`_", ":ref:`cn_api_fluid_layers_While`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.while_loop.md>`_" "140", "`tf.while_loop <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/while_loop>`_", ":ref:`cn_api_fluid_layers_While`", "`差异对比 <https://github.com/PaddlePaddle/X2Paddle/blob/master/tensorflow2fluid/doc/tf.while_loop.md>`_"
"141", "`tf.zeros <https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/zeros>`_", ":ref:`cn_api_fluid_layers_zeros`", "功能一致" "141", "`tf.zeros <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/zeros>`_", ":ref:`cn_api_fluid_layers_zeros`", "功能一致"
"142", "`tf.zeros_initializer <https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/zeros_initializer>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致" "142", "`tf.zeros_initializer <https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/zeros_initializer>`_", ":ref:`cn_api_fluid_initializer_Constant`", "功能一致"
.. _cn_user_guide_broadcasting:

==================
Broadcasting
==================

Like other frameworks, PaddlePaddle (hereafter Paddle) provides broadcasting semantics in some of its APIs, allowing tensors of different shapes to be used together in certain operations.
Generally, given a smaller tensor and a larger tensor, we want to reuse the smaller tensor multiple times when operating on the larger one: conceptually, the smaller tensor's shape is first expanded to match the larger tensor's shape, and then the operation is performed.
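Paddle's broadcasting semantics follow NumPy-like rules; purely as an illustrative sketch of the idea (using NumPy here rather than Paddle):

```python
import numpy as np

# A [3, 4] tensor and a [4] tensor: the smaller shape is right-aligned
# and virtually expanded to [3, 4] before the elementwise addition.
x = np.ones((3, 4))
y = np.arange(4)          # shape [4]
z = x + y                 # y is broadcast across each row of x

print(z.shape)            # (3, 4)
print(z[0].tolist())      # [1.0, 2.0, 3.0, 4.0]
```

No copy of the smaller tensor is actually materialized; the expansion is only conceptual.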
......
.. _user_guide_broadcasting:

==================
Broadcasting
==================

PaddlePaddle provides broadcasting semantics in some APIs, as other deep learning frameworks do, which allows using tensors with different shapes while operating.
In general, broadcasting is the rule for how the smaller tensor is "broadcast" across the larger tensor so that the two end up with compatible shapes.
...@@ -98,4 +98,4 @@ For example:

    z = paddle.elementwise_add(x, y, axis=1)
    print(z.shape)
    # z's shape: [2, 3, 4, 5]
    # The shape comparison starts at axis=1 and proceeds from front to back.
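Assuming x has shape [2, 3, 4, 5] and y has shape [3, 4] in the example above, the effect of `axis=1` can be reproduced with plain NumPy by making the alignment explicit (a sketch of the semantics, not Paddle's actual implementation):

```python
import numpy as np

x = np.ones((2, 3, 4, 5))
y = np.ones((3, 4))

# axis=1 aligns y's dimensions with x's dimensions 1 and 2;
# in NumPy this corresponds to reshaping y to [1, 3, 4, 1]
# so standard right-aligned broadcasting produces the same result.
z = x + y.reshape((1, 3, 4, 1))

print(z.shape)  # (2, 3, 4, 5)
```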
...@@ -11,7 +11,7 @@
- `Operator <operator.html>`_ : An Operator represents an operation on data.
- `Program <program.html>`_ : A Program describes the computation process.
- `Executor <executor.html>`_ : An Executor is the execution engine.
- `Broadcasting <broadcasting.html>`_ : Notes on Paddle's broadcasting support.

.. toctree::
    :hidden:

...@@ -22,4 +22,4 @@
    operator.rst
    program.rst
    executor.rst
    broadcasting.rst
...@@ -6,13 +6,13 @@ This paper introduces the basic concepts of Paddle:
- `Guide to Fluid Programming <./programming_guide/programming_guide_en.html>`_ : introduces the basic concepts and usage of Paddle.
- `LoD-Tensor User Guide <lod_tensor_en.html>`_ : LoD-Tensor is a high-level feature of Paddle. It adds sequence information on top of tensors and supports processing variable-length data.
- `Broadcasting <broadcasting_en.html>`_ : introduces the broadcasting semantics provided by Paddle.

.. toctree::
    :hidden:

    programming_guide/programming_guide_en.md
    lod_tensor_en.rst
    broadcasting_en.rst
.. _cn_user_guide_Operator:

=========
Operator
=========

In PaddlePaddle (hereafter Paddle), every operation on data is represented by an Operator.
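As a conceptual sketch only (plain Python, not Paddle's real classes): Operators are appended to a Program as layers are declared, and an Executor later runs them in order.

```python
# Toy model of the Program / Operator / Executor relationship.
# Names and structure are illustrative, not Paddle's actual API.
class Operator:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

class Program:
    def __init__(self):
        self.ops = []
    def append_op(self, name, fn):
        self.ops.append(Operator(name, fn))

class Executor:
    def run(self, program, feed):
        value = feed
        for op in program.ops:      # execute operators in insertion order
            value = op.fn(value)
        return value

prog = Program()
prog.append_op("scale", lambda v: v * 2)       # analogous to a scale op
prog.append_op("increment", lambda v: v + 1)   # analogous to an increment op

result = Executor().run(prog, feed=10)
print(result)  # 21
```

The key point this mirrors is that declaring layers only builds a description; nothing computes until the Executor runs the Program.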
......
...@@ -25,7 +25,6 @@
wget http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
rpm -i nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
yum update -y
yum install -y libnccl-2.3.7-2+cuda9.0 libnccl-devel-2.3.7-2+cuda9.0 libnccl-static-2.3.7-2+cuda9.0
...@@ -119,7 +118,7 @@
> Install protobuf.

`yum install patchelf`

> Install patchelf. PatchELF is a small and useful utility for modifying the dynamic linker and RPATH of ELF executables.
...@@ -153,7 +152,7 @@
Congratulations! You have now finished compiling and installing PaddlePaddle. Simply enter the Docker container and run PaddlePaddle to start using it. For more on Docker, see the [official Docker documentation](https://docs.docker.com).

> Note: to reduce its size, the PaddlePaddle Docker image does not install `vim` by default. You can run `yum install -y vim` inside the container to install it.
<a name="ct_source"></a> <a name="ct_source"></a>
### **本机编译** ### **本机编译**
...@@ -206,7 +205,7 @@
* The installation method for `patchELF` is provided here; other dependencies can be installed with `yum install` or `pip install`/`pip3 install` followed by the dependency name and version:

`yum install patchelf`

> Users who cannot install via yum, please refer to the patchELF GitHub [official documentation](https://gist.github.com/ruario/80fefd174b3395d34c14).

7. Clone the PaddlePaddle source into a Paddle folder under the current directory, and enter the Paddle directory:
......
...@@ -25,7 +25,6 @@
wget http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
rpm -i nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
yum update -y
yum install -y libnccl-2.3.7-2+cuda9.0 libnccl-devel-2.3.7-2+cuda9.0 libnccl-static-2.3.7-2+cuda9.0
...@@ -109,7 +108,7 @@ Please follow the steps below to install:
`mkdir -p /paddle/build && cd /paddle/build`
7. Use the following command to install the dependencies:
For Python2: pip install protobuf
...@@ -119,7 +118,7 @@ Please follow the steps below to install:
> Install protobuf 3.1.0

`yum install patchelf`

> Install patchelf. PatchELF is a small and useful program for modifying the dynamic linker and RPATH of ELF executables.
...@@ -145,7 +144,7 @@ Please follow the steps below to install:
10. After compiling successfully, go to the `/paddle/build/python/dist` directory and find the generated `.whl` package: `cd /paddle/build/python/dist`
11. Install the compiled `.whl` package on the current machine or target machine:
For Python2: pip install -U (whl package name)
For Python3: pip3.5 install -U (whl package name)
...@@ -154,7 +153,7 @@ Please follow the steps below to install:
Congratulations, now that you have successfully installed PaddlePaddle using Docker, you only need to run PaddlePaddle after entering the Docker container. For more Docker usage, please refer to the [official Docker documentation](https://docs.docker.com/).

> Note: In order to reduce the size, `vim` is not installed in the PaddlePaddle Docker image by default. You can edit the code in the container after executing `yum install -y vim` in the container.
<a name="ct_source"></a>
### **Local compilation**
...@@ -215,7 +214,7 @@ Congratulations, now that you have successfully installed PaddlePaddle using Docker
* Here is the installation method for `patchELF`. Other dependencies can be installed using `yum install` or `pip install`/`pip3 install` followed by the name and version:

`yum install patchelf`

> Users who cannot install via yum can refer to the patchELF GitHub [official documentation](https://gist.github.com/ruario/80fefd174b3395d34c14).

7. Clone the PaddlePaddle source into a Paddle folder under the current directory and go to the Paddle directory:
......
...@@ -22,33 +22,49 @@ common_args_en = """
    include_sublayers (bool, optional): Whether include the sublayers. If True, return list includes the sublayers weights. Default is True.
    stride (tuple|int): The stride size. It can be a single integer or a tuple containing two integers, representing the strides of the convolution along the height and width. If it is a single integer, the height and width are equal to the integer. Default is 1.
    groups (int, optional): The group number of convolution layer. When group=n, the input and convolution kernels are divided into n groups equally, the first group of convolution kernels and the first group of inputs are subjected to convolution calculation, the second group of convolution kernels and the second group of inputs are subjected to convolution calculation, ……, the nth group of convolution kernels and the nth group of inputs perform convolution calculations. Default is 1.
    regularization (WeightDecayRegularizer, optional): The strategy of regularization. There are two methods: :ref:`api_fluid_regularizer_L1Decay` , :ref:`api_fluid_regularizer_L2Decay` . If a parameter has already set a regularizer using :ref:`api_fluid_ParamAttr` , the regularization setting here in the optimizer will be ignored for this parameter. Otherwise, the regularization setting here in the optimizer will take effect. Default None, meaning there is no regularization.
    grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` . There are three clipping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default None, meaning there is no gradient clipping.
    dilation (tuple|int): The dilation size. It can be a single integer or a tuple containing two integers, representing the height and width of dilation of the convolution kernel elements. If it is a single integer, the height and width of dilation are equal to the integer. Default is 1.
    stop_gradient (bool, optional): A boolean that mentions whether gradient should flow. Default is True, which means gradient calculation stops.
    force_cpu (bool, optional): Whether force to store the output tensor in CPU memory. If force_cpu is False, the output tensor will be stored in running device memory, otherwise it will be stored to the CPU memory. Default is False.
    data_format (str, optional): Specify the input data format, the output data format will be consistent with the input, which can be "NCHW" or "NHWC". N is batch size, C is channels, H is height, and W is width. Default is "NCHW".
    grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` . There are three clipping strategies ( :ref:`api_fluid_clip_GradientClipByGlobalNorm` , :ref:`api_fluid_clip_GradientClipByNorm` , :ref:`api_fluid_clip_GradientClipByValue` ). Default is None, meaning there is no gradient clipping.
    num_filters (int): The number of filters. It is the same as the number of output channels.
    dim (int, optional): A dimension along which to operate. Default is 0.
    is_sparse (bool, optional): Whether to use sparse updating. For more information, please refer to :ref:`api_guide_sparse_update_en` . If it's True, sparse updating will be used.
    place (fluid.CPUPlace()|fluid.CUDAPlace(N)|None): This parameter represents which device the executor runs on, and N means the GPU's id. When this parameter is None, PaddlePaddle will set the default device according to its installation version. If Paddle is the CPU version, the default device would be set to CPUPlace(). If Paddle is the GPU version, the default device would be set to CUDAPlace(0). Default is None.
    num_filters (int): The number of convolution kernels, which is also the number of output channels.
"""
common_args_cn = """
    x (Tensor) - 输入的 `Tensor` ,数据类型为:float32、float64、int32、int64。
    y (Tensor) - 输入的 `Tensor` ,数据类型为:float32、float64、int32、int64。
    name (str,可选) - 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
    dtype (str,可选) - 输出 `Tensor` 的数据类型,支持int32、int64、float32、float64。
    param_attr (ParamAttr,可选) - 该Layer的可学习的权重(Parameter)的参数属性。更多信息请参见 :ref:`cn_api_fluid_ParamAttr`。
    bias_attr (ParamAttr,可选) - 该Layer的可学习的偏置(Bias)的参数属性。更多信息请参见 :ref:`cn_api_fluid_ParamAttr`。
    label (Tensor) - 训练数据的标签,数据类型为:int32, int64。
    learning_rate (Tensor|float) - 学习率,可以是一个 `Tensor` 或者是一个浮点数。默认值为1e-03。
    axis (int,可选) - 指定对输入 `Tensor` 进行运算的轴。默认值为0。
    epsilon (float,可选) - 添加到分母上的值以防止分母除0。默认值为1e-05。
    is_test (bool,可选) - 用于表明是否在测试阶段执行。默认值为False,表示非测试阶段。
    shape (Tensor|tuple|list) - `Tensor` 的形状。如果 `shape` 是一个列表或元组,则其元素应该是形状为[1]的整数或 `Tensor` 。如果 `shape` 是 `Tensor` ,则它应该是1-D `Tensor` 。
    keep_dim (bool) - 是否在输出 `Tensor` 中保留减小的维度。如 `keep_dim` 为True,否则结果张量的维度将比输入张量小,默认值为False。
    filter_size (tuple|list|int) - 卷积核大小。可以为单个整数或包含两个整数的元组或列表,分别表示卷积核的高和宽。如果为单个整数,表示卷积核的高和宽都等于该整数。
    padding (tuple|int) - 填充大小。可以为单个整数或包含两个整数的元组,分别表示对输入高和宽两侧填充的大小。如果为单个整数,表示高和宽的填充都等于该整数。默认值为0。
    include_sublayers (bool,可选) - 是否返回子层的参数。如果为True,返回的列表中包含子层的参数。默认值为True。
    stride (tuple|int) - 步长大小。可以为单个整数或包含两个整数的元组,分别表示卷积沿着高和宽的步长。如果为单个整数,表示沿着高和宽的步长都等于该整数。默认值为1。
    groups (int,可选) - 卷积的组数。当group=n,输入和卷积核分别平均分为n组,第一组卷积核和第一组输入进行卷积计算,第二组卷积核和第二组输入进行卷积计算,……,第n组卷积核和第n组输入进行卷积计算。默认值为1。
    regularization (WeightDecayRegularizer,可选) - 正则化方法。支持两种正则化策略: :ref:`cn_api_fluid_regularizer_L1Decay` 、 :ref:`cn_api_fluid_regularizer_L2Decay` 。如果一个参数已经在 :ref:`cn_api_fluid_ParamAttr` 中设置了正则化,这里的正则化设置将被忽略;如果没有在 :ref:`cn_api_fluid_ParamAttr` 中设置正则化,这里的设置才会生效。默认值为None,表示没有正则化。
    grad_clip (GradientClipBase,可选) - 梯度裁剪的策略,支持三种裁剪策略: :ref:`cn_api_fluid_clip_GradientClipByGlobalNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByValue` 。
    dilation (tuple|int,可选) - 空洞大小。可以为单个整数或包含两个整数的元组,分别表示卷积核中的元素沿着高和宽的空洞。如果为单个整数,表示高和宽的空洞都等于该整数。默认值为1。
    stop_gradient (bool,可选) - 提示是否应该停止计算梯度,默认值为True,表示停止计算梯度。
    force_cpu (bool,可选) - 是否强制将输出Tensor写入CPU内存。如果为False,则将输出Tensor写入当前所在运算设备的内存,否则写入CPU内存中。默认为False。
    data_format (str,可选) - 指定输入的数据格式,输出的数据格式将与输入保持一致,可以是"NCHW"和"NHWC"。N是批大小,C是通道数,H是高度,W是宽度。默认值为"NCHW"。
    grad_clip (GradientClipBase,可选) - 梯度裁剪的策略,支持三种裁剪策略: :ref:`cn_api_fluid_clip_GradientClipByGlobalNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByNorm` 、 :ref:`cn_api_fluid_clip_GradientClipByValue` 。默认值为None,表示不使用梯度裁剪。
    num_filters (int) - 卷积核的个数,与输出的通道数相同。
    dim (int,可选) - 指定对输入Tensor进行运算的维度。默认值为0。
    is_sparse (bool,可选) - 是否使用稀疏更新的方式,更多信息请参见 :ref:`api_guide_sparse_update` 。默认值为True,表示使用稀疏更新的方式。
    place (fluid.CPUPlace()|fluid.CUDAPlace(N)|None) - 该参数表示Executor执行所在的设备,这里的N为GPU对应的ID。当该参数为None时,PaddlePaddle会根据其安装版本来设置默认设备。当PaddlePaddle是CPU版时,默认运行设备将会设置为 `fluid.CPUPlace()` ;当PaddlePaddle是GPU版本时,默认执行设备将会设置为 `fluid.CUDAPlace(0)` 。默认值为None。
    num_filters (int) - 卷积核个数,同时也是输出的通道数。
"""