Unverified commit 86ca31ab, authored by Cindy Cai, committed by GitHub

English API Docs Optimization Part 1 (#24536)

* test=develop, test=document_fix

* test=develop, test=document_fix
Co-authored-by: swtkiwi <1208425345@qq.com>
Parent 2d0f849e
@@ -1196,6 +1196,8 @@ def append_backward(loss,
callbacks=None,
checkpoints=None):
"""
:api_attr: Static Graph
This function appends backward part to main_program.
A complete neural network training is made up of forward and backward
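A minimal usage sketch, assuming the fluid 1.x static-graph API and the default main program:

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    pred = fluid.layers.fc(input=x, size=1)
    loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))
    # appends gradient ops to the default main program and returns
    # a list of (parameter, gradient) variable pairs
    param_grads = fluid.backward.append_backward(loss)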
@@ -1724,6 +1726,8 @@ def calc_gradient(targets, inputs, target_gradients=None, no_grad_set=None):
def gradients(targets, inputs, target_gradients=None, no_grad_set=None):
"""
:api_attr: Static Graph
Backpropagate the gradients of targets to inputs.
Args:
...
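A minimal sketch of backpropagating from a target to an input, assuming the fluid 1.x static-graph API:

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name='x', shape=[None, 2], dtype='float32')
    x.stop_gradient = False
    y = fluid.layers.relu(x)
    # gradient variables of y with respect to x
    x_grad = fluid.backward.gradients(targets=[y], inputs=[x])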
@@ -158,6 +158,10 @@ class GradientClipBase(object):
class GradientClipByValue(GradientClipBase):
"""
:alias_main: paddle.nn.GradientClipByValue
:alias: paddle.nn.GradientClipByValue,paddle.nn.clip.GradientClipByValue
:old_api: paddle.fluid.clip.GradientClipByValue
Limit the value of multi-dimensional Tensor :math:`X` to the range [min, max].
- Any values less than min are set to ``min``.
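A minimal sketch, assuming an optimizer constructor that accepts a ``grad_clip`` argument (available in recent 1.x releases):

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name='x', shape=[None, 4], dtype='float32')
    loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))
    # clip every gradient element into the range [-1.0, 1.0]
    clip = fluid.clip.GradientClipByValue(min=-1.0, max=1.0)
    sgd = fluid.optimizer.SGDOptimizer(learning_rate=0.1, grad_clip=clip)
    sgd.minimize(loss)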
@@ -296,6 +300,10 @@ class GradientClipByValue(GradientClipBase):
class GradientClipByNorm(GradientClipBase):
"""
:alias_main: paddle.nn.GradientClipByNorm
:alias: paddle.nn.GradientClipByNorm,paddle.nn.clip.GradientClipByNorm
:old_api: paddle.fluid.clip.GradientClipByNorm
Limit the l2 norm of multi-dimensional Tensor :math:`X` to ``clip_norm`` .
- If the l2 norm of :math:`X` is greater than ``clip_norm`` , :math:`X` will be compressed by a ratio.
@@ -447,6 +455,10 @@ class GradientClipByNorm(GradientClipBase):
class GradientClipByGlobalNorm(GradientClipBase):
"""
:alias_main: paddle.nn.GradientClipByGlobalNorm
:alias: paddle.nn.GradientClipByGlobalNorm,paddle.nn.clip.GradientClipByGlobalNorm
:old_api: paddle.fluid.clip.GradientClipByGlobalNorm
Given a list of Tensor :math:`t\_list` , calculate the global norm for the elements of all tensors in
:math:`t\_list` , and limit it to ``clip_norm`` .
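A minimal sketch under the same assumption (an optimizer accepting ``grad_clip``):

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name='x', shape=[None, 4], dtype='float32')
    loss = fluid.layers.mean(fluid.layers.fc(input=x, size=1))
    # rescale all gradients together so their global l2 norm is at most 1.0
    clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0)
    sgd = fluid.optimizer.SGDOptimizer(learning_rate=0.1, grad_clip=clip)
    sgd.minimize(loss)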
@@ -691,6 +703,8 @@ class GradientClipByGlobalNorm(GradientClipBase):
@framework.dygraph_not_support
def set_gradient_clip(clip, param_list=None, program=None):
"""
:api_attr: Static Graph
Warning:
This API must be used after building network, and before ``minimize`` ,
...
@@ -86,6 +86,8 @@ def _has_optimizer_in_control_flow(program):
class CompiledProgram(object):
"""
:api_attr: Static Graph
The CompiledProgram is used to transform a program or graph for
various optimizations according to the configuration of build_strategy,
for example, the operators' fusion in the computation graph, memory
...
@@ -24,6 +24,11 @@ __all__ = ['data']
def data(name, shape, dtype='float32', lod_level=0):
"""
:api_attr: Static Graph
:alias_main: paddle.nn.data
:alias: paddle.nn.data,paddle.nn.input.data
:old_api: paddle.fluid.data
**Data Layer**
This function creates a variable on the global block. The global variable
...
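A minimal sketch of declaring an input variable, with the batch dimension left undetermined:

.. code-block:: python

    import paddle.fluid as fluid

    # float32 input whose first (batch) dimension is decided at feed time
    image = fluid.data(name='image', shape=[None, 3, 32, 32], dtype='float32')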
@@ -20,6 +20,8 @@ __all__ = ['DataFeedDesc']
class DataFeedDesc(object):
"""
:api_attr: Static Graph
Datafeed descriptor, describing input training data format. This class is
currently only used for AsyncExecutor (See comments for class AsyncExecutor
for a brief introduction)
...
@@ -211,6 +211,8 @@ class BatchedTensorProvider(object):
class DataFeeder(object):
"""
:api_attr: Static Graph
DataFeeder converts the data that returned by a reader into a data
structure that can feed into Executor. The reader is usually a
python generator that returns a list of mini-batch data entries.
...
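A minimal end-to-end sketch of feeding one mini-batch through a DataFeeder, assuming the fluid 1.x static-graph API:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    place = fluid.CPUPlace()
    x = fluid.data(name='x', shape=[None, 4], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='int64')
    prob = fluid.layers.softmax(fluid.layers.fc(input=x, size=2))
    loss = fluid.layers.mean(fluid.layers.cross_entropy(input=prob, label=y))

    feeder = fluid.DataFeeder(feed_list=[x, y], place=place)
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())

    # one mini-batch of two samples; feeder.feed turns it into the
    # name -> LoDTensor dict expected by Executor.run
    batch = [(np.random.rand(4).astype('float32'), np.array([1], dtype='int64'))
             for _ in range(2)]
    exe.run(fluid.default_main_program(), feed=feeder.feed(batch), fetch_list=[loss])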
@@ -728,6 +728,8 @@ class InMemoryDataset(DatasetBase):
def release_memory(self):
"""
:api_attr: Static Graph
Release InMemoryDataset memory data, when data will not be used again.
Examples:
...
@@ -111,6 +111,10 @@ def enabled():
def enable_dygraph(place=None):
"""
:alias_main: paddle.enable_dygraph
:alias: paddle.enable_dygraph,paddle.enable_imperative.enable_dygraph
:old_api: paddle.fluid.dygraph.base.enable_dygraph
This function enables dynamic graph mode.
Parameters:
@@ -141,6 +145,10 @@ def enable_dygraph(place=None):
def disable_dygraph():
"""
:alias_main: paddle.disable_dygraph
:alias: paddle.disable_dygraph,paddle.disable_imperative.disable_dygraph
:old_api: paddle.fluid.dygraph.base.disable_dygraph
This function disables dynamic graph mode.
return:
@@ -178,6 +186,8 @@ def _switch_tracer_mode_guard_(is_train=True):
def no_grad(func=None):
"""
:api_attr: imperative
Create a context which disables dygraph gradient calculation.
In this mode, the result of every computation will have `stop_gradient=True`.
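A minimal dygraph sketch, assuming the fluid 1.x imperative API:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.ones([2, 2], dtype='float32'))
        with fluid.dygraph.no_grad():
            y = x * 2
        print(y.stop_gradient)  # True: nothing computed inside is tracked for gradients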
@@ -236,6 +246,8 @@ def no_grad(func=None):
@signature_safe_contextmanager
def guard(place=None):
"""
:api_attr: imperative
This context will create a dygraph context for dygraph to run, using python ``with`` statement.
Parameters:
@@ -520,6 +532,8 @@ def grad(outputs,
@framework.dygraph_only
def to_variable(value, name=None, zero_copy=None):
"""
:api_attr: imperative
The API will create a ``Variable`` or ``ComplexVariable`` object from
numpy\.ndarray, Variable or ComplexVariable object.
...
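A minimal sketch, assuming dygraph mode has been entered with ``fluid.dygraph.guard()``:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        data = np.random.rand(3, 4).astype('float32')
        var = fluid.dygraph.to_variable(data)  # dygraph Variable wrapping the array
        print(var.shape)                       # [3, 4]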
@@ -32,6 +32,8 @@ __all__ = [
@dygraph_only
def save_dygraph(state_dict, model_path):
'''
:api_attr: imperative
Save Layer's state_dict to disk. This will generate a file with suffix ".pdparams"
The state_dict is get from Layers.state_dict function
@@ -95,6 +97,8 @@ def save_dygraph(state_dict, model_path):
@dygraph_only
def load_dygraph(model_path, keep_name_table=False):
'''
:api_attr: imperative
Load parameter state_dict from disk.
Args:
...
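A minimal save/load round trip in dygraph mode; the layer type and the ``./linear_demo`` path are illustrative assumptions:

.. code-block:: python

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.Linear(10, 3)
        # writes ./linear_demo.pdparams
        fluid.dygraph.save_dygraph(linear.state_dict(), "./linear_demo")
        # returns (parameter dict, optimizer dict); the latter is None here
        param_dict, _ = fluid.dygraph.load_dygraph("./linear_demo")
        linear.set_dict(param_dict)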
@@ -203,6 +203,8 @@ def _trace(layer,
class TracedLayer(object):
"""
:api_attr: imperative
TracedLayer is used to convert a forward dygraph model to a static
graph model. This is mainly used to save the dygraph model for online
inference using C++. Besides, users can also do inference in Python
...
@@ -58,7 +58,12 @@ class HookRemoveHelper(object):
class Layer(core.Layer):
"""
:alias_main: paddle.nn.Layer
:alias: paddle.nn.Layer
:old_api: paddle.fluid.dygraph.layers.Layer
Dynamic graph Layer based on OOD, includes the parameters of the layer, the structure of the forward graph and so on.
Parameters:
name_scope (str, optional): prefix name used by the layer to name parameters.
...
@@ -69,6 +69,8 @@ class LearningRateDecay(object):
class PiecewiseDecay(LearningRateDecay):
"""
:api_attr: imperative
Piecewise decay scheduler.
The algorithm can be described as the code below.
@@ -128,6 +130,8 @@ class PiecewiseDecay(LearningRateDecay):
class NaturalExpDecay(LearningRateDecay):
"""
:api_attr: imperative
Applies natural exponential decay to the initial learning rate.
The algorithm can be described as following.
@@ -207,6 +211,8 @@ class NaturalExpDecay(LearningRateDecay):
class ExponentialDecay(LearningRateDecay):
"""
:api_attr: imperative
Applies exponential decay to the learning rate.
The algorithm can be described as following.
@@ -287,6 +293,8 @@ class ExponentialDecay(LearningRateDecay):
class InverseTimeDecay(LearningRateDecay):
"""
:api_attr: imperative
Applies inverse time decay to the initial learning rate.
The algorithm can be described as following.
@@ -363,6 +371,8 @@ class InverseTimeDecay(LearningRateDecay):
class PolynomialDecay(LearningRateDecay):
"""
:api_attr: imperative
Applies polynomial decay to the initial learning rate.
The algorithm can be described as following.
@@ -455,6 +465,8 @@ class PolynomialDecay(LearningRateDecay):
class CosineDecay(LearningRateDecay):
"""
:api_attr: imperative
Applies cosine decay to the learning rate.
The algorithm can be described as following.
@@ -511,6 +523,8 @@ class CosineDecay(LearningRateDecay):
class NoamDecay(LearningRateDecay):
"""
:api_attr: imperative
Applies Noam decay to the initial learning rate.
The algorithm can be described as following.
...
@@ -696,6 +696,10 @@ class Conv3DTranspose(layers.Layer):
class Pool2D(layers.Layer):
"""
:alias_main: paddle.nn.Pool2D
:alias: paddle.nn.Pool2D,paddle.nn.layer.Pool2D,paddle.nn.layer.common.Pool2D
:old_api: paddle.fluid.dygraph.Pool2D
This interface is used to construct a callable object of the ``Pool2D`` class.
For more details, refer to code examples.
The pooling2d operation calculates the output based on the input, pool_type and pool_size, pool_stride,
@@ -867,6 +871,10 @@ class Pool2D(layers.Layer):
class Linear(layers.Layer):
"""
:alias_main: paddle.nn.Linear
:alias: paddle.nn.Linear,paddle.nn.layer.Linear,paddle.nn.layer.common.Linear
:old_api: paddle.fluid.dygraph.Linear
Fully-connected linear transformation layer:
.. math::
@@ -1100,6 +1108,10 @@ class InstanceNorm(layers.Layer):
class BatchNorm(layers.Layer):
"""
:alias_main: paddle.nn.BatchNorm
:alias: paddle.nn.BatchNorm,paddle.nn.layer.BatchNorm,paddle.nn.layer.norm.BatchNorm
:old_api: paddle.fluid.dygraph.BatchNorm
This interface is used to construct a callable object of the ``BatchNorm`` class.
For more details, refer to code examples.
It implements the function of the Batch Normalization Layer and can be used
@@ -1443,6 +1455,10 @@ class Dropout(layers.Layer):
class Embedding(layers.Layer):
"""
:alias_main: paddle.nn.Embedding
:alias: paddle.nn.Embedding,paddle.nn.layer.Embedding,paddle.nn.layer.common.Embedding
:old_api: paddle.fluid.dygraph.Embedding
**Embedding Layer**
This interface is used to construct a callable object of the ``Embedding`` class.
@@ -1599,6 +1615,10 @@ class Embedding(layers.Layer):
class LayerNorm(layers.Layer):
"""
:alias_main: paddle.nn.LayerNorm
:alias: paddle.nn.LayerNorm,paddle.nn.layer.LayerNorm,paddle.nn.layer.norm.LayerNorm
:old_api: paddle.fluid.dygraph.LayerNorm
This interface is used to construct a callable object of the ``LayerNorm`` class.
For more details, refer to code examples.
It implements the function of the Layer Normalization Layer and can be applied to mini-batch input data.
@@ -2289,6 +2309,10 @@ class PRelu(layers.Layer):
class BilinearTensorProduct(layers.Layer):
"""
:alias_main: paddle.nn.BilinearTensorProduct
:alias: paddle.nn.BilinearTensorProduct,paddle.nn.layer.BilinearTensorProduct,paddle.nn.layer.common.BilinearTensorProduct
:old_api: paddle.fluid.dygraph.BilinearTensorProduct
**Add Bilinear Tensor Product Layer**
This layer performs bilinear tensor product on two inputs.
@@ -2809,6 +2833,10 @@ class RowConv(layers.Layer):
class GroupNorm(layers.Layer):
"""
:alias_main: paddle.nn.GroupNorm
:alias: paddle.nn.GroupNorm,paddle.nn.layer.GroupNorm,paddle.nn.layer.norm.GroupNorm
:old_api: paddle.fluid.dygraph.GroupNorm
This interface is used to construct a callable object of the ``GroupNorm`` class.
For more details, refer to code examples.
It implements the function of the Group Normalization Layer.
@@ -2909,6 +2937,10 @@ class GroupNorm(layers.Layer):
class SpectralNorm(layers.Layer):
"""
:alias_main: paddle.nn.SpectralNorm
:alias: paddle.nn.SpectralNorm,paddle.nn.layer.SpectralNorm,paddle.nn.layer.norm.SpectralNorm
:old_api: paddle.fluid.dygraph.SpectralNorm
This interface is used to construct a callable object of the ``SpectralNorm`` class.
For more details, refer to code examples. It implements the function of the Spectral Normalization Layer.
This layer calculates the spectral normalization value of weight parameters of
...
@@ -28,6 +28,9 @@ ParallelStrategy = core.ParallelStrategy
def prepare_context(strategy=None):
'''
:api_attr: imperative
'''
if strategy is None:
strategy = ParallelStrategy()
strategy.nranks = Env().nranks
...
@@ -23,6 +23,8 @@ from paddle.fluid import framework
class Tracer(core.Tracer):
"""
:api_attr: imperative
Tracer is used to execute and record the operators executed, to construct the
computation graph in dygraph model. Tracer has two mode, :code:`train_mode`
and :code:`eval_mode`. In :code:`train_mode`, Tracer would add backward network
...
@@ -40,6 +40,8 @@ InferAnalysisConfig = core.AnalysisConfig
def global_scope():
"""
:api_attr: Static Graph
Get the global/default scope instance. There are a lot of APIs use
:code:`global_scope` as its default value, e.g., :code:`Executor.run`
@@ -68,6 +70,8 @@ def _switch_scope(scope):
@signature_safe_contextmanager
def scope_guard(scope):
"""
:api_attr: Static Graph
This function switches scope through python `with` statement.
Scope records the mapping between variable names and variables ( :ref:`api_guide_Variable` ),
similar to brackets in programming languages.
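A minimal sketch of running variable lookups against a user-created scope:

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    new_scope = fluid.Scope()
    with fluid.scope_guard(new_scope):
        # inside the block, global_scope() refers to new_scope
        fluid.global_scope().var("data").get_tensor().set(
            np.ones((2, 2), dtype='float32'), fluid.CPUPlace())
    data = np.array(new_scope.find_var("data").get_tensor())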
@@ -456,6 +460,8 @@ handler = FetchHandlerExample(var_dict=var_dict)
class Executor(object):
"""
:api_attr: Static Graph
An Executor in Python, supports single/multiple-GPU running,
and single/multiple-CPU running.
...
@@ -179,6 +179,10 @@ def require_version(min_version, max_version=None):
def in_dygraph_mode():
"""
:alias_main: paddle.in_dygraph_mode
:alias: paddle.in_dygraph_mode
:old_api: paddle.fluid.framework.in_dygraph_mode
This function checks whether the program runs in dynamic graph mode or not.
You can enter dynamic graph mode with :ref:`api_fluid_dygraph_guard` api,
or enable and disable dynamic graph mode with :ref:`api_fluid_dygraph_enable`
@@ -436,6 +440,8 @@ _name_scope = NameScope()
@signature_safe_contextmanager
def name_scope(prefix=None):
"""
:api_attr: Static Graph
Generate hierarchical name prefix for the operators.
Note:
@@ -5277,6 +5283,8 @@ def switch_startup_program(program):
@signature_safe_contextmanager
def program_guard(main_program, startup_program=None):
"""
:api_attr: Static Graph
Change the global main program and startup program with `"with"` statement.
Layer functions in the Python `"with"` block will append operators and
variables to the new main programs.
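A minimal sketch, assuming the fluid 1.x static-graph API:

.. code-block:: python

    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
        # variables and operators created here go into main_prog / startup_prog
        x = fluid.data(name='x', shape=[None, 8], dtype='float32')
        hidden = fluid.layers.fc(input=x, size=4)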
@@ -5376,6 +5384,8 @@ def _dygraph_place_guard(place):
def load_op_library(lib_filename):
"""
:api_attr: Static Graph
Load a dynamic library, including custom operators and kernels.
When library is loaded, ops and kernels registered in the library
will be available in PaddlePaddle main process.
...
@@ -23,6 +23,9 @@ __all__ = ['one_hot', 'embedding']
def one_hot(input, depth, allow_out_of_range=False):
"""
:alias_main: paddle.nn.functional.one_hot
:alias: paddle.nn.functional.one_hot,paddle.nn.functional.common.one_hot
:old_api: paddle.fluid.one_hot
The operator converts each id in the input to an one-hot vector with a
depth length. The value in the vector dimension corresponding to the id
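A minimal sketch of the ``paddle.fluid.one_hot`` form of this operator, assuming a 1-D integer id input:

.. code-block:: python

    import paddle.fluid as fluid

    label = fluid.data(name='label', shape=[None], dtype='int64')
    # each id in [0, 10) becomes a 10-dimensional one-hot row
    one_hot_label = fluid.one_hot(input=label, depth=10)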
@@ -132,6 +135,7 @@ def embedding(input,
param_attr=None,
dtype='float32'):
"""
:api_attr: Static Graph
The operator is used to lookup embeddings vector of ids provided by :attr:`input` .
It automatically constructs a 2D embedding matrix based on the
...
@@ -124,6 +124,8 @@ def is_belong_to_optimizer(var):
@dygraph_not_support
def get_program_parameter(program):
"""
:api_attr: Static Graph
Get all the parameters from Program.
Args:
@@ -147,6 +149,8 @@ def get_program_parameter(program):
@dygraph_not_support
def get_program_persistable_vars(program):
"""
:api_attr: Static Graph
Get all the persistable vars from Program.
Args:
@@ -223,6 +227,8 @@ def save_vars(executor,
predicate=None,
filename=None):
"""
:api_attr: Static Graph
This API saves specific variables in the `Program` to files.
There are two ways to specify the variables to be saved: set variables in
@@ -365,6 +371,8 @@ def save_vars(executor,
@dygraph_not_support
def save_params(executor, dirname, main_program=None, filename=None):
"""
:api_attr: Static Graph
This operator saves all parameters from the :code:`main_program` to
the folder :code:`dirname` or file :code:`filename`. You can refer to
:ref:`api_guide_model_save_reader_en` for more details.
@@ -588,6 +596,8 @@ def _save_distributed_persistables(executor, dirname, main_program):
@dygraph_not_support
def save_persistables(executor, dirname, main_program=None, filename=None):
"""
:api_attr: Static Graph
This operator saves all persistable variables from :code:`main_program` to
the folder :code:`dirname` or file :code:`filename`. You can refer to
:ref:`api_guide_model_save_reader_en` for more details. And then
@@ -661,6 +671,8 @@ def load_vars(executor,
predicate=None,
filename=None):
"""
:api_attr: Static Graph
This API loads variables from files by executor.
There are two ways to specify the variables to be loaded: the first way, set
@@ -829,6 +841,8 @@ def load_vars(executor,
@dygraph_not_support
def load_params(executor, dirname, main_program=None, filename=None):
"""
:api_attr: Static Graph
This API filters out all parameters from the give ``main_program``
and then tries to load these parameters from the directory ``dirname`` or
the file ``filename``.
@@ -887,6 +901,8 @@ def load_params(executor, dirname, main_program=None, filename=None):
@dygraph_not_support
def load_persistables(executor, dirname, main_program=None, filename=None):
"""
:api_attr: Static Graph
This API filters out all variables with ``persistable==True`` from the
given ``main_program`` and then tries to load these variables from the
directory ``dirname`` or the file ``filename``.
@@ -1084,6 +1100,8 @@ def save_inference_model(dirname,
export_for_deployment=True,
program_only=False):
"""
:api_attr: Static Graph
Prune the given `main_program` to build a new program especially for inference,
and then save it and all related parameters to given `dirname` .
If you just want to save parameters of your trained model, please use the
@@ -1288,6 +1306,8 @@ def load_inference_model(dirname,
params_filename=None,
pserver_endpoints=None):
"""
:api_attr: Static Graph
Load the inference model from a given directory. By this API, you can get the model
structure(Inference Program) and model parameters. If you just want to load
parameters of the pre-trained model, please use the :ref:`api_fluid_io_load_params` API.
@@ -1577,6 +1597,11 @@ def _load_persistable_nodes(executor, dirname, graph):
@dygraph_not_support
def save(program, model_path):
"""
:api_attr: Static Graph
:alias_main: paddle.save
:alias: paddle.save,paddle.tensor.save,paddle.tensor.io.save
:old_api: paddle.fluid.save
This function save parameters, optimizer information and network description to model_path.
The parameters contains all the trainable Variable, will save to a file with suffix ".pdparams".
@@ -1636,6 +1661,11 @@ def save(program, model_path):
@dygraph_not_support
def load(program, model_path, executor=None, var_list=None):
"""
:api_attr: Static Graph
:alias_main: paddle.load
:alias: paddle.load,paddle.tensor.load,paddle.tensor.io.load
:old_api: paddle.fluid.io.load
This function get parameters and optimizer information from program, and then get corresponding value from file.
An exception will throw if shape or dtype of the parameters is not match.
@@ -1803,6 +1833,8 @@ def load(program, model_path, executor=None, var_list=None):
@dygraph_not_support
def load_program_state(model_path, var_list=None):
"""
:api_attr: Static Graph
Load program state from local file
Args:
@@ -1934,6 +1966,8 @@ def load_program_state(model_path, var_list=None):
@dygraph_not_support
def set_program_state(program, state_dict):
"""
:api_attr: Static Graph
Set program parameter from state_dict
An exception will throw if shape or dtype of the parameters is not match.
...
@@ -222,6 +222,8 @@ def Print(input,
print_tensor_lod=True,
print_phase='both'):
'''
:api_attr: Static Graph
**Print operator**
This creates a print op that will print when a tensor is accessed.
@@ -446,6 +448,8 @@ class StaticRNNMemoryLink(object):
class StaticRNN(object):
"""
:api_attr: Static Graph
StaticRNN class.
The StaticRNN can process a batch of sequence data. The first dimension of inputs
@@ -923,6 +927,8 @@ class WhileGuard(BlockGuard):
class While(object):
"""
:api_attr: Static Graph
while loop control flow. Repeat while body until cond is False.
Note:
@@ -1061,6 +1067,11 @@ def assign_skip_lod_tensor_array(inputs, outputs):
def while_loop(cond, body, loop_vars, is_test=False, name=None):
"""
:api_attr: Static Graph
:alias_main: paddle.nn.while_loop
:alias: paddle.nn.while_loop,paddle.nn.control_flow.while_loop
:old_api: paddle.fluid.layers.while_loop
while_loop is one of the control flows. Repeats while_loop `body` until `cond` returns False.
Notice:
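A minimal counting-loop sketch, assuming the fluid 1.x static-graph API:

.. code-block:: python

    import paddle.fluid.layers as layers

    def cond(i, ten):
        # the loop continues while this returns a true boolean tensor
        return layers.less_than(x=i, y=ten)

    def body(i, ten):
        i = layers.increment(x=i, value=1, in_place=True)
        return [i, ten]

    i = layers.fill_constant(shape=[1], dtype='int64', value=0)
    ten = layers.fill_constant(shape=[1], dtype='int64', value=10)
    # returns the final values of the loop variables
    i, ten = layers.while_loop(cond, body, [i, ten])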
@@ -1529,6 +1540,10 @@ def create_array(dtype):
@templatedoc()
def less_than(x, y, force_cpu=None, cond=None):
"""
:alias_main: paddle.less_than
:alias: paddle.less_than,paddle.tensor.less_than,paddle.tensor.logic.less_than
:old_api: paddle.fluid.layers.less_than
${comment}
Args:
@@ -1594,6 +1609,10 @@ def less_than(x, y, force_cpu=None, cond=None):
@templatedoc()
def less_equal(x, y, cond=None):
"""
:alias_main: paddle.less_equal
:alias: paddle.less_equal,paddle.tensor.less_equal,paddle.tensor.logic.less_equal
:old_api: paddle.fluid.layers.less_equal
This OP returns the truth value of :math:`x <= y` elementwise, which is equivalent function to the overloaded operator `<=`.
Args:
@@ -1642,6 +1661,10 @@ def less_equal(x, y, cond=None):
@templatedoc()
def greater_than(x, y, cond=None):
"""
:alias_main: paddle.greater_than
:alias: paddle.greater_than,paddle.tensor.greater_than,paddle.tensor.logic.greater_than
:old_api: paddle.fluid.layers.greater_than
This OP returns the truth value of :math:`x > y` elementwise, which is equivalent function to the overloaded operator `>`.
Args:
@@ -1689,6 +1712,10 @@ def greater_than(x, y, cond=None):
@templatedoc()
def greater_equal(x, y, cond=None):
"""
:alias_main: paddle.greater_equal
:alias: paddle.greater_equal,paddle.tensor.greater_equal,paddle.tensor.logic.greater_equal
:old_api: paddle.fluid.layers.greater_equal
This OP returns the truth value of :math:`x >= y` elementwise, which is equivalent function to the overloaded operator `>=`.
Args:
@@ -1782,6 +1809,10 @@ def equal(x, y, cond=None):
def not_equal(x, y, cond=None):
"""
:alias_main: paddle.not_equal
:alias: paddle.not_equal,paddle.tensor.not_equal,paddle.tensor.logic.not_equal
:old_api: paddle.fluid.layers.not_equal
This OP returns the truth value of :math:`x != y` elementwise, which is equivalent function to the overloaded operator `!=`.
Args:
@@ -2225,6 +2256,11 @@ def copy_var_to_parent_block(var, layer_helper):
def cond(pred, true_fn=None, false_fn=None, name=None):
"""
:api_attr: Static Graph
:alias_main: paddle.nn.cond
:alias: paddle.nn.cond,paddle.nn.control_flow.cond
:old_api: paddle.fluid.layers.cond
This API returns ``true_fn()`` if the predicate ``pred`` is true else
``false_fn()`` . Users could also set ``true_fn`` or ``false_fn`` to
``None`` if do nothing and this API will treat the callable simply returns
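A minimal sketch of selecting between two branches, assuming the fluid 1.x static-graph API:

.. code-block:: python

    import paddle.fluid.layers as layers

    a = layers.fill_constant(shape=[1], dtype='float32', value=0.23)
    b = layers.fill_constant(shape=[1], dtype='float32', value=0.10)
    pred = layers.less_than(b, a)      # True here, so the first branch is taken
    out = layers.cond(pred, lambda: a + a, lambda: a - b)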
@@ -2410,6 +2446,11 @@ def _error_message(what, arg_name, op_name, right_value, error_value):
def case(pred_fn_pairs, default=None, name=None):
'''
:api_attr: Static Graph
:alias_main: paddle.nn.case
:alias: paddle.nn.case,paddle.nn.control_flow.case
:old_api: paddle.fluid.layers.case
This operator works like an if-elif-elif-else chain.
Args:
@@ -2520,6 +2561,7 @@ def case(pred_fn_pairs, default=None, name=None):
class Switch(object):
"""
:api_attr: Static Graph
This class is used to implement Switch branch control function.
Switch branch contains several case branches and one default branch.
@@ -2677,6 +2719,8 @@ class IfElseBlockGuard(object):
class IfElse(object):
"""
:api_attr: Static Graph
This class is used to implement IfElse branch control function. IfElse contains two blocks, true_block and false_block. IfElse will put data satisfying True or False conditions into different blocks to run.
Cond is a 2-D Tensor with shape [N, 1] and data type bool, representing the execution conditions of the corresponding part of the input data.
@@ -2853,6 +2897,8 @@ class IfElse(object):
class DynamicRNN(object):
"""
:api_attr: Static Graph
**Note: the input of this class should be LoDTensor which holds the
information of variable-length sequences. If the input is fixed-length Tensor,
please use StaticRNN (fluid.layers.** :ref:`api_fluid_layers_StaticRNN` **) for
@@ -3518,6 +3564,8 @@ class DynamicRNN(object):
def switch_case(branch_index, branch_fns, default=None, name=None):
'''
:api_attr: Static Graph
This operator is like a C++ switch/case statement.
Args:
@@ -3701,6 +3749,10 @@ def reorder_lod_tensor_by_rank(x, rank_table):
def is_empty(x, cond=None):
"""
:alias_main: paddle.is_empty
:alias: paddle.is_empty,paddle.tensor.is_empty,paddle.tensor.logic.is_empty
:old_api: paddle.fluid.layers.is_empty
Test whether a Variable is empty.
Args:
...
@@ -57,6 +57,11 @@ def center_loss(input,
param_attr,
update_center=True):
"""
:api_attr: Static Graph
:alias_main: paddle.nn.functional.center_loss
:alias: paddle.nn.functional.center_loss,paddle.nn.functional.loss.center_loss
:old_api: paddle.fluid.layers.center_loss
**Center loss Cost layer**
This OP accepts input (deep features,the output of the last hidden layer)
@@ -147,6 +152,10 @@ def center_loss(input,
def bpr_loss(input, label, name=None):
"""
:alias_main: paddle.nn.functional.bpr_loss
:alias: paddle.nn.functional.bpr_loss,paddle.nn.functional.loss.bpr_loss
:old_api: paddle.fluid.layers.bpr_loss
**Bayesian Personalized Ranking Loss Operator**
This operator belongs to pairwise ranking loss. Label is the desired item.
@@ -195,6 +204,10 @@ def bpr_loss(input, label, name=None):
def cross_entropy(input, label, soft_label=False, ignore_index=kIgnoreIndex):
"""
:alias_main: paddle.nn.functional.cross_entropy
:alias: paddle.nn.functional.cross_entropy,paddle.nn.functional.loss.cross_entropy
:old_api: paddle.fluid.layers.cross_entropy
This operator computes the cross entropy between input and label. It
supports both hard-label and and soft-label cross entropy computation.
@@ -288,6 +301,10 @@ def cross_entropy2(input, label, ignore_index=kIgnoreIndex):
def square_error_cost(input, label):
"""
:alias_main: paddle.nn.functional.square_error_cost
:alias: paddle.nn.functional.square_error_cost,paddle.nn.functional.loss.square_error_cost
:old_api: paddle.fluid.layers.square_error_cost
This op accepts input predictions and target label and returns the
squared error cost.
@@ -663,6 +680,8 @@ def nce(input,
seed=0,
is_sparse=False):
"""
:api_attr: Static Graph
${comment}
Args:
@@ -874,6 +893,8 @@ def hsigmoid(input,
is_custom=False,
is_sparse=False):
"""
:api_attr: Static Graph
The hierarchical sigmoid organizes the classes into a complete binary tree to reduce the computational complexity
and speed up the model training, especially the training of language model.
Each leaf node of the complete binary tree represents a class(word) and each non-leaf node acts as a binary classifier.
@@ -1167,6 +1188,10 @@ def softmax_with_cross_entropy(logits,
return_softmax=False,
axis=-1):
"""
:alias_main: paddle.nn.functional.softmax_with_cross_entropy
:alias: paddle.nn.functional.softmax_with_cross_entropy,paddle.nn.functional.loss.softmax_with_cross_entropy
:old_api: paddle.fluid.layers.softmax_with_cross_entropy
This operator implements the cross entropy loss function with softmax. This function
combines the calculation of the softmax operation and the cross entropy loss function
to provide a more numerically stable gradient.
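A minimal sketch, assuming the fluid 1.x static-graph API and hard integer labels:

.. code-block:: python

    import paddle.fluid as fluid

    logits = fluid.data(name='logits', shape=[None, 10], dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')
    # fused softmax + cross entropy; more numerically stable than separate ops
    loss = fluid.layers.softmax_with_cross_entropy(logits=logits, label=label)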
@@ -1290,6 +1315,10 @@ def softmax_with_cross_entropy(logits,
def rank_loss(label, left, right, name=None):
"""
:alias_main: paddle.nn.functional.rank_loss
:alias: paddle.nn.functional.rank_loss,paddle.nn.functional.loss.rank_loss
:old_api: paddle.fluid.layers.rank_loss
This operator implements the sort loss layer in the RankNet model. RankNet is a pairwise ranking model
with a training sample consisting of a pair of documents (A and B), The label (P)
indicates whether A is ranked higher than B or not. Please refer to more details:
@@ -1407,6 +1436,10 @@ def sigmoid_cross_entropy_with_logits(x,
name=None,
normalize=False):
"""
:alias_main: paddle.nn.functional.sigmoid_cross_entropy_with_logits
:alias: paddle.nn.functional.sigmoid_cross_entropy_with_logits,paddle.nn.functional.loss.sigmoid_cross_entropy_with_logits
:old_api: paddle.fluid.layers.sigmoid_cross_entropy_with_logits
${comment}
Args:
@@ -1464,6 +1497,10 @@ def teacher_student_sigmoid_loss(input,
soft_max_up_bound=15.0,
soft_max_lower_bound=-15.0):
"""
:alias_main: paddle.nn.functional.teacher_student_sigmoid_loss
:alias: paddle.nn.functional.teacher_student_sigmoid_loss,paddle.nn.functional.loss.teacher_student_sigmoid_loss
:old_api: paddle.fluid.layers.teacher_student_sigmoid_loss
**Teacher Student Log Loss Layer**
This layer accepts input predictions and target label and returns the
@@ -1583,6 +1620,10 @@ def huber_loss(input, label, delta):
@templatedoc()
def kldiv_loss(x, target, reduction='mean', name=None):
"""
:alias_main: paddle.nn.functional.kldiv_loss
:alias: paddle.nn.functional.kldiv_loss,paddle.nn.functional.loss.kldiv_loss
:old_api: paddle.fluid.layers.kldiv_loss
${comment}
Args:
@@ -1643,6 +1684,10 @@ from .control_flow import equal
def npair_loss(anchor, positive, labels, l2_reg=0.002):
'''
:alias_main: paddle.nn.functional.npair_loss
:alias: paddle.nn.functional.npair_loss,paddle.nn.functional.loss.npair_loss
:old_api: paddle.fluid.layers.npair_loss
**Npair Loss Layer**
Read `Improved Deep Metric Learning with Multi class N pair Loss Objective\
@@ -1709,6 +1754,10 @@ def npair_loss(anchor, positive, labels, l2_reg=0.002):
def mse_loss(input, label):
"""
:alias_main: paddle.nn.functional.mse_loss
:alias: paddle.nn.functional.mse_loss,paddle.nn.functional.loss.mse_loss
:old_api: paddle.fluid.layers.mse_loss
This op accepts input predications and target label and returns the mean square error.
The loss can be described as:
...
@@ -213,6 +213,8 @@ def fc(input,
act=None,
name=None):
"""
:api_attr: Static Graph
**Fully Connected Layer**
This operator creates a fully connected layer in the network. It can take
@@ -370,6 +372,7 @@ def embedding(input,
param_attr=None,
dtype='float32'):
"""
:api_attr: Static Graph
**WARING:** This OP will be deprecated in a future release. This OP requires the
last dimension of Tensor shape must be equal to 1. It is recommended to use
@@ -696,6 +699,8 @@ def _pull_box_sparse(input, size, dtype='float32'):
@templatedoc()
def linear_chain_crf(input, label, param_attr=None, length=None):
"""
:api_attr: Static Graph
Linear Chain CRF.
${comment}
@@ -819,6 +824,7 @@ def linear_chain_crf(input, label, param_attr=None, length=None):
@templatedoc()
def crf_decoding(input, param_attr, label=None, length=None):
"""
:api_attr: Static Graph
${comment}
Args:
@@ -924,6 +930,10 @@ def dropout(x,
name=None,
dropout_implementation="downgrade_in_infer"):
"""
:alias_main: paddle.nn.functional.dropout
:alias: paddle.nn.functional.dropout,paddle.nn.functional.common.dropout
:old_api: paddle.fluid.layers.dropout
Computes dropout.
Drop or keep each element of `x` independently. Dropout is a regularization
@@ -1169,6 +1179,10 @@ def chunk_eval(input,
def softmax(input, use_cudnn=False, name=None, axis=-1):
"""
:alias_main: paddle.nn.functional.softmax
:alias: paddle.nn.functional.softmax,paddle.nn.functional.activation.softmax
:old_api: paddle.fluid.layers.softmax
This operator implements the softmax layer. The calculation process is as follows:
1. The dimension :attr:`axis` of the ``input`` will be permuted to the last.
@@ -1309,6 +1323,8 @@ def conv2d(input,
name=None,
data_format="NCHW"):
"""
:api_attr: Static Graph
The convolution2D layer calculates the output based on the input, filter
and strides, paddings, dilations, groups parameters. Input and
Output are in NCHW or NHWC format, where N is batch size, C is the number of
@@ -1583,6 +1599,8 @@ def conv3d(input,
name=None,
data_format="NCDHW"):
"""
:api_attr: Static Graph
The convolution3D layer calculates the output based on the input, filter
and strides, paddings, dilations, groups parameters. Input(Input) and
Output(Output) are in NCDHW or NDHWC format. Where N is batch size C is the number of
@@ -1846,6 +1864,10 @@ def pool2d(input,
exclusive=True,
data_format="NCHW"):
"""
:alias_main: paddle.nn.functional.pool2d
:alias: paddle.nn.functional.pool2d,paddle.nn.functional.pooling.pool2d
:old_api: paddle.fluid.layers.pool2d
${comment}
Args:
@@ -2059,6 +2081,10 @@ def pool3d(input,
exclusive=True,
data_format="NCDHW"):
"""
:alias_main: paddle.nn.functional.pool3d
:alias: paddle.nn.functional.pool3d,paddle.nn.functional.pooling.pool3d
:old_api: paddle.fluid.layers.pool3d
${comment}
Args:
@@ -2277,6 +2303,10 @@ def adaptive_pool2d(input,
require_index=False,
name=None):
"""
:alias_main: paddle.nn.functional.adaptive_pool2d
:alias: paddle.nn.functional.adaptive_pool2d,paddle.nn.functional.pooling.adaptive_pool2d
:old_api: paddle.fluid.layers.adaptive_pool2d
This operation calculates the output based on the input, pool_size,
pool_type parameters. Input(X) and output(Out) are in NCHW format, where N is batch
size, C is the number of channels, H is the height of the feature, and W is
@@ -2420,6 +2450,10 @@ def adaptive_pool3d(input,
require_index=False,
name=None):
"""
:alias_main: paddle.nn.functional.adaptive_pool3d
:alias: paddle.nn.functional.adaptive_pool3d,paddle.nn.functional.pooling.adaptive_pool3d
:old_api: paddle.fluid.layers.adaptive_pool3d
This operation calculates the output based on the input, pool_size, This operation calculates the output based on the input, pool_size,
pool_type parameters. Input(X) and output(Out) are in NCDHW format, where N is batch pool_type parameters. Input(X) and output(Out) are in NCDHW format, where N is batch
size, C is the number of channels, D is the depth of the feature, H is the height of size, C is the number of channels, D is the depth of the feature, H is the height of
...@@ -2589,6 +2623,8 @@ def batch_norm(input, ...@@ -2589,6 +2623,8 @@ def batch_norm(input,
do_model_average_for_mean_and_var=True, do_model_average_for_mean_and_var=True,
use_global_stats=False): use_global_stats=False):
""" """
:api_attr: Static Graph
**Batch Normalization Layer** **Batch Normalization Layer**
Can be used as a normalizer function for convolution or fully_connected operations. Can be used as a normalizer function for convolution or fully_connected operations.
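A minimal static-graph sketch of using ``batch_norm`` as a normalizer after a fully connected layer (sizes are illustrative):

.. code-block:: python

    import paddle.fluid as fluid

    x = fluid.data(name='x', shape=[None, 64], dtype='float32')
    hidden = fluid.layers.fc(input=x, size=128)
    # normalizes over the batch dimension, then applies the activation
    hidden = fluid.layers.batch_norm(input=hidden, act='relu')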
...@@ -3041,6 +3077,8 @@ def instance_norm(input,
bias_attr=None,
name=None):
"""
:api_attr: Static Graph
**Instance Normalization Layer**
Can be used as a normalizer function for convolution or fully_connected operations.
...@@ -3166,6 +3204,8 @@ def data_norm(input,
summary_decay_rate=0.9999999,
enable_scale_and_shift=False):
"""
:api_attr: Static Graph
**Data Normalization Layer**
This op can be used as a normalizer function for conv2d and fully_connected operations.
...@@ -3345,6 +3385,8 @@ def layer_norm(input,
act=None,
name=None):
"""
:api_attr: Static Graph
**Layer Normalization Layer**
The API implements the function of the Layer Normalization Layer and can be applied to mini-batch input data.
...@@ -3471,6 +3513,8 @@ def group_norm(input,
data_layout='NCHW',
name=None):
"""
:api_attr: Static Graph
**Group Normalization Layer**
Refer to `Group Normalization <https://arxiv.org/abs/1803.08494>`_ .
...@@ -3566,6 +3610,8 @@ def group_norm(input,
@templatedoc()
def spectral_norm(weight, dim=0, power_iters=1, eps=1e-12, name=None):
"""
:api_attr: Static Graph
**Spectral Normalization Layer**
This operation calculates the spectral normalization value of weight parameters of
...@@ -3682,6 +3728,8 @@ def conv2d_transpose(input,
name=None,
data_format='NCHW'):
"""
:api_attr: Static Graph
The convolution2D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCHW or NHWC format. Where N is batch size, C is the number of channels,
...@@ -3970,6 +4018,8 @@ def conv3d_transpose(input,
name=None,
data_format='NCDHW'):
"""
:api_attr: Static Graph
The convolution3D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCDHW or NDHWC format. Where N is batch size, C is the number of channels,
...@@ -4256,6 +4306,10 @@ def conv3d_transpose(input,
def reduce_sum(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_sum
:alias: paddle.reduce_sum,paddle.tensor.reduce_sum,paddle.tensor.math.reduce_sum
:old_api: paddle.fluid.layers.reduce_sum
Computes the sum of tensor elements over the given dimension.
Args:
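A small sketch of how ``dim`` and ``keep_dim`` interact for the reduce ops documented in these hunks (dygraph mode assumed for brevity):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.array([[1., 2., 3.], [4., 5., 6.]], dtype='float32'))
        print(fluid.layers.reduce_sum(x).numpy())               # 21, all elements
        print(fluid.layers.reduce_sum(x, dim=0).numpy())        # [5. 7. 9.]
        print(fluid.layers.reduce_sum(x, dim=1, keep_dim=True).numpy())
        # [[ 6.]
        #  [15.]]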
...@@ -4332,6 +4386,10 @@ def reduce_sum(input, dim=None, keep_dim=False, name=None):
def reduce_mean(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_mean
:alias: paddle.reduce_mean,paddle.tensor.reduce_mean,paddle.tensor.stat.reduce_mean
:old_api: paddle.fluid.layers.reduce_mean
Computes the mean of the input tensor's elements along the given dimension.
Args:
...@@ -4409,6 +4467,10 @@ def reduce_mean(input, dim=None, keep_dim=False, name=None):
def reduce_max(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_max
:alias: paddle.reduce_max,paddle.tensor.reduce_max,paddle.tensor.math.reduce_max
:old_api: paddle.fluid.layers.reduce_max
Computes the maximum of tensor elements over the given dimension.
Args:
...@@ -4471,6 +4533,10 @@ def reduce_max(input, dim=None, keep_dim=False, name=None):
def reduce_min(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_min
:alias: paddle.reduce_min,paddle.tensor.reduce_min,paddle.tensor.math.reduce_min
:old_api: paddle.fluid.layers.reduce_min
Computes the minimum of tensor elements over the given dimension.
Args:
...@@ -4533,6 +4599,10 @@ def reduce_min(input, dim=None, keep_dim=False, name=None):
def reduce_prod(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_prod
:alias: paddle.reduce_prod,paddle.tensor.reduce_prod,paddle.tensor.math.reduce_prod
:old_api: paddle.fluid.layers.reduce_prod
Computes the product of tensor elements over the given dimension.
Args:
...@@ -4596,6 +4666,10 @@ def reduce_prod(input, dim=None, keep_dim=False, name=None):
def reduce_all(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_all
:alias: paddle.reduce_all,paddle.tensor.reduce_all,paddle.tensor.logic.reduce_all
:old_api: paddle.fluid.layers.reduce_all
This OP computes the ``logical and`` of tensor elements over the given dimension, and output the result.
Args:
...@@ -4656,6 +4730,10 @@ def reduce_all(input, dim=None, keep_dim=False, name=None):
def reduce_any(input, dim=None, keep_dim=False, name=None):
"""
:alias_main: paddle.reduce_any
:alias: paddle.reduce_any,paddle.tensor.reduce_any,paddle.tensor.logic.reduce_any
:old_api: paddle.fluid.layers.reduce_any
This OP computes the ``logical or`` of tensor elements over the given dimension, and output the result.
Args:
...@@ -4864,6 +4942,10 @@ def split(input, num_or_sections, dim=-1, name=None):
def l2_normalize(x, axis, epsilon=1e-12, name=None):
"""
:alias_main: paddle.nn.functional.l2_normalize
:alias: paddle.nn.functional.l2_normalize,paddle.nn.functional.norm.l2_normalize
:old_api: paddle.fluid.layers.l2_normalize
This op normalizes `x` along dimension `axis` using an L2
norm. For a 1-D tensor (`dim` is fixed to 0), this layer computes
...@@ -5023,6 +5105,10 @@ def matmul(x, y, transpose_x=False, transpose_y=False, alpha=1.0, name=None):
def topk(input, k, name=None):
"""
:alias_main: paddle.topk
:alias: paddle.topk,paddle.tensor.topk,paddle.tensor.search.topk
:old_api: paddle.fluid.layers.topk
This OP is used to find values and indices of the k largest entries
for the last dimension.
...@@ -5284,6 +5370,10 @@ def ctc_greedy_decoder(input,
def transpose(x, perm, name=None):
"""
:alias_main: paddle.transpose
:alias: paddle.transpose,paddle.tensor.transpose,paddle.tensor.linalg.transpose,paddle.tensor.manipulation.transpose
:old_api: paddle.fluid.layers.transpose
Permute the data dimensions of `input` according to `perm`.
The `i`-th dimension of the returned tensor will correspond to the
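A quick sketch of ``transpose``: entry ``i`` of ``perm`` names the input dimension that becomes output dimension ``i`` (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.zeros((2, 3, 4), dtype='float32'))
        y = fluid.layers.transpose(x, perm=[1, 0, 2])
        print(y.shape)  # [3, 2, 4]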
...@@ -5376,6 +5466,8 @@ def im2sequence(input,
out_stride=1,
name=None):
"""
:api_attr: Static Graph
Extracts image patches from the input tensor to form a tensor of shape
{input.batch_size * output_height * output_width, filter_size_height *
filter_size_width * input.channels}. This op use filter to scan images
...@@ -5513,6 +5605,8 @@ def im2sequence(input,
@templatedoc()
def row_conv(input, future_context_size, param_attr=None, act=None):
"""
:api_attr: Static Graph
${comment}
Args:
...@@ -5633,6 +5727,10 @@ def multiplex(inputs, index):
def smooth_l1(x, y, inside_weight=None, outside_weight=None, sigma=None):
"""
:alias_main: paddle.nn.functional.smooth_l1
:alias: paddle.nn.functional.smooth_l1,paddle.nn.functional.loss.smooth_l1
:old_api: paddle.fluid.layers.smooth_l1
This layer computes the smooth L1 loss for Variable :attr:`x` and :attr:`y`.
It takes the first dimension of :attr:`x` and :attr:`y` as batch size.
For each instance, it computes the smooth L1 loss element by element first
...@@ -5820,6 +5918,8 @@ def one_hot(input, depth, allow_out_of_range=False):
def autoincreased_step_counter(counter_name=None, begin=1, step=1):
"""
:api_attr: Static Graph
Create an auto-increase variable. which will be automatically increased
by 1 in every iteration. By default, the first return of this counter is 1,
and the step size is 1.
...@@ -5864,6 +5964,10 @@ def autoincreased_step_counter(counter_name=None, begin=1, step=1):
def reshape(x, shape, actual_shape=None, act=None, inplace=False, name=None):
"""
:alias_main: paddle.reshape
:alias: paddle.reshape,paddle.tensor.reshape,paddle.tensor.manipulation.reshape
:old_api: paddle.fluid.layers.reshape
This operator changes the shape of ``x`` without changing its data.
The target shape can be given by ``shape`` or ``actual_shape``.
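A short sketch of ``reshape``; a single ``-1`` entry lets the remaining dimension be inferred (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.arange(12).reshape(2, 6).astype('float32'))
        y = fluid.layers.reshape(x, shape=[3, -1])
        print(y.shape)  # [3, 4], data is unchanged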
...@@ -6359,6 +6463,10 @@ def lod_append(x, level):
def lrn(input, n=5, k=1.0, alpha=1e-4, beta=0.75, name=None,
data_format='NCHW'):
"""
:alias_main: paddle.nn.functional.lrn
:alias: paddle.nn.functional.lrn,paddle.nn.functional.norm.lrn
:old_api: paddle.fluid.layers.lrn
This operator implements the Local Response Normalization Layer.
This layer performs a type of "lateral inhibition" by normalizing over local input regions.
For more information, please refer to `ImageNet Classification with Deep Convolutional Neural Networks <https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf>`_
...@@ -6445,6 +6553,10 @@ def lrn(input, n=5, k=1.0, alpha=1e-4, beta=0.75, name=None,
def pad(x, paddings, pad_value=0., name=None):
"""
:alias_main: paddle.nn.functional.pad
:alias: paddle.nn.functional.pad,paddle.nn.functional.common.pad
:old_api: paddle.fluid.layers.pad
This op will pad a tensor with a constant value given by :attr:`pad_value`, and the
padded shape is specified by :attr:`paddings`.
...@@ -6610,6 +6722,10 @@ def label_smooth(label,
dtype="float32",
name=None):
"""
:alias_main: paddle.nn.functional.label_smooth
:alias: paddle.nn.functional.label_smooth,paddle.nn.functional.common.label_smooth
:old_api: paddle.fluid.layers.label_smooth
Label smoothing is a mechanism to regularize the classifier layer and is called
label-smoothing regularization (LSR).
...@@ -6689,6 +6805,10 @@ def roi_pool(input,
spatial_scale=1.0,
rois_lod=None):
"""
:alias_main: paddle.nn.functional.roi_pool
:alias: paddle.nn.functional.roi_pool,paddle.nn.functional.vision.roi_pool
:old_api: paddle.fluid.layers.roi_pool
This operator implements the roi_pooling layer.
Region of interest pooling (also known as RoI pooling) is to perform max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).
...@@ -6776,6 +6896,10 @@ def roi_align(input,
name=None,
rois_lod=None):
"""
:alias_main: paddle.nn.functional.roi_align
:alias: paddle.nn.functional.roi_align,paddle.nn.functional.vision.roi_align
:old_api: paddle.fluid.layers.roi_align
${comment}
Args:
...@@ -6840,6 +6964,10 @@ def roi_align(input,
def dice_loss(input, label, epsilon=0.00001, name=None):
"""
:alias_main: paddle.nn.functional.dice_loss
:alias: paddle.nn.functional.dice_loss,paddle.nn.functional.loss.dice_loss
:old_api: paddle.fluid.layers.dice_loss
Dice loss for comparing the similarity between the input predictions and the label.
This implementation is for binary classification, where the input is sigmoid
predictions of each pixel, usually used for segmentation task. The dice loss can
...@@ -6899,6 +7027,10 @@ def image_resize(input,
align_mode=1,
data_format='NCHW'):
"""
:alias_main: paddle.nn.functional.image_resize
:alias: paddle.nn.functional.image_resize,paddle.nn.functional.vision.image_resize
:old_api: paddle.fluid.layers.image_resize
This op resizes a batch of images.
The input must be a 3-D Tensor of the shape (num_batches, channels, in_w)
...@@ -7468,6 +7600,10 @@ def resize_bilinear(input,
align_mode=1,
data_format='NCHW'):
"""
:alias_main: paddle.nn.functional.resize_bilinear
:alias: paddle.nn.functional.resize_bilinear,paddle.nn.functional.vision.resize_bilinear
:old_api: paddle.fluid.layers.resize_bilinear
This op resizes the input by performing bilinear interpolation based on given
output shape which specified by actual_shape, out_shape and scale
in priority order.
...@@ -7631,6 +7767,10 @@ def resize_trilinear(input,
align_mode=1,
data_format='NCDHW'):
"""
:alias_main: paddle.nn.functional.resize_trilinear
:alias: paddle.nn.functional.resize_trilinear,paddle.nn.functional.vision.resize_trilinear
:old_api: paddle.fluid.layers.resize_trilinear
This op resizes the input by performing trilinear interpolation based on given
output shape which specified by actual_shape, out_shape and scale
in priority order.
...@@ -7795,6 +7935,10 @@ def resize_nearest(input,
align_corners=True,
data_format='NCHW'):
"""
:alias_main: paddle.nn.functional.resize_nearest
:alias: paddle.nn.functional.resize_nearest,paddle.nn.functional.vision.resize_nearest
:old_api: paddle.fluid.layers.resize_nearest
This op resizes the input by performing nearest neighbor interpolation in both the
height direction and the width direction based on given output shape
which is specified by actual_shape, out_shape and scale in priority order.
...@@ -8132,6 +8276,10 @@ def gather_nd(input, index, name=None):
def scatter(input, index, updates, name=None, overwrite=True):
"""
:alias_main: paddle.scatter
:alias: paddle.scatter,paddle.tensor.scatter,paddle.tensor.manipulation.scatter
:old_api: paddle.fluid.layers.scatter
**Scatter Layer**
Output is obtained by updating the input on selected indices based on updates.
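A minimal sketch of ``scatter`` with ``overwrite=True``, where row ``index[i]`` of the input is replaced by row ``i`` of ``updates`` (dygraph assumed, values illustrative):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.ones((3, 2), dtype='float32'))
        index = fluid.dygraph.to_variable(np.array([2, 0], dtype='int64'))
        updates = fluid.dygraph.to_variable(
            np.array([[9., 9.], [5., 5.]], dtype='float32'))
        out = fluid.layers.scatter(x, index, updates, overwrite=True)
        print(out.numpy())
        # [[5. 5.]
        #  [1. 1.]
        #  [9. 9.]]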
...@@ -8392,6 +8540,10 @@ def random_crop(x, shape, seed=None):
def log(x, name=None):
"""
:alias_main: paddle.log
:alias: paddle.log,paddle.tensor.log,paddle.tensor.math.log
:old_api: paddle.fluid.layers.log
Calculates the natural log of the given input tensor, element-wise.
.. math::
...@@ -8481,6 +8633,10 @@ def relu(x, name=None):
def selu(x, scale=None, alpha=None, name=None):
"""
:alias_main: paddle.nn.functional.selu
:alias: paddle.nn.functional.selu,paddle.nn.functional.activation.selu
:old_api: paddle.fluid.layers.selu
Selu Operator.
The equation is:
...@@ -8718,6 +8874,10 @@ def crop(x, shape=None, offsets=None, name=None):
def crop_tensor(x, shape=None, offsets=None, name=None):
"""
:alias_main: paddle.crop_tensor
:alias: paddle.crop_tensor,paddle.tensor.crop_tensor,paddle.tensor.creation.crop_tensor
:old_api: paddle.fluid.layers.crop_tensor
Crop input into output, as specified by offsets and shape.
.. code-block:: text
...@@ -8914,6 +9074,10 @@ def crop_tensor(x, shape=None, offsets=None, name=None):
def affine_grid(theta, out_shape, name=None):
"""
:alias_main: paddle.nn.functional.affine_grid
:alias: paddle.nn.functional.affine_grid,paddle.nn.functional.vision.affine_grid
:old_api: paddle.fluid.layers.affine_grid
It generates a grid of (x,y) coordinates using the parameters of
the affine transformation that correspond to a set of points where
the input feature map should be sampled to produce the transformed
...@@ -8985,6 +9149,10 @@ def pad2d(input,
data_format="NCHW",
name=None):
"""
:alias_main: paddle.nn.functional.pad2d
:alias: paddle.nn.functional.pad2d,paddle.nn.functional.common.pad2d
:old_api: paddle.fluid.layers.pad2d
Pad 2-d images according to 'paddings' and 'mode'.
If mode is 'reflect', paddings[0] and paddings[1] must be no greater
than height-1. And the width dimension has the same condition.
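A small sketch of ``pad2d`` in reflect mode; the paddings list is assumed here to follow the [top, bottom, left, right] order used by this op (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.arange(9, dtype='float32').reshape(1, 1, 3, 3))
        # one reflected row/column on every side of the 3x3 map
        y = fluid.layers.pad2d(x, paddings=[1, 1, 1, 1], mode='reflect')
        print(y.shape)  # [1, 1, 5, 5]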
...@@ -9082,6 +9250,10 @@ def pad2d(input,
@templatedoc()
def elu(x, alpha=1.0, name=None):
"""
:alias_main: paddle.nn.functional.elu
:alias: paddle.nn.functional.elu,paddle.nn.functional.activation.elu
:old_api: paddle.fluid.layers.elu
${comment}
Args:
x(${x_type}): ${x_comment}
...@@ -9120,6 +9292,10 @@ def elu(x, alpha=1.0, name=None):
@templatedoc()
def relu6(x, threshold=6.0, name=None):
"""
:alias_main: paddle.nn.functional.relu6
:alias: paddle.nn.functional.relu6,paddle.nn.functional.activation.relu6
:old_api: paddle.fluid.layers.relu6
${comment}
Args:
...@@ -9212,6 +9388,10 @@ def pow(x, factor=1.0, name=None):
@templatedoc()
def stanh(x, scale_a=0.67, scale_b=1.7159, name=None):
"""
:alias_main: paddle.stanh
:alias: paddle.stanh,paddle.tensor.stanh,paddle.tensor.math.stanh
:old_api: paddle.fluid.layers.stanh
${comment}
Args:
x(${x_type}): ${x_comment}
...@@ -9260,6 +9440,10 @@ def stanh(x, scale_a=0.67, scale_b=1.7159, name=None):
@templatedoc()
def hard_sigmoid(x, slope=0.2, offset=0.5, name=None):
"""
:alias_main: paddle.nn.functional.hard_sigmoid
:alias: paddle.nn.functional.hard_sigmoid,paddle.nn.functional.activation.hard_sigmoid
:old_api: paddle.fluid.layers.hard_sigmoid
${comment}
Parameters:
x (${x_type}): ${x_comment}
...@@ -9297,6 +9481,10 @@ def hard_sigmoid(x, slope=0.2, offset=0.5, name=None):
@templatedoc()
def swish(x, beta=1.0, name=None):
"""
:alias_main: paddle.nn.functional.swish
:alias: paddle.nn.functional.swish,paddle.nn.functional.activation.swish
:old_api: paddle.fluid.layers.swish
Elementwise swish activation function. See `Searching for Activation Functions <https://arxiv.org/abs/1710.05941>`_ for more details.
Equation:
...@@ -9377,6 +9565,8 @@ def swish(x, beta=1.0, name=None):
def prelu(x, mode, param_attr=None, name=None):
"""
:api_attr: Static Graph
Equation:
.. math::
...@@ -9448,6 +9638,10 @@ def prelu(x, mode, param_attr=None, name=None):
@templatedoc()
def brelu(x, t_min=0.0, t_max=24.0, name=None):
"""
:alias_main: paddle.nn.functional.brelu
:alias: paddle.nn.functional.brelu,paddle.nn.functional.activation.brelu
:old_api: paddle.fluid.layers.brelu
${comment}
Args:
x(${x_type}): ${x_comment}
...@@ -9489,6 +9683,10 @@ def brelu(x, t_min=0.0, t_max=24.0, name=None):
@templatedoc()
def leaky_relu(x, alpha=0.02, name=None):
"""
:alias_main: paddle.nn.functional.leaky_relu
:alias: paddle.nn.functional.leaky_relu,paddle.nn.functional.activation.leaky_relu
:old_api: paddle.fluid.layers.leaky_relu
${comment}
Args:
x(${x_type}): ${x_comment}
...@@ -9534,6 +9732,10 @@ def leaky_relu(x, alpha=0.02, name=None):
def soft_relu(x, threshold=40.0, name=None):
"""
:alias_main: paddle.nn.functional.soft_relu
:alias: paddle.nn.functional.soft_relu,paddle.nn.functional.activation.soft_relu
:old_api: paddle.fluid.layers.soft_relu
SoftRelu Activation Operator.
$out = \ln(1 + \exp(\max(\min(x, threshold), -threshold)))$
...@@ -9849,6 +10051,10 @@ def filter_by_instag(ins, ins_tag, filter_tag, is_lod, out_val_if_empty=0):
def unstack(x, axis=0, num=None):
"""
:alias_main: paddle.unstack
:alias: paddle.unstack,paddle.tensor.unstack,paddle.tensor.manipulation.unstack
:old_api: paddle.fluid.layers.unstack
**UnStack Layer**
This layer unstacks input Tensor :code:`x` into several Tensors along :code:`axis`.
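A brief sketch of ``unstack`` along ``axis=0`` (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.arange(6, dtype='float32').reshape(2, 3))
        ys = fluid.layers.unstack(x, axis=0)
        print(len(ys), ys[0].numpy())  # 2 [0. 1. 2.]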
...@@ -9899,6 +10105,10 @@ def unstack(x, axis=0, num=None):
def expand(x, expand_times, name=None):
"""
:alias_main: paddle.expand
:alias: paddle.expand,paddle.tensor.expand,paddle.tensor.manipulation.expand
:old_api: paddle.fluid.layers.expand
This operation tiles ``x`` multiple times according to the parameter ``expand_times``.
The times number for each dimension of ``x`` is set by the parameter ``expand_times``.
The rank of ``x`` should be less than or equal to 6. Please note that size of ``expand_times`` must be the same
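A short sketch of ``expand``; each entry of ``expand_times`` tiles the matching dimension of ``x`` (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.array([[1., 2.], [3., 4.]], dtype='float32'))
        y = fluid.layers.expand(x, expand_times=[2, 3])
        print(y.shape)  # [4, 6]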
...@@ -10015,6 +10225,10 @@ def expand(x, expand_times, name=None):
def expand_as(x, target_tensor, name=None):
"""
:alias_main: paddle.expand_as
:alias: paddle.expand_as,paddle.tensor.expand_as,paddle.tensor.manipulation.expand_as
:old_api: paddle.fluid.layers.expand_as
expand_as operator tiles to the input by given expand tensor. You should set expand tensor
for each dimension by providing tensor 'target_tensor'. The rank of X
should be in [1, 6]. Please note that size of 'target_tensor' must be the same
...@@ -10471,6 +10685,10 @@ def sum(x):
@templatedoc()
def slice(input, axes, starts, ends):
"""
:alias_main: paddle.slice
:alias: paddle.slice,paddle.tensor.slice,paddle.tensor.manipulation.slice
:old_api: paddle.fluid.layers.slice
This operator produces a slice of ``input`` along multiple axes. Similar to numpy:
https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
Slice uses ``axes``, ``starts`` and ``ends`` attributes to specify the start and
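A small sketch of ``slice``; axes not listed in ``axes`` are left untouched (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.arange(24, dtype='float32').reshape(2, 3, 4))
        # keep axis 0, take rows 0:2 of axis 1 and columns 1:3 of axis 2
        y = fluid.layers.slice(x, axes=[1, 2], starts=[0, 1], ends=[2, 3])
        print(y.shape)  # [2, 2, 2]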
...@@ -10634,6 +10852,10 @@ def slice(input, axes, starts, ends):
@templatedoc()
def strided_slice(input, axes, starts, ends, strides):
"""
:alias_main: paddle.strided_slice
:alias: paddle.strided_slice,paddle.tensor.strided_slice,paddle.tensor.manipulation.strided_slice
:old_api: paddle.fluid.layers.strided_slice
This operator produces a slice of ``input`` along multiple axes. Similar to numpy:
https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
Slice uses ``axes``, ``starts`` and ``ends`` attributes to specify the start and
...@@ -10839,6 +11061,10 @@ def strided_slice(input, axes, starts, ends, strides):
def shape(input):
"""
:alias_main: paddle.shape
:alias: paddle.shape,paddle.tensor.shape,paddle.tensor.attribute.shape
:old_api: paddle.fluid.layers.shape
**Shape Layer**
Get the shape of the input.
...@@ -10878,6 +11104,10 @@ def shape(input):
def rank(input):
"""
:alias_main: paddle.rank
:alias: paddle.rank,paddle.tensor.rank,paddle.tensor.attribute.rank
:old_api: paddle.fluid.layers.rank
The OP returns the number of dimensions for a tensor, which is a 0-D int32 Tensor.
Args:
...@@ -10959,6 +11189,10 @@ def _elementwise_op(helper):
def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
"""
:alias_main: paddle.scale
:alias: paddle.scale,paddle.tensor.scale,paddle.tensor.math.scale
:old_api: paddle.fluid.layers.scale
Scale operator.
Putting scale and bias to the input Tensor as following:
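A quick sketch of how ``bias_after_scale`` switches between out = scale * x + bias and out = scale * (x + bias) (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.array([1., 2., 3.], dtype='float32'))
        print(fluid.layers.scale(x, scale=2.0, bias=1.0).numpy())
        # [3. 5. 7.]  (2*x + 1)
        print(fluid.layers.scale(x, scale=2.0, bias=1.0,
                                 bias_after_scale=False).numpy())
        # [4. 6. 8.]  (2*(x + 1))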
...@@ -11053,6 +11287,10 @@ def scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None):
def elementwise_add(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_add
:alias: paddle.elementwise_add,paddle.tensor.elementwise_add,paddle.tensor.math.elementwise_add
:old_api: paddle.fluid.layers.elementwise_add
Examples:
.. code-block:: python
...@@ -11137,6 +11375,10 @@ Examples:
def elementwise_div(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_div
:alias: paddle.elementwise_div,paddle.tensor.elementwise_div,paddle.tensor.math.elementwise_div
:old_api: paddle.fluid.layers.elementwise_div
Examples:
.. code-block:: python
...@@ -11221,6 +11463,10 @@ Examples:
def elementwise_sub(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_sub
:alias: paddle.elementwise_sub,paddle.tensor.elementwise_sub,paddle.tensor.math.elementwise_sub
:old_api: paddle.fluid.layers.elementwise_sub
Examples:
.. code-block:: python
...@@ -11305,6 +11551,10 @@ Examples:
def elementwise_mul(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_mul
:alias: paddle.elementwise_mul,paddle.tensor.elementwise_mul,paddle.tensor.math.elementwise_mul
:old_api: paddle.fluid.layers.elementwise_mul
Examples:
.. code-block:: python
...@@ -11389,6 +11639,10 @@ Examples:
def elementwise_max(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_max
:alias: paddle.elementwise_max,paddle.tensor.elementwise_max,paddle.tensor.math.elementwise_max
:old_api: paddle.fluid.layers.elementwise_max
Examples:
.. code-block:: python
...@@ -11447,6 +11701,10 @@ Examples:
def elementwise_min(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_min
:alias: paddle.elementwise_min,paddle.tensor.elementwise_min,paddle.tensor.math.elementwise_min
:old_api: paddle.fluid.layers.elementwise_min
Examples:
.. code-block:: python
...@@ -11503,6 +11761,10 @@ Examples:
def elementwise_pow(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_pow
:alias: paddle.elementwise_pow,paddle.tensor.elementwise_pow,paddle.tensor.math.elementwise_pow
:old_api: paddle.fluid.layers.elementwise_pow
Examples:
.. code-block:: python
...@@ -11535,6 +11797,10 @@ Examples:
def elementwise_mod(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_mod
:alias: paddle.elementwise_mod,paddle.tensor.elementwise_mod,paddle.tensor.math.elementwise_mod
:old_api: paddle.fluid.layers.elementwise_mod
Examples:
.. code-block:: python
...@@ -11568,6 +11834,10 @@ Examples:
def elementwise_floordiv(x, y, axis=-1, act=None, name=None):
"""
:alias_main: paddle.elementwise_floordiv
:alias: paddle.elementwise_floordiv,paddle.tensor.elementwise_floordiv,paddle.tensor.math.elementwise_floordiv
:old_api: paddle.fluid.layers.elementwise_floordiv
Examples:
.. code-block:: python
...@@ -11701,6 +11971,10 @@ def _logical_op(op_name, x, y, out=None, name=None, binary_op=True):
@templatedoc()
def logical_and(x, y, out=None, name=None):
"""
:alias_main: paddle.logical_and
:alias: paddle.logical_and,paddle.tensor.logical_and,paddle.tensor.logic.logical_and
:old_api: paddle.fluid.layers.logical_and
logical_and Operator
It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean LoDTensor or Tensor.
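A minimal sketch of ``logical_and`` on boolean tensors (dygraph assumed; passing a boolean ndarray to ``to_variable`` is assumed to be supported here):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        a = fluid.dygraph.to_variable(np.array([True, True, False, False]))
        b = fluid.dygraph.to_variable(np.array([True, False, True, False]))
        print(fluid.layers.logical_and(a, b).numpy())
        # [ True False False False]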
...@@ -11750,6 +12024,10 @@ def logical_and(x, y, out=None, name=None):
@templatedoc()
def logical_or(x, y, out=None, name=None):
"""
:alias_main: paddle.logical_or
:alias: paddle.logical_or,paddle.tensor.logical_or,paddle.tensor.logic.logical_or
:old_api: paddle.fluid.layers.logical_or
logical_or Operator
It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean LoDTensor or Tensor.
...@@ -11799,6 +12077,10 @@ def logical_or(x, y, out=None, name=None):
@templatedoc()
def logical_xor(x, y, out=None, name=None):
"""
:alias_main: paddle.logical_xor
:alias: paddle.logical_xor,paddle.tensor.logical_xor,paddle.tensor.logic.logical_xor
:old_api: paddle.fluid.layers.logical_xor
logical_xor Operator
It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean LoDTensor or Tensor.
...@@ -11848,6 +12130,10 @@ def logical_xor(x, y, out=None, name=None):
@templatedoc()
def logical_not(x, out=None, name=None):
"""
:alias_main: paddle.logical_not
:alias: paddle.logical_not,paddle.tensor.logical_not,paddle.tensor.logic.logical_not
:old_api: paddle.fluid.layers.logical_not
logical_not Operator
It operates element-wise on X, and returns the Out. X and Out are N-dim boolean LoDTensor or Tensor.
...@@ -11894,6 +12180,10 @@ def logical_not(x, out=None, name=None):
@templatedoc()
def clip(x, min, max, name=None):
"""
:alias_main: paddle.nn.clip
:alias: paddle.nn.clip,paddle.nn.clip.clip
:old_api: paddle.fluid.layers.clip
${comment}
Args:
...@@ -11989,6 +12279,10 @@ def clip_by_norm(x, max_norm, name=None):
@templatedoc()
def mean(x, name=None):
"""
:alias_main: paddle.mean
:alias: paddle.mean,paddle.tensor.mean,paddle.tensor.stat.mean
:old_api: paddle.fluid.layers.mean
${comment}
Args:
...@@ -12105,6 +12399,10 @@ def mul(x, y, x_num_col_dims=1, y_num_col_dims=1, name=None):
@templatedoc()
def maxout(x, groups, name=None, axis=1):
"""
:alias_main: paddle.nn.functional.maxout
:alias: paddle.nn.functional.maxout,paddle.nn.functional.activation.maxout
:old_api: paddle.fluid.layers.maxout
${comment}
Args:
...@@ -12155,6 +12453,10 @@ def maxout(x, groups, name=None, axis=1):
def space_to_depth(x, blocksize, name=None):
"""
:alias_main: paddle.nn.functional.space_to_depth
:alias: paddle.nn.functional.space_to_depth,paddle.nn.functional.vision.space_to_depth
:old_api: paddle.fluid.layers.space_to_depth
Gives a blocksize to space_to_depth the input LoDtensor with Layout: [batch, channel, height, width]
This op rearranges blocks of spatial data, into depth. More specifically, this op outputs a copy of \
...@@ -12262,6 +12564,10 @@ def affine_channel(x,
name=None,
act=None):
"""
:alias_main: paddle.nn.functional.affine_channel
:alias: paddle.nn.functional.affine_channel,paddle.nn.functional.vision.affine_channel
:old_api: paddle.fluid.layers.affine_channel
Applies a separate affine transformation to each channel of the input.
Useful for replacing spatial batch norm with its equivalent fixed
transformation. The input also can be 2D tensor and applies a affine
...@@ -12447,6 +12753,10 @@ def similarity_focus(input, axis, indexes, name=None):
def hash(input, hash_size, num_hash=1, name=None):
"""
:alias_main: paddle.nn.functional.hash
:alias: paddle.nn.functional.hash,paddle.nn.functional.lod.hash
:old_api: paddle.fluid.layers.hash
This OP hash the input to an integer less than the hash_size.
The hash algorithm we used was xxHash - Extremely fast hash algorithm
(https://github.com/Cyan4973/xxHash/tree/v0.6.5)
...@@ -12509,6 +12819,10 @@ def hash(input, hash_size, num_hash=1, name=None):
@templatedoc()
def grid_sampler(x, grid, name=None):
"""
:alias_main: paddle.nn.functional.grid_sampler
:alias: paddle.nn.functional.grid_sampler,paddle.nn.functional.vision.grid_sampler
:old_api: paddle.fluid.layers.grid_sampler
This operation samples input X by using bilinear interpolation based on
flow field grid, which is usually generated by :code:`affine_grid` . The grid of
shape [N, H, W, 2] is the concatenation of (x, y) coordinates
...@@ -12609,6 +12923,10 @@ def grid_sampler(x, grid, name=None):
def log_loss(input, label, epsilon=1e-4, name=None):
"""
:alias_main: paddle.nn.functional.log_loss
:alias: paddle.nn.functional.log_loss,paddle.nn.functional.loss.log_loss
:old_api: paddle.fluid.layers.log_loss
**Negative Log Loss Layer**
This layer accepts input predictions and target label and returns the
...@@ -12658,6 +12976,10 @@ def log_loss(input, label, epsilon=1e-4, name=None):
def add_position_encoding(input, alpha, beta, name=None):
"""
:alias_main: paddle.nn.functional.add_position_encoding
:alias: paddle.nn.functional.add_position_encoding,paddle.nn.functional.extension.add_position_encoding
:old_api: paddle.fluid.layers.add_position_encoding
This operator performs weighted sum of input feature at each position
(position in the sequence) and the corresponding position encoding.
...@@ -12730,6 +13052,8 @@ def bilinear_tensor_product(x,
param_attr=None,
bias_attr=None):
"""
:api_attr: Static Graph
**Bilinear Tensor Product Layer**
This layer performs bilinear tensor product on two inputs.
...@@ -12920,6 +13244,10 @@ def shuffle_channel(x, group, name=None):
@templatedoc()
def temporal_shift(x, seg_num, shift_ratio=0.25, name=None):
"""
:alias_main: paddle.nn.functional.temporal_shift
:alias: paddle.nn.functional.temporal_shift,paddle.nn.functional.extension.temporal_shift
:old_api: paddle.fluid.layers.temporal_shift
**Temporal Shift Operator**
${comment}
...@@ -13042,6 +13370,8 @@ class PyFuncRegistry(object):
@templatedoc()
def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
"""
:api_attr: Static Graph
This OP is used to register customized Python OP to Paddle Fluid. The design
principe of py_func is that LodTensor and numpy array can be converted to each
other easily. So you can use Python and numpy API to register a python OP.
...@@ -13269,6 +13599,10 @@ def psroi_pool(input,
pooled_width,
name=None):
"""
:alias_main: paddle.nn.functional.psroi_pool
:alias: paddle.nn.functional.psroi_pool,paddle.nn.functional.vision.psroi_pool
:old_api: paddle.fluid.layers.psroi_pool
${comment}
Parameters:
...@@ -13335,6 +13669,10 @@ def prroi_pool(input,
batch_roi_nums=None,
name=None):
"""
:alias_main: paddle.nn.functional.prroi_pool
:alias: paddle.nn.functional.prroi_pool,paddle.nn.functional.vision.prroi_pool
:old_api: paddle.fluid.layers.prroi_pool
The precise roi pooling implementation for paddle. Reference: https://arxiv.org/pdf/1807.11590.pdf
Args:
...@@ -13626,6 +13964,10 @@ def where(condition):
def sign(x):
"""
:alias_main: paddle.sign
:alias: paddle.sign,paddle.tensor.sign,paddle.tensor.math.sign
:old_api: paddle.fluid.layers.sign
This OP returns sign of every element in `x`: 1 for positive, -1 for negative and 0 for zero.
Args:
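A tiny sketch of ``sign`` (dygraph assumed):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(np.array([3., 0., -2.], dtype='float32'))
        print(fluid.layers.sign(x).numpy())  # [ 1.  0. -1.]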
...@@ -13659,6 +14001,10 @@ def sign(x):
def unique(x, dtype='int32'):
"""
:alias_main: paddle.unique
:alias: paddle.unique,paddle.tensor.unique,paddle.tensor.manipulation.unique
:old_api: paddle.fluid.layers.unique
**unique**
Return a unique tensor for `x` and an index tensor pointing to this unique tensor.
...@@ -13772,6 +14118,8 @@ def deformable_conv(input,
modulated=True,
name=None):
"""
:api_attr: Static Graph
**Deformable Convolution op**
Compute 2-D deformable convolution on 4-D input.
...@@ -13984,6 +14332,9 @@ def deformable_conv(input,
def unfold(x, kernel_sizes, strides=1, paddings=0, dilations=1, name=None):
"""
:alias_main: paddle.nn.functional.unfold
:alias: paddle.nn.functional.unfold,paddle.nn.functional.common.unfold
:old_api: paddle.fluid.layers.unfold
This op returns a col buffer of sliding local blocks of input x, also known
as im2col for batched 2D image tensors. For each block under the convolution filter,
...@@ -14119,6 +14470,10 @@ def deformable_roi_pooling(input,
position_sensitive=False,
name=None):
"""
:alias_main: paddle.nn.functional.deformable_roi_pooling
:alias: paddle.nn.functional.deformable_roi_pooling,paddle.nn.functional.vision.deformable_roi_pooling
:old_api: paddle.fluid.layers.deformable_roi_pooling
Deformable ROI Pooling Layer
Performs deformable region-of-interest pooling on inputs. As described
...@@ -14348,6 +14703,10 @@ def shard_index(input, index_num, nshards, shard_id, ignore_value=-1):
@templatedoc()
def hard_swish(x, threshold=6.0, scale=6.0, offset=3.0, name=None):
"""
:alias_main: paddle.nn.functional.hard_swish
:alias: paddle.nn.functional.hard_swish,paddle.nn.functional.activation.hard_swish
:old_api: paddle.fluid.layers.hard_swish
This operator implements the hard_swish activation function.
Hard_swish is proposed in MobileNetV3, and performs better in computational stability and efficiency compared to swish function.
For more details please refer to: https://arxiv.org/pdf/1905.02244.pdf
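With the default threshold=6.0, scale=6.0 and offset=3.0, this computes out = x * min(max(x + 3, 0), 6) / 6; a small sketch (dygraph assumed, values approximate):

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        x = fluid.dygraph.to_variable(
            np.array([-4., 0., 2., 7.], dtype='float32'))
        print(fluid.layers.hard_swish(x).numpy())
        # [0.     0.     1.6667 7.    ]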
...