Commit b86d7f21 authored by Travis CI

Deploy to GitHub Pages: 95ea54fd

Parent 7ffe51a4
@@ -8,7 +8,7 @@ data_feeder
 DataFeeder
 ----------
-.. autoclass:: paddle.v2.fluid.data_feeder.DataFeeder
+.. autoclass:: paddle.fluid.data_feeder.DataFeeder
 :members:
 :noindex:
@@ -8,14 +8,14 @@ evaluator
 Accuracy
 --------
-.. autoclass:: paddle.v2.fluid.evaluator.Accuracy
+.. autoclass:: paddle.fluid.evaluator.Accuracy
 :members:
 :noindex:
 ChunkEvaluator
 --------------
-.. autoclass:: paddle.v2.fluid.evaluator.ChunkEvaluator
+.. autoclass:: paddle.fluid.evaluator.ChunkEvaluator
 :members:
 :noindex:
@@ -8,25 +8,25 @@ executor
 Executor
 --------
-.. autoclass:: paddle.v2.fluid.executor.Executor
+.. autoclass:: paddle.fluid.executor.Executor
 :members:
 :noindex:
 global_scope
 ------------
-.. autofunction:: paddle.v2.fluid.executor.global_scope
+.. autofunction:: paddle.fluid.executor.global_scope
 :noindex:
 scope_guard
 -----------
-.. autofunction:: paddle.v2.fluid.executor.scope_guard
+.. autofunction:: paddle.fluid.executor.scope_guard
 :noindex:
 switch_scope
 ------------
-.. autofunction:: paddle.v2.fluid.executor.switch_scope
+.. autofunction:: paddle.fluid.executor.switch_scope
 :noindex:
@@ -8,28 +8,28 @@ initializer
 Constant
 --------
-.. autoclass:: paddle.v2.fluid.initializer.Constant
+.. autoclass:: paddle.fluid.initializer.Constant
 :members:
 :noindex:
 Uniform
 -------
-.. autoclass:: paddle.v2.fluid.initializer.Uniform
+.. autoclass:: paddle.fluid.initializer.Uniform
 :members:
 :noindex:
 Normal
 ------
-.. autoclass:: paddle.v2.fluid.initializer.Normal
+.. autoclass:: paddle.fluid.initializer.Normal
 :members:
 :noindex:
 Xavier
 ------
-.. autoclass:: paddle.v2.fluid.initializer.Xavier
+.. autoclass:: paddle.fluid.initializer.Xavier
 :members:
 :noindex:
@@ -8,54 +8,54 @@ io
 save_vars
 ---------
-.. autofunction:: paddle.v2.fluid.io.save_vars
+.. autofunction:: paddle.fluid.io.save_vars
 :noindex:
 save_params
 -----------
-.. autofunction:: paddle.v2.fluid.io.save_params
+.. autofunction:: paddle.fluid.io.save_params
 :noindex:
 save_persistables
 -----------------
-.. autofunction:: paddle.v2.fluid.io.save_persistables
+.. autofunction:: paddle.fluid.io.save_persistables
 :noindex:
 load_vars
 ---------
-.. autofunction:: paddle.v2.fluid.io.load_vars
+.. autofunction:: paddle.fluid.io.load_vars
 :noindex:
 load_params
 -----------
-.. autofunction:: paddle.v2.fluid.io.load_params
+.. autofunction:: paddle.fluid.io.load_params
 :noindex:
 load_persistables
 -----------------
-.. autofunction:: paddle.v2.fluid.io.load_persistables
+.. autofunction:: paddle.fluid.io.load_persistables
 :noindex:
 save_inference_model
 --------------------
-.. autofunction:: paddle.v2.fluid.io.save_inference_model
+.. autofunction:: paddle.fluid.io.save_inference_model
 :noindex:
 load_inference_model
 --------------------
-.. autofunction:: paddle.v2.fluid.io.load_inference_model
+.. autofunction:: paddle.fluid.io.load_inference_model
 :noindex:
 get_inference_program
 ---------------------
-.. autofunction:: paddle.v2.fluid.io.get_inference_program
+.. autofunction:: paddle.fluid.io.get_inference_program
 :noindex:
@@ -11,167 +11,167 @@ control_flow
 split_lod_tensor
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.split_lod_tensor
+.. autofunction:: paddle.fluid.layers.split_lod_tensor
 :noindex:
 merge_lod_tensor
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.merge_lod_tensor
+.. autofunction:: paddle.fluid.layers.merge_lod_tensor
 :noindex:
 BlockGuard
 ----------
-.. autoclass:: paddle.v2.fluid.layers.BlockGuard
+.. autoclass:: paddle.fluid.layers.BlockGuard
 :members:
 :noindex:
 BlockGuardWithCompletion
 ------------------------
-.. autoclass:: paddle.v2.fluid.layers.BlockGuardWithCompletion
+.. autoclass:: paddle.fluid.layers.BlockGuardWithCompletion
 :members:
 :noindex:
 StaticRNNMemoryLink
 -------------------
-.. autoclass:: paddle.v2.fluid.layers.StaticRNNMemoryLink
+.. autoclass:: paddle.fluid.layers.StaticRNNMemoryLink
 :members:
 :noindex:
 WhileGuard
 ----------
-.. autoclass:: paddle.v2.fluid.layers.WhileGuard
+.. autoclass:: paddle.fluid.layers.WhileGuard
 :members:
 :noindex:
 While
 -----
-.. autoclass:: paddle.v2.fluid.layers.While
+.. autoclass:: paddle.fluid.layers.While
 :members:
 :noindex:
 lod_rank_table
 --------------
-.. autofunction:: paddle.v2.fluid.layers.lod_rank_table
+.. autofunction:: paddle.fluid.layers.lod_rank_table
 :noindex:
 max_sequence_len
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.max_sequence_len
+.. autofunction:: paddle.fluid.layers.max_sequence_len
 :noindex:
 topk
 ----
-.. autofunction:: paddle.v2.fluid.layers.topk
+.. autofunction:: paddle.fluid.layers.topk
 :noindex:
 lod_tensor_to_array
 -------------------
-.. autofunction:: paddle.v2.fluid.layers.lod_tensor_to_array
+.. autofunction:: paddle.fluid.layers.lod_tensor_to_array
 :noindex:
 array_to_lod_tensor
 -------------------
-.. autofunction:: paddle.v2.fluid.layers.array_to_lod_tensor
+.. autofunction:: paddle.fluid.layers.array_to_lod_tensor
 :noindex:
 increment
 ---------
-.. autofunction:: paddle.v2.fluid.layers.increment
+.. autofunction:: paddle.fluid.layers.increment
 :noindex:
 array_write
 -----------
-.. autofunction:: paddle.v2.fluid.layers.array_write
+.. autofunction:: paddle.fluid.layers.array_write
 :noindex:
 create_array
 ------------
-.. autofunction:: paddle.v2.fluid.layers.create_array
+.. autofunction:: paddle.fluid.layers.create_array
 :noindex:
 less_than
 ---------
-.. autofunction:: paddle.v2.fluid.layers.less_than
+.. autofunction:: paddle.fluid.layers.less_than
 :noindex:
 array_read
 ----------
-.. autofunction:: paddle.v2.fluid.layers.array_read
+.. autofunction:: paddle.fluid.layers.array_read
 :noindex:
 shrink_memory
 -------------
-.. autofunction:: paddle.v2.fluid.layers.shrink_memory
+.. autofunction:: paddle.fluid.layers.shrink_memory
 :noindex:
 array_length
 ------------
-.. autofunction:: paddle.v2.fluid.layers.array_length
+.. autofunction:: paddle.fluid.layers.array_length
 :noindex:
 IfElse
 ------
-.. autoclass:: paddle.v2.fluid.layers.IfElse
+.. autoclass:: paddle.fluid.layers.IfElse
 :members:
 :noindex:
 DynamicRNN
 ----------
-.. autoclass:: paddle.v2.fluid.layers.DynamicRNN
+.. autoclass:: paddle.fluid.layers.DynamicRNN
 :members:
 :noindex:
 ConditionalBlock
 ----------------
-.. autoclass:: paddle.v2.fluid.layers.ConditionalBlock
+.. autoclass:: paddle.fluid.layers.ConditionalBlock
 :members:
 :noindex:
 StaticRNN
 ---------
-.. autoclass:: paddle.v2.fluid.layers.StaticRNN
+.. autoclass:: paddle.fluid.layers.StaticRNN
 :members:
 :noindex:
 reorder_lod_tensor_by_rank
 --------------------------
-.. autofunction:: paddle.v2.fluid.layers.reorder_lod_tensor_by_rank
+.. autofunction:: paddle.fluid.layers.reorder_lod_tensor_by_rank
 :noindex:
 ParallelDo
 ----------
-.. autoclass:: paddle.v2.fluid.layers.ParallelDo
+.. autoclass:: paddle.fluid.layers.ParallelDo
 :members:
 :noindex:
 Print
 -----
-.. autofunction:: paddle.v2.fluid.layers.Print
+.. autofunction:: paddle.fluid.layers.Print
 :noindex:
 device
@@ -180,7 +180,7 @@ device
 get_places
 ----------
-.. autofunction:: paddle.v2.fluid.layers.get_places
+.. autofunction:: paddle.fluid.layers.get_places
 :noindex:
 io
@@ -189,27 +189,27 @@ io
 data
 ----
-.. autofunction:: paddle.v2.fluid.layers.data
+.. autofunction:: paddle.fluid.layers.data
 :noindex:
 BlockGuardServ
 --------------
-.. autoclass:: paddle.v2.fluid.layers.BlockGuardServ
+.. autoclass:: paddle.fluid.layers.BlockGuardServ
 :members:
 :noindex:
 ListenAndServ
 -------------
-.. autoclass:: paddle.v2.fluid.layers.ListenAndServ
+.. autoclass:: paddle.fluid.layers.ListenAndServ
 :members:
 :noindex:
 Send
 ----
-.. autofunction:: paddle.v2.fluid.layers.Send
+.. autofunction:: paddle.fluid.layers.Send
 :noindex:
 nn
@@ -218,259 +218,259 @@ nn
 fc
 --
-.. autofunction:: paddle.v2.fluid.layers.fc
+.. autofunction:: paddle.fluid.layers.fc
 :noindex:
 embedding
 ---------
-.. autofunction:: paddle.v2.fluid.layers.embedding
+.. autofunction:: paddle.fluid.layers.embedding
 :noindex:
 dynamic_lstm
 ------------
-.. autofunction:: paddle.v2.fluid.layers.dynamic_lstm
+.. autofunction:: paddle.fluid.layers.dynamic_lstm
 :noindex:
 dynamic_lstmp
 -------------
-.. autofunction:: paddle.v2.fluid.layers.dynamic_lstmp
+.. autofunction:: paddle.fluid.layers.dynamic_lstmp
 :noindex:
 dynamic_gru
 -----------
-.. autofunction:: paddle.v2.fluid.layers.dynamic_gru
+.. autofunction:: paddle.fluid.layers.dynamic_gru
 :noindex:
 gru_unit
 --------
-.. autofunction:: paddle.v2.fluid.layers.gru_unit
+.. autofunction:: paddle.fluid.layers.gru_unit
 :noindex:
 linear_chain_crf
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.linear_chain_crf
+.. autofunction:: paddle.fluid.layers.linear_chain_crf
 :noindex:
 crf_decoding
 ------------
-.. autofunction:: paddle.v2.fluid.layers.crf_decoding
+.. autofunction:: paddle.fluid.layers.crf_decoding
 :noindex:
 cos_sim
 -------
-.. autofunction:: paddle.v2.fluid.layers.cos_sim
+.. autofunction:: paddle.fluid.layers.cos_sim
 :noindex:
 cross_entropy
 -------------
-.. autofunction:: paddle.v2.fluid.layers.cross_entropy
+.. autofunction:: paddle.fluid.layers.cross_entropy
 :noindex:
 square_error_cost
 -----------------
-.. autofunction:: paddle.v2.fluid.layers.square_error_cost
+.. autofunction:: paddle.fluid.layers.square_error_cost
 :noindex:
 accuracy
 --------
-.. autofunction:: paddle.v2.fluid.layers.accuracy
+.. autofunction:: paddle.fluid.layers.accuracy
 :noindex:
 chunk_eval
 ----------
-.. autofunction:: paddle.v2.fluid.layers.chunk_eval
+.. autofunction:: paddle.fluid.layers.chunk_eval
 :noindex:
 sequence_conv
 -------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_conv
+.. autofunction:: paddle.fluid.layers.sequence_conv
 :noindex:
 conv2d
 ------
-.. autofunction:: paddle.v2.fluid.layers.conv2d
+.. autofunction:: paddle.fluid.layers.conv2d
 :noindex:
 sequence_pool
 -------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_pool
+.. autofunction:: paddle.fluid.layers.sequence_pool
 :noindex:
 pool2d
 ------
-.. autofunction:: paddle.v2.fluid.layers.pool2d
+.. autofunction:: paddle.fluid.layers.pool2d
 :noindex:
 batch_norm
 ----------
-.. autofunction:: paddle.v2.fluid.layers.batch_norm
+.. autofunction:: paddle.fluid.layers.batch_norm
 :noindex:
 layer_norm
 ----------
-.. autofunction:: paddle.v2.fluid.layers.layer_norm
+.. autofunction:: paddle.fluid.layers.layer_norm
 :noindex:
 beam_search_decode
 ------------------
-.. autofunction:: paddle.v2.fluid.layers.beam_search_decode
+.. autofunction:: paddle.fluid.layers.beam_search_decode
 :noindex:
 conv2d_transpose
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.conv2d_transpose
+.. autofunction:: paddle.fluid.layers.conv2d_transpose
 :noindex:
 sequence_expand
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_expand
+.. autofunction:: paddle.fluid.layers.sequence_expand
 :noindex:
 lstm_unit
 ---------
-.. autofunction:: paddle.v2.fluid.layers.lstm_unit
+.. autofunction:: paddle.fluid.layers.lstm_unit
 :noindex:
 reduce_sum
 ----------
-.. autofunction:: paddle.v2.fluid.layers.reduce_sum
+.. autofunction:: paddle.fluid.layers.reduce_sum
 :noindex:
 reduce_mean
 -----------
-.. autofunction:: paddle.v2.fluid.layers.reduce_mean
+.. autofunction:: paddle.fluid.layers.reduce_mean
 :noindex:
 reduce_max
 ----------
-.. autofunction:: paddle.v2.fluid.layers.reduce_max
+.. autofunction:: paddle.fluid.layers.reduce_max
 :noindex:
 reduce_min
 ----------
-.. autofunction:: paddle.v2.fluid.layers.reduce_min
+.. autofunction:: paddle.fluid.layers.reduce_min
 :noindex:
 sequence_first_step
 -------------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_first_step
+.. autofunction:: paddle.fluid.layers.sequence_first_step
 :noindex:
 sequence_last_step
 ------------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_last_step
+.. autofunction:: paddle.fluid.layers.sequence_last_step
 :noindex:
 dropout
 -------
-.. autofunction:: paddle.v2.fluid.layers.dropout
+.. autofunction:: paddle.fluid.layers.dropout
 :noindex:
 split
 -----
-.. autofunction:: paddle.v2.fluid.layers.split
+.. autofunction:: paddle.fluid.layers.split
 :noindex:
 ctc_greedy_decoder
 ------------------
-.. autofunction:: paddle.v2.fluid.layers.ctc_greedy_decoder
+.. autofunction:: paddle.fluid.layers.ctc_greedy_decoder
 :noindex:
 edit_distance
 -------------
-.. autofunction:: paddle.v2.fluid.layers.edit_distance
+.. autofunction:: paddle.fluid.layers.edit_distance
 :noindex:
 l2_normalize
 ------------
-.. autofunction:: paddle.v2.fluid.layers.l2_normalize
+.. autofunction:: paddle.fluid.layers.l2_normalize
 :noindex:
 matmul
 ------
-.. autofunction:: paddle.v2.fluid.layers.matmul
+.. autofunction:: paddle.fluid.layers.matmul
 :noindex:
 warpctc
 -------
-.. autofunction:: paddle.v2.fluid.layers.warpctc
+.. autofunction:: paddle.fluid.layers.warpctc
 :noindex:
 sequence_reshape
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_reshape
+.. autofunction:: paddle.fluid.layers.sequence_reshape
 :noindex:
 transpose
 ---------
-.. autofunction:: paddle.v2.fluid.layers.transpose
+.. autofunction:: paddle.fluid.layers.transpose
 :noindex:
 im2sequence
 -----------
-.. autofunction:: paddle.v2.fluid.layers.im2sequence
+.. autofunction:: paddle.fluid.layers.im2sequence
 :noindex:
 nce
 ---
-.. autofunction:: paddle.v2.fluid.layers.nce
+.. autofunction:: paddle.fluid.layers.nce
 :noindex:
 beam_search
 -----------
-.. autofunction:: paddle.v2.fluid.layers.beam_search
+.. autofunction:: paddle.fluid.layers.beam_search
 :noindex:
 row_conv
 --------
-.. autofunction:: paddle.v2.fluid.layers.row_conv
+.. autofunction:: paddle.fluid.layers.row_conv
 :noindex:
 multiplex
 ---------
-.. autofunction:: paddle.v2.fluid.layers.multiplex
+.. autofunction:: paddle.fluid.layers.multiplex
 :noindex:
 ops
@@ -479,259 +479,259 @@ ops
 mean
 ----
-.. autofunction:: paddle.v2.fluid.layers.mean
+.. autofunction:: paddle.fluid.layers.mean
 :noindex:
 mul
 ---
-.. autofunction:: paddle.v2.fluid.layers.mul
+.. autofunction:: paddle.fluid.layers.mul
 :noindex:
 reshape
 -------
-.. autofunction:: paddle.v2.fluid.layers.reshape
+.. autofunction:: paddle.fluid.layers.reshape
 :noindex:
 scale
 -----
-.. autofunction:: paddle.v2.fluid.layers.scale
+.. autofunction:: paddle.fluid.layers.scale
 :noindex:
 sigmoid_cross_entropy_with_logits
 ---------------------------------
-.. autofunction:: paddle.v2.fluid.layers.sigmoid_cross_entropy_with_logits
+.. autofunction:: paddle.fluid.layers.sigmoid_cross_entropy_with_logits
 :noindex:
 elementwise_add
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_add
+.. autofunction:: paddle.fluid.layers.elementwise_add
 :noindex:
 elementwise_div
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_div
+.. autofunction:: paddle.fluid.layers.elementwise_div
 :noindex:
 elementwise_sub
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_sub
+.. autofunction:: paddle.fluid.layers.elementwise_sub
 :noindex:
 elementwise_mul
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_mul
+.. autofunction:: paddle.fluid.layers.elementwise_mul
 :noindex:
 elementwise_max
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_max
+.. autofunction:: paddle.fluid.layers.elementwise_max
 :noindex:
 elementwise_min
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_min
+.. autofunction:: paddle.fluid.layers.elementwise_min
 :noindex:
 elementwise_pow
 ---------------
-.. autofunction:: paddle.v2.fluid.layers.elementwise_pow
+.. autofunction:: paddle.fluid.layers.elementwise_pow
 :noindex:
 clip
 ----
-.. autofunction:: paddle.v2.fluid.layers.clip
+.. autofunction:: paddle.fluid.layers.clip
 :noindex:
 clip_by_norm
 ------------
-.. autofunction:: paddle.v2.fluid.layers.clip_by_norm
+.. autofunction:: paddle.fluid.layers.clip_by_norm
 :noindex:
 sequence_softmax
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.sequence_softmax
+.. autofunction:: paddle.fluid.layers.sequence_softmax
 :noindex:
 sigmoid
 -------
-.. autofunction:: paddle.v2.fluid.layers.sigmoid
+.. autofunction:: paddle.fluid.layers.sigmoid
 :noindex:
 logsigmoid
 ----------
-.. autofunction:: paddle.v2.fluid.layers.logsigmoid
+.. autofunction:: paddle.fluid.layers.logsigmoid
 :noindex:
 exp
 ---
-.. autofunction:: paddle.v2.fluid.layers.exp
+.. autofunction:: paddle.fluid.layers.exp
 :noindex:
 relu
 ----
-.. autofunction:: paddle.v2.fluid.layers.relu
+.. autofunction:: paddle.fluid.layers.relu
 :noindex:
 tanh
 ----
-.. autofunction:: paddle.v2.fluid.layers.tanh
+.. autofunction:: paddle.fluid.layers.tanh
 :noindex:
 tanh_shrink
 -----------
-.. autofunction:: paddle.v2.fluid.layers.tanh_shrink
+.. autofunction:: paddle.fluid.layers.tanh_shrink
 :noindex:
 softshrink
 ----------
-.. autofunction:: paddle.v2.fluid.layers.softshrink
+.. autofunction:: paddle.fluid.layers.softshrink
 :noindex:
 sqrt
 ----
-.. autofunction:: paddle.v2.fluid.layers.sqrt
+.. autofunction:: paddle.fluid.layers.sqrt
 :noindex:
 abs
 ---
-.. autofunction:: paddle.v2.fluid.layers.abs
+.. autofunction:: paddle.fluid.layers.abs
 :noindex:
 ceil
 ----
-.. autofunction:: paddle.v2.fluid.layers.ceil
+.. autofunction:: paddle.fluid.layers.ceil
 :noindex:
 floor
 -----
-.. autofunction:: paddle.v2.fluid.layers.floor
+.. autofunction:: paddle.fluid.layers.floor
 :noindex:
 round
 -----
-.. autofunction:: paddle.v2.fluid.layers.round
+.. autofunction:: paddle.fluid.layers.round
 :noindex:
 reciprocal
 ----------
-.. autofunction:: paddle.v2.fluid.layers.reciprocal
+.. autofunction:: paddle.fluid.layers.reciprocal
 :noindex:
 log
 ---
-.. autofunction:: paddle.v2.fluid.layers.log
+.. autofunction:: paddle.fluid.layers.log
 :noindex:
 square
 ------
-.. autofunction:: paddle.v2.fluid.layers.square
+.. autofunction:: paddle.fluid.layers.square
 :noindex:
 softplus
 --------
-.. autofunction:: paddle.v2.fluid.layers.softplus
+.. autofunction:: paddle.fluid.layers.softplus
 :noindex:
 softsign
 --------
-.. autofunction:: paddle.v2.fluid.layers.softsign
+.. autofunction:: paddle.fluid.layers.softsign
 :noindex:
 brelu
 -----
-.. autofunction:: paddle.v2.fluid.layers.brelu
+.. autofunction:: paddle.fluid.layers.brelu
 :noindex:
 leaky_relu
 ----------
-.. autofunction:: paddle.v2.fluid.layers.leaky_relu
+.. autofunction:: paddle.fluid.layers.leaky_relu
 :noindex:
 soft_relu
 ---------
-.. autofunction:: paddle.v2.fluid.layers.soft_relu
+.. autofunction:: paddle.fluid.layers.soft_relu
 :noindex:
 elu
 ---
-.. autofunction:: paddle.v2.fluid.layers.elu
+.. autofunction:: paddle.fluid.layers.elu
 :noindex:
 relu6
 -----
-.. autofunction:: paddle.v2.fluid.layers.relu6
+.. autofunction:: paddle.fluid.layers.relu6
 :noindex:
 pow
 ---
-.. autofunction:: paddle.v2.fluid.layers.pow
+.. autofunction:: paddle.fluid.layers.pow
 :noindex:
 stanh
 -----
-.. autofunction:: paddle.v2.fluid.layers.stanh
+.. autofunction:: paddle.fluid.layers.stanh
 :noindex:
 hard_shrink
 -----------
-.. autofunction:: paddle.v2.fluid.layers.hard_shrink
+.. autofunction:: paddle.fluid.layers.hard_shrink
 :noindex:
 thresholded_relu
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.thresholded_relu
+.. autofunction:: paddle.fluid.layers.thresholded_relu
 :noindex:
 hard_sigmoid
 ------------
-.. autofunction:: paddle.v2.fluid.layers.hard_sigmoid
+.. autofunction:: paddle.fluid.layers.hard_sigmoid
 :noindex:
 swish
 -----
-.. autofunction:: paddle.v2.fluid.layers.swish
+.. autofunction:: paddle.fluid.layers.swish
 :noindex:
 tensor
@@ -740,66 +740,66 @@ tensor
 create_tensor
 -------------
-.. autofunction:: paddle.v2.fluid.layers.create_tensor
+.. autofunction:: paddle.fluid.layers.create_tensor
 :noindex:
 create_parameter
 ----------------
-.. autofunction:: paddle.v2.fluid.layers.create_parameter
+.. autofunction:: paddle.fluid.layers.create_parameter
 :noindex:
 create_global_var
 -----------------
-.. autofunction:: paddle.v2.fluid.layers.create_global_var
+.. autofunction:: paddle.fluid.layers.create_global_var
 :noindex:
 cast
 ----
-.. autofunction:: paddle.v2.fluid.layers.cast
+.. autofunction:: paddle.fluid.layers.cast
 :noindex:
 concat
 ------
-.. autofunction:: paddle.v2.fluid.layers.concat
+.. autofunction:: paddle.fluid.layers.concat
 :noindex:
 sums
 ----
-.. autofunction:: paddle.v2.fluid.layers.sums
+.. autofunction:: paddle.fluid.layers.sums
 :noindex:
 assign
 ------
-.. autofunction:: paddle.v2.fluid.layers.assign
+.. autofunction:: paddle.fluid.layers.assign
 :noindex:
 fill_constant_batch_size_like
 -----------------------------
-.. autofunction:: paddle.v2.fluid.layers.fill_constant_batch_size_like
+.. autofunction:: paddle.fluid.layers.fill_constant_batch_size_like
 :noindex:
 fill_constant
 -------------
-.. autofunction:: paddle.v2.fluid.layers.fill_constant
+.. autofunction:: paddle.fluid.layers.fill_constant
 :noindex:
 ones
 ----
-.. autofunction:: paddle.v2.fluid.layers.ones
+.. autofunction:: paddle.fluid.layers.ones
 :noindex:
 zeros
 -----
-.. autofunction:: paddle.v2.fluid.layers.zeros
+.. autofunction:: paddle.fluid.layers.zeros
 :noindex:
@@ -8,24 +8,24 @@ nets
 simple_img_conv_pool
 --------------------
-.. autofunction:: paddle.v2.fluid.nets.simple_img_conv_pool
+.. autofunction:: paddle.fluid.nets.simple_img_conv_pool
 :noindex:
 sequence_conv_pool
 ------------------
-.. autofunction:: paddle.v2.fluid.nets.sequence_conv_pool
+.. autofunction:: paddle.fluid.nets.sequence_conv_pool
 :noindex:
 glu
 ---
-.. autofunction:: paddle.v2.fluid.nets.glu
+.. autofunction:: paddle.fluid.nets.glu
 :noindex:
 scaled_dot_product_attention
 ----------------------------
-.. autofunction:: paddle.v2.fluid.nets.scaled_dot_product_attention
+.. autofunction:: paddle.fluid.nets.scaled_dot_product_attention
 :noindex:
@@ -8,42 +8,42 @@ optimizer
 SGD
 ---
-.. autoclass:: paddle.v2.fluid.optimizer.SGD
+.. autoclass:: paddle.fluid.optimizer.SGD
 :members:
 :noindex:
 Momentum
 --------
-.. autoclass:: paddle.v2.fluid.optimizer.Momentum
+.. autoclass:: paddle.fluid.optimizer.Momentum
 :members:
 :noindex:
 Adagrad
 -------
-.. autoclass:: paddle.v2.fluid.optimizer.Adagrad
+.. autoclass:: paddle.fluid.optimizer.Adagrad
 :members:
 :noindex:
 Adam
 ----
-.. autoclass:: paddle.v2.fluid.optimizer.Adam
+.. autoclass:: paddle.fluid.optimizer.Adam
 :members:
 :noindex:
 Adamax
 ------
-.. autoclass:: paddle.v2.fluid.optimizer.Adamax
+.. autoclass:: paddle.fluid.optimizer.Adamax
 :members:
 :noindex:
 DecayedAdagrad
 --------------
-.. autoclass:: paddle.v2.fluid.optimizer.DecayedAdagrad
+.. autoclass:: paddle.fluid.optimizer.DecayedAdagrad
 :members:
 :noindex:
@@ -8,14 +8,14 @@ param_attr
 ParamAttr
 ---------
-.. autoclass:: paddle.v2.fluid.param_attr.ParamAttr
+.. autoclass:: paddle.fluid.param_attr.ParamAttr
 :members:
 :noindex:
 WeightNormParamAttr
 -------------------
-.. autoclass:: paddle.v2.fluid.param_attr.WeightNormParamAttr
+.. autoclass:: paddle.fluid.param_attr.WeightNormParamAttr
 :members:
 :noindex:
@@ -8,18 +8,18 @@ profiler
 cuda_profiler
 -------------
-.. autofunction:: paddle.v2.fluid.profiler.cuda_profiler
+.. autofunction:: paddle.fluid.profiler.cuda_profiler
 :noindex:
 reset_profiler
 --------------
-.. autofunction:: paddle.v2.fluid.profiler.reset_profiler
+.. autofunction:: paddle.fluid.profiler.reset_profiler
 :noindex:
 profiler
 --------
-.. autofunction:: paddle.v2.fluid.profiler.profiler
+.. autofunction:: paddle.fluid.profiler.profiler
 :noindex:
@@ -8,20 +8,20 @@ regularizer
 append_regularization_ops
 -------------------------
-.. autofunction:: paddle.v2.fluid.regularizer.append_regularization_ops
+.. autofunction:: paddle.fluid.regularizer.append_regularization_ops
 :noindex:
 L1Decay
 -------
-.. autoclass:: paddle.v2.fluid.regularizer.L1Decay
+.. autoclass:: paddle.fluid.regularizer.L1Decay
 :members:
 :noindex:
 L2Decay
 -------
-.. autoclass:: paddle.v2.fluid.regularizer.L2Decay
+.. autoclass:: paddle.fluid.regularizer.L2Decay
 :members:
 :noindex:
@@ -179,7 +179,7 @@
 <h2>DataFeeder<a class="headerlink" href="#datafeeder" title="Permalink to this headline"></a></h2>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.data_feeder.</code><code class="descname">DataFeeder</code><span class="sig-paren">(</span><em>feed_list</em>, <em>place</em>, <em>program=None</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.data_feeder.</code><code class="descname">DataFeeder</code><span class="sig-paren">(</span><em>feed_list</em>, <em>place</em>, <em>program=None</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
...
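For readers following the rename, here is a minimal sketch of how DataFeeder is typically used under the new paddle.fluid import path; the layer names and the toy mini-batch are illustrative assumptions, not part of this commit:

    import paddle.fluid as fluid

    # Two graph inputs that the feeder will convert into a feed dict.
    image = fluid.layers.data(name='image', shape=[784], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')

    place = fluid.CPUPlace()
    feeder = fluid.DataFeeder(feed_list=[image, label], place=place)

    # One mini-batch: a list of samples, each ordered like feed_list.
    minibatch = [([0.0] * 784, [0]), ([1.0] * 784, [1])]
    feed_dict = feeder.feed(minibatch)  # maps 'image'/'label' to LoDTensors

    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())
    exe.run(fluid.default_main_program(), feed=feed_dict)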
@@ -179,7 +179,7 @@
 <h2>Accuracy<a class="headerlink" href="#accuracy" title="Permalink to this headline"></a></h2>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.evaluator.</code><code class="descname">Accuracy</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>k=1</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.evaluator.</code><code class="descname">Accuracy</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>k=1</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
 <dd><p>Average Accuracy for multiple mini-batches.</p>
 </dd></dl>
@@ -188,7 +188,7 @@
 <h2>ChunkEvaluator<a class="headerlink" href="#chunkevaluator" title="Permalink to this headline"></a></h2>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.evaluator.</code><code class="descname">ChunkEvaluator</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>chunk_scheme</em>, <em>num_chunk_types</em>, <em>excluded_chunk_types=None</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.evaluator.</code><code class="descname">ChunkEvaluator</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>chunk_scheme</em>, <em>num_chunk_types</em>, <em>excluded_chunk_types=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Accumulate counter numbers output by chunk_eval from mini-batches and
 compute the precision recall and F1-score using the accumulated counter
 numbers.</p>
...
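As a rough, hedged sketch of how the Accuracy evaluator above is wired into a classification network (the layer sizes are assumptions, and the reset/eval calls in the comments follow the usual evaluator pattern rather than anything stated in this commit):

    import paddle.fluid as fluid

    image = fluid.layers.data(name='image', shape=[784], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    predict = fluid.layers.fc(input=image, size=10, act='softmax')

    # Accuracy accumulates correct/total counts across mini-batches.
    accuracy = fluid.evaluator.Accuracy(input=predict, label=label)

    # In a training loop one would typically call accuracy.reset(exe) at the
    # start of each pass and accuracy.eval(exe) after running the program on
    # each mini-batch to read the accumulated average accuracy.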
@@ -179,7 +179,7 @@
 <h2>Executor<a class="headerlink" href="#id1" title="Permalink to this headline"></a></h2>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.executor.</code><code class="descname">Executor</code><span class="sig-paren">(</span><em>places</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.executor.</code><code class="descname">Executor</code><span class="sig-paren">(</span><em>places</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
@@ -187,7 +187,7 @@
 <h2>global_scope<a class="headerlink" href="#global-scope" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.executor.</code><code class="descname">global_scope</code><span class="sig-paren">(</span><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.executor.</code><code class="descname">global_scope</code><span class="sig-paren">(</span><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
@@ -195,7 +195,7 @@
 <h2>scope_guard<a class="headerlink" href="#scope-guard" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.executor.</code><code class="descname">scope_guard</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwds</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.executor.</code><code class="descname">scope_guard</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwds</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
@@ -203,7 +203,7 @@
 <h2>switch_scope<a class="headerlink" href="#switch-scope" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.executor.</code><code class="descname">switch_scope</code><span class="sig-paren">(</span><em>scope</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.executor.</code><code class="descname">switch_scope</code><span class="sig-paren">(</span><em>scope</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
...
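A small, hedged sketch of the executor and scope functions listed above; the placeholder network and the random feed data are assumptions made only so the snippet is self-contained:

    import numpy as np
    import paddle.fluid as fluid

    x = fluid.layers.data(name='x', shape=[4], dtype='float32')
    y = fluid.layers.fc(input=x, size=2)

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())        # initialize parameters once

    out, = exe.run(fluid.default_main_program(),
                   feed={'x': np.random.rand(8, 4).astype('float32')},
                   fetch_list=[y])

    # Variables live in a scope; global_scope() is used unless overridden.
    new_scope = fluid.core.Scope()
    with fluid.scope_guard(new_scope):              # switches scopes for this block
        exe.run(fluid.default_startup_program())    # re-initializes into new_scope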
@@ -179,7 +179,7 @@
 <h2>Constant<a class="headerlink" href="#constant" title="Permalink to this headline"></a></h2>
 <dl class="attribute">
 <dt>
-<code class="descclassname">paddle.v2.fluid.initializer.</code><code class="descname">Constant</code></dt>
+<code class="descclassname">paddle.fluid.initializer.</code><code class="descname">Constant</code></dt>
 <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">ConstantInitializer</span></code></p>
 </dd></dl>
@@ -188,7 +188,7 @@
 <h2>Uniform<a class="headerlink" href="#uniform" title="Permalink to this headline"></a></h2>
 <dl class="attribute">
 <dt>
-<code class="descclassname">paddle.v2.fluid.initializer.</code><code class="descname">Uniform</code></dt>
+<code class="descclassname">paddle.fluid.initializer.</code><code class="descname">Uniform</code></dt>
 <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">UniformInitializer</span></code></p>
 </dd></dl>
@@ -197,7 +197,7 @@
 <h2>Normal<a class="headerlink" href="#normal" title="Permalink to this headline"></a></h2>
 <dl class="attribute">
 <dt>
-<code class="descclassname">paddle.v2.fluid.initializer.</code><code class="descname">Normal</code></dt>
+<code class="descclassname">paddle.fluid.initializer.</code><code class="descname">Normal</code></dt>
 <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">NormalInitializer</span></code></p>
 </dd></dl>
@@ -206,7 +206,7 @@
 <h2>Xavier<a class="headerlink" href="#xavier" title="Permalink to this headline"></a></h2>
 <dl class="attribute">
 <dt>
-<code class="descclassname">paddle.v2.fluid.initializer.</code><code class="descname">Xavier</code></dt>
+<code class="descclassname">paddle.fluid.initializer.</code><code class="descname">Xavier</code></dt>
 <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">XavierInitializer</span></code></p>
 </dd></dl>
...
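These initializer aliases are normally passed to a layer through ParamAttr; a short sketch under the new module path (the layer sizes are arbitrary assumptions):

    import paddle.fluid as fluid

    x = fluid.layers.data(name='x', shape=[32], dtype='float32')

    # Weights use the Xavier alias; the bias is filled with a constant.
    hidden = fluid.layers.fc(
        input=x,
        size=64,
        param_attr=fluid.ParamAttr(initializer=fluid.initializer.Xavier()),
        bias_attr=fluid.ParamAttr(initializer=fluid.initializer.Constant(value=0.0)))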
@@ -178,7 +178,7 @@
 <h2>save_vars<a class="headerlink" href="#save-vars" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">save_vars</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>vars=None</em>, <em>predicate=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">save_vars</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>vars=None</em>, <em>predicate=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Save variables to directory by executor.</p>
 <table class="docutils field-list" frame="void" rules="none">
 <col class="field-name" />
@@ -215,7 +215,7 @@ If it is None, save variables to separate files.</p>
 <h2>save_params<a class="headerlink" href="#save-params" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">save_params</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">save_params</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Save all parameters to directory with executor.</p>
 </dd></dl>
@@ -224,7 +224,7 @@ If it is None, save variables to separate files.</p>
 <h2>save_persistables<a class="headerlink" href="#save-persistables" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">save_persistables</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">save_persistables</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Save all persistables to directory with executor.</p>
 </dd></dl>
@@ -233,7 +233,7 @@ If it is None, save variables to separate files.</p>
 <h2>load_vars<a class="headerlink" href="#load-vars" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">load_vars</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>vars=None</em>, <em>predicate=None</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">load_vars</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>vars=None</em>, <em>predicate=None</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Load variables from directory by executor.</p>
 <table class="docutils field-list" frame="void" rules="none">
 <col class="field-name" />
@@ -270,7 +270,7 @@ If it is None, load variables from separate files.</p>
 <h2>load_params<a class="headerlink" href="#load-params" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">load_params</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">load_params</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>load all parameters from directory by executor.</p>
 </dd></dl>
@@ -279,7 +279,7 @@ If it is None, load variables from separate files.</p>
 <h2>load_persistables<a class="headerlink" href="#load-persistables" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">load_persistables</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">load_persistables</code><span class="sig-paren">(</span><em>executor</em>, <em>dirname</em>, <em>main_program=None</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>load all persistables from directory by executor.</p>
 </dd></dl>
@@ -288,7 +288,7 @@ If it is None, load variables from separate files.</p>
 <h2>save_inference_model<a class="headerlink" href="#save-inference-model" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">save_inference_model</code><span class="sig-paren">(</span><em>dirname</em>, <em>feeded_var_names</em>, <em>target_vars</em>, <em>executor</em>, <em>main_program=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">save_inference_model</code><span class="sig-paren">(</span><em>dirname</em>, <em>feeded_var_names</em>, <em>target_vars</em>, <em>executor</em>, <em>main_program=None</em>, <em>save_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Build a model especially for inference,
 and save it to directory by the executor.</p>
 <table class="docutils field-list" frame="void" rules="none">
@@ -324,7 +324,7 @@ Default default_main_program().</li>
 <h2>load_inference_model<a class="headerlink" href="#load-inference-model" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">load_inference_model</code><span class="sig-paren">(</span><em>dirname</em>, <em>executor</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">load_inference_model</code><span class="sig-paren">(</span><em>dirname</em>, <em>executor</em>, <em>load_file_name=None</em><span class="sig-paren">)</span></dt>
 <dd><p>Load inference model from a directory</p>
 <table class="docutils field-list" frame="void" rules="none">
 <col class="field-name" />
@@ -358,7 +358,7 @@ fetch_targets: Variables from which we can get inference results.</td>
 <h2>get_inference_program<a class="headerlink" href="#get-inference-program" title="Permalink to this headline"></a></h2>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.io.</code><code class="descname">get_inference_program</code><span class="sig-paren">(</span><em>target_vars</em>, <em>main_program=None</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.io.</code><code class="descname">get_inference_program</code><span class="sig-paren">(</span><em>target_vars</em>, <em>main_program=None</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
...
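To make the save/load pair above concrete, here is a hedged end-to-end sketch that saves a pruned inference program and loads it back; the directory name and the toy network are assumptions for illustration only:

    import numpy as np
    import paddle.fluid as fluid

    image = fluid.layers.data(name='image', shape=[784], dtype='float32')
    predict = fluid.layers.fc(input=image, size=10, act='softmax')

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())

    # Persist only what inference needs: the pruned program plus its parameters.
    fluid.io.save_inference_model(dirname='./infer_model',
                                  feeded_var_names=['image'],
                                  target_vars=[predict],
                                  executor=exe)

    # Later (possibly in another process) restore the program and run it.
    program, feed_names, fetch_targets = fluid.io.load_inference_model(
        dirname='./infer_model', executor=exe)
    result, = exe.run(program,
                      feed={feed_names[0]: np.random.rand(1, 784).astype('float32')},
                      fetch_list=fetch_targets)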
@@ -181,7 +181,7 @@
 <h3>split_lod_tensor<a class="headerlink" href="#split-lod-tensor" title="Permalink to this headline"></a></h3>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">split_lod_tensor</code><span class="sig-paren">(</span><em>input</em>, <em>mask</em>, <em>level=0</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.layers.</code><code class="descname">split_lod_tensor</code><span class="sig-paren">(</span><em>input</em>, <em>mask</em>, <em>level=0</em><span class="sig-paren">)</span></dt>
 <dd><p><strong>split_lod_tensor</strong></p>
 <p>This function takes in an input that contains the complete lod information,
 and takes in a mask which is used to mask certain parts of the input.
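A rough sketch of split_lod_tensor together with merge_lod_tensor (documented in the next hunk); the mask construction via fill_constant_batch_size_like and less_than, and the unpacking of a true/false pair, are assumptions made to keep the example self-contained:

    import paddle.fluid as fluid

    x = fluid.layers.data(name='x', shape=[1], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')

    # Boolean mask with one entry per row of the batch.
    limit = fluid.layers.fill_constant_batch_size_like(
        input=y, shape=[1], dtype='float32', value=0.5)
    mask = fluid.layers.less_than(x=y, y=limit)

    # Rows whose mask is True go to out_true, the others to out_false ...
    out_true, out_false = fluid.layers.split_lod_tensor(input=x, mask=mask)
    # ... and merge_lod_tensor reassembles them in the original order.
    out = fluid.layers.merge_lod_tensor(in_true=out_true, in_false=out_false,
                                        x=x, mask=mask)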
@@ -226,7 +226,7 @@ Variable: The false branch of tensor as per the mask applied to input.</p>
 <h3>merge_lod_tensor<a class="headerlink" href="#merge-lod-tensor" title="Permalink to this headline"></a></h3>
 <dl class="function">
 <dt>
-<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">merge_lod_tensor</code><span class="sig-paren">(</span><em>in_true</em>, <em>in_false</em>, <em>x</em>, <em>mask</em>, <em>level=0</em><span class="sig-paren">)</span></dt>
+<code class="descclassname">paddle.fluid.layers.</code><code class="descname">merge_lod_tensor</code><span class="sig-paren">(</span><em>in_true</em>, <em>in_false</em>, <em>x</em>, <em>mask</em>, <em>level=0</em><span class="sig-paren">)</span></dt>
 <dd><p><strong>merge_lod_tensor</strong></p>
 <p>This function takes in an input <span class="math">\(x\)</span>, the True branch, the False
 branch and a binary <span class="math">\(mask\)</span>. Using this information, this function
@@ -275,7 +275,7 @@ lod information needed to construct the output.</li>
 <h3>BlockGuard<a class="headerlink" href="#blockguard" title="Permalink to this headline"></a></h3>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">BlockGuard</code><span class="sig-paren">(</span><em>main_program</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">BlockGuard</code><span class="sig-paren">(</span><em>main_program</em><span class="sig-paren">)</span></dt>
 <dd><p>BlockGuard class.</p>
 <p>BlockGuard class is used to create a sub-block in a program by
 using the Python <cite>with</cite> keyword.</p>
@@ -286,7 +286,7 @@ using the Python <cite>with</cite> keyword.</p>
 <h3>BlockGuardWithCompletion<a class="headerlink" href="#blockguardwithcompletion" title="Permalink to this headline"></a></h3>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">BlockGuardWithCompletion</code><span class="sig-paren">(</span><em>rnn</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">BlockGuardWithCompletion</code><span class="sig-paren">(</span><em>rnn</em><span class="sig-paren">)</span></dt>
 <dd><p>BlockGuardWithCompletion class.</p>
 <p>BlockGuardWithCompletion class is used to create an op with a block in a program.</p>
 </dd></dl>
@@ -296,7 +296,7 @@ using the Python <cite>with</cite> keyword.</p>
 <h3>StaticRNNMemoryLink<a class="headerlink" href="#staticrnnmemorylink" title="Permalink to this headline"></a></h3>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">StaticRNNMemoryLink</code><span class="sig-paren">(</span><em>init</em>, <em>pre_mem</em>, <em>mem=None</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">StaticRNNMemoryLink</code><span class="sig-paren">(</span><em>init</em>, <em>pre_mem</em>, <em>mem=None</em><span class="sig-paren">)</span></dt>
 <dd><p>StaticRNNMemoryLink class.</p>
 <table class="docutils field-list" frame="void" rules="none">
 <col class="field-name" />
@@ -323,7 +323,7 @@ memory cells of a StaticRNN.</p>
 <h3>WhileGuard<a class="headerlink" href="#whileguard" title="Permalink to this headline"></a></h3>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">WhileGuard</code><span class="sig-paren">(</span><em>while_op</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">WhileGuard</code><span class="sig-paren">(</span><em>while_op</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
@@ -331,7 +331,7 @@ memory cells of a StaticRNN.</p>
 <h3>While<a class="headerlink" href="#while" title="Permalink to this headline"></a></h3>
 <dl class="class">
 <dt>
-<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">While</code><span class="sig-paren">(</span><em>cond</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
+<em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">While</code><span class="sig-paren">(</span><em>cond</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
 <dd></dd></dl>
 </div>
...@@ -339,7 +339,7 @@ memory cells of a StaticRNN.</p> ...@@ -339,7 +339,7 @@ memory cells of a StaticRNN.</p>
<h3>lod_rank_table<a class="headerlink" href="#lod-rank-table" title="Permalink to this headline"></a></h3> <h3>lod_rank_table<a class="headerlink" href="#lod-rank-table" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">lod_rank_table</code><span class="sig-paren">(</span><em>x</em>, <em>level=0</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">lod_rank_table</code><span class="sig-paren">(</span><em>x</em>, <em>level=0</em><span class="sig-paren">)</span></dt>
<dd><p>LoD Rank Table Operator. Given an input variable <strong>x</strong> and a level number <dd><p>LoD Rank Table Operator. Given an input variable <strong>x</strong> and a level number
of LoD, this layer creates a LoDRankTable object. A LoDRankTable object of LoD, this layer creates a LoDRankTable object. A LoDRankTable object
contains a list of two-element tuples. Each tuple consists of an index and contains a list of two-element tuples. Each tuple consists of an index and
...@@ -402,7 +402,7 @@ table.</li> ...@@ -402,7 +402,7 @@ table.</li>
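<p>A minimal usage sketch (not part of the original reference; the variable name, shape and LoD level are illustrative). The resulting table is typically consumed by <code>lod_tensor_to_array</code> or <code>reorder_lod_tensor_by_rank</code>, documented further down:</p>
<pre class="literal-block">
import paddle.fluid as fluid

# x holds a batch of variable-length sequences (1-level LoD)
x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)

# build a table of (index, length) pairs sorted by sequence length, descending
table = fluid.layers.lod_rank_table(x, level=0)
</pre>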
<h3>max_sequence_len<a class="headerlink" href="#max-sequence-len" title="Permalink to this headline"></a></h3> <h3>max_sequence_len<a class="headerlink" href="#max-sequence-len" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">max_sequence_len</code><span class="sig-paren">(</span><em>rank_table</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">max_sequence_len</code><span class="sig-paren">(</span><em>rank_table</em><span class="sig-paren">)</span></dt>
<dd><p>Max Sequence Len Operator. Given a LoDRankTable object, this layer <dd><p>Max Sequence Len Operator. Given a LoDRankTable object, this layer
returns the max length of a batch of sequences. In fact, a LoDRankTable returns the max length of a batch of sequences. In fact, a LoDRankTable
object contains a list of tuples (&lt;sequence index, sequence length&gt;) and object contains a list of tuples (&lt;sequence index, sequence length&gt;) and
...@@ -434,7 +434,7 @@ operator just returns the sequence length of the first tuple element.</p> ...@@ -434,7 +434,7 @@ operator just returns the sequence length of the first tuple element.</p>
<h3>topk<a class="headerlink" href="#topk" title="Permalink to this headline"></a></h3> <h3>topk<a class="headerlink" href="#topk" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">topk</code><span class="sig-paren">(</span><em>input</em>, <em>k</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">topk</code><span class="sig-paren">(</span><em>input</em>, <em>k</em><span class="sig-paren">)</span></dt>
<dd><p><strong>topk</strong></p> <dd><p><strong>topk</strong></p>
<p>This function performs the operation that selects the k largest entries in the input <p>This function performs the operation that selects the k largest entries in the input
vector and outputs their values and indices as vectors. Thus topk_out[j] is vector and outputs their values and indices as vectors. Thus topk_out[j] is
...@@ -478,7 +478,7 @@ the j-th largest entry in input, and its index is topk_indices[j]</p> ...@@ -478,7 +478,7 @@ the j-th largest entry in input, and its index is topk_indices[j]</p>
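<p>A minimal sketch of the call (illustrative only; the assumption that the layer returns a pair of values and indices follows the description above and may differ between Fluid releases):</p>
<pre class="literal-block">
import paddle.fluid as fluid

# scores: one row of class scores per example
scores = fluid.layers.data(name='scores', shape=[1000], dtype='float32')

# topk_out[j] is the j-th largest score, topk_indices[j] its position in scores
topk_out, topk_indices = fluid.layers.topk(scores, k=5)
</pre>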
<h3>lod_tensor_to_array<a class="headerlink" href="#lod-tensor-to-array" title="Permalink to this headline"></a></h3> <h3>lod_tensor_to_array<a class="headerlink" href="#lod-tensor-to-array" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">lod_tensor_to_array</code><span class="sig-paren">(</span><em>x</em>, <em>table</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">lod_tensor_to_array</code><span class="sig-paren">(</span><em>x</em>, <em>table</em><span class="sig-paren">)</span></dt>
<dd><p>Convert a LOD_TENSOR to an LOD_TENSOR_ARRAY.</p> <dd><p>Convert a LOD_TENSOR to an LOD_TENSOR_ARRAY.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -518,7 +518,7 @@ descending order.</li> ...@@ -518,7 +518,7 @@ descending order.</li>
<h3>array_to_lod_tensor<a class="headerlink" href="#array-to-lod-tensor" title="Permalink to this headline"></a></h3> <h3>array_to_lod_tensor<a class="headerlink" href="#array-to-lod-tensor" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">array_to_lod_tensor</code><span class="sig-paren">(</span><em>x</em>, <em>table</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">array_to_lod_tensor</code><span class="sig-paren">(</span><em>x</em>, <em>table</em><span class="sig-paren">)</span></dt>
<dd><p>Convert a LoDTensorArray to a LoDTensor.</p> <dd><p>Convert a LoDTensorArray to a LoDTensor.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -559,7 +559,7 @@ descending order.</li> ...@@ -559,7 +559,7 @@ descending order.</li>
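<p>A round-trip sketch combining this layer with <code>lod_rank_table</code> and <code>lod_tensor_to_array</code> from above (names and shapes are illustrative, not from the original page):</p>
<pre class="literal-block">
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
table = fluid.layers.lod_rank_table(x, level=0)

# split the LoDTensor into an array of tensors, ordered by the rank table ...
array = fluid.layers.lod_tensor_to_array(x, table)

# ... and merge the array back into a LoDTensor using the same table
out = fluid.layers.array_to_lod_tensor(array, table)
</pre>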
<h3>increment<a class="headerlink" href="#increment" title="Permalink to this headline"></a></h3> <h3>increment<a class="headerlink" href="#increment" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">increment</code><span class="sig-paren">(</span><em>x</em>, <em>value=1.0</em>, <em>in_place=True</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">increment</code><span class="sig-paren">(</span><em>x</em>, <em>value=1.0</em>, <em>in_place=True</em><span class="sig-paren">)</span></dt>
<dd><p>This function increments each value in the input <span class="math">\(x\)</span> by the <dd><p>This function increments each value in the input <span class="math">\(x\)</span> by the
amount <span class="math">\(value\)</span> given in the input parameter. By default, this amount <span class="math">\(value\)</span> given in the input parameter. By default, this
operation is performed in-place.</p> operation is performed in-place.</p>
...@@ -599,7 +599,7 @@ parameter. This operation is performed in-place by default.</p> ...@@ -599,7 +599,7 @@ parameter. This operation is performed in-place by default.</p>
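<p>A minimal sketch of a counter built with this layer (illustrative; <code>fill_constant</code> is only used here to create the variable being incremented):</p>
<pre class="literal-block">
import paddle.fluid as fluid

# a scalar counter initialised to zero
counter = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)

# counter becomes 1.0; with in_place=True the input variable itself is updated
fluid.layers.increment(x=counter, value=1.0, in_place=True)
</pre>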
<h3>array_write<a class="headerlink" href="#array-write" title="Permalink to this headline"></a></h3> <h3>array_write<a class="headerlink" href="#array-write" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">array_write</code><span class="sig-paren">(</span><em>x</em>, <em>i</em>, <em>array=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">array_write</code><span class="sig-paren">(</span><em>x</em>, <em>i</em>, <em>array=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function writes the given input variable to the position <dd><p>This function writes the given input variable to the position
indicated by the array index in an output LOD_TENSOR_ARRAY. If the indicated by the array index in an output LOD_TENSOR_ARRAY. If the
output LOD_TENSOR_ARRAY is not given (None), a new one will be created and output LOD_TENSOR_ARRAY is not given (None), a new one will be created and
...@@ -636,7 +636,7 @@ returned.</li> ...@@ -636,7 +636,7 @@ returned.</li>
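<p>A minimal sketch showing <code>array_write</code> together with <code>array_read</code> and <code>array_length</code>, which are documented below (names and shapes are illustrative):</p>
<pre class="literal-block">
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)

# write x at position i; a new LOD_TENSOR_ARRAY is created because array=None
arr = fluid.layers.array_write(x=x, i=i)

# read the element back and query the current array length
item = fluid.layers.array_read(array=arr, i=i)
length = fluid.layers.array_length(arr)
</pre>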
<h3>create_array<a class="headerlink" href="#create-array" title="Permalink to this headline"></a></h3> <h3>create_array<a class="headerlink" href="#create-array" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">create_array</code><span class="sig-paren">(</span><em>dtype</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">create_array</code><span class="sig-paren">(</span><em>dtype</em><span class="sig-paren">)</span></dt>
<dd><p>This function creates an array of type <span class="math">\(LOD_TENSOR_ARRAY\)</span> using the <dd><p>This function creates an array of type <span class="math">\(LOD_TENSOR_ARRAY\)</span> using the
LayerHelper.</p> LayerHelper.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -662,7 +662,7 @@ LayerHelper.</p> ...@@ -662,7 +662,7 @@ LayerHelper.</p>
<h3>less_than<a class="headerlink" href="#less-than" title="Permalink to this headline"></a></h3> <h3>less_than<a class="headerlink" href="#less-than" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">less_than</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>cond=None</em>, <em>**ignored</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">less_than</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>cond=None</em>, <em>**ignored</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Less than</strong></p> <dd><p><strong>Less than</strong></p>
<p>This layer returns the truth value of <span class="math">\(x &lt; y\)</span> elementwise.</p> <p>This layer returns the truth value of <span class="math">\(x &lt; y\)</span> elementwise.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -695,7 +695,7 @@ LayerHelper.</p> ...@@ -695,7 +695,7 @@ LayerHelper.</p>
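<p>A sketch of the usual pattern that combines <code>less_than</code> with the <code>While</code> class documented above: the condition is computed once before the loop and recomputed into the same variable at the end of each iteration (illustrative only):</p>
<pre class="literal-block">
import paddle.fluid as fluid

i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)

# cond holds the elementwise truth value of "i less than limit"
cond = fluid.layers.less_than(x=i, y=limit)

while_op = fluid.layers.While(cond=cond)
with while_op.block():
    fluid.layers.increment(x=i, value=1.0, in_place=True)
    # refresh the loop condition in place so While sees the updated value
    fluid.layers.less_than(x=i, y=limit, cond=cond)
</pre>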
<h3>array_read<a class="headerlink" href="#array-read" title="Permalink to this headline"></a></h3> <h3>array_read<a class="headerlink" href="#array-read" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">array_read</code><span class="sig-paren">(</span><em>array</em>, <em>i</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">array_read</code><span class="sig-paren">(</span><em>array</em>, <em>i</em><span class="sig-paren">)</span></dt>
<dd><p>This function reads the data at the given position from the input <dd><p>This function reads the data at the given position from the input
LOD_TENSOR_ARRAY. LOD_TENSOR_ARRAY.
:param array: The input LOD_TENSOR_ARRAY that the data is read from. :param array: The input LOD_TENSOR_ARRAY that the data is read from.
...@@ -721,7 +721,7 @@ LOD_TENSOR_ARRAY. ...@@ -721,7 +721,7 @@ LOD_TENSOR_ARRAY.
<h3>shrink_memory<a class="headerlink" href="#shrink-memory" title="Permalink to this headline"></a></h3> <h3>shrink_memory<a class="headerlink" href="#shrink-memory" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">shrink_memory</code><span class="sig-paren">(</span><em>x</em>, <em>i</em>, <em>table</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">shrink_memory</code><span class="sig-paren">(</span><em>x</em>, <em>i</em>, <em>table</em><span class="sig-paren">)</span></dt>
<dd><p>This function creates a shrink_rnn_memory operator using the RankTable <dd><p>This function creates a shrink_rnn_memory operator using the RankTable
given in the input parameters.</p> given in the input parameters.</p>
</dd></dl> </dd></dl>
...@@ -731,7 +731,7 @@ as mentioned in the input parameter.</p> ...@@ -731,7 +731,7 @@ as mentioned in the input parameter.</p>
<h3>array_length<a class="headerlink" href="#array-length" title="Permalink to this headline"></a></h3> <h3>array_length<a class="headerlink" href="#array-length" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">array_length</code><span class="sig-paren">(</span><em>array</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">array_length</code><span class="sig-paren">(</span><em>array</em><span class="sig-paren">)</span></dt>
<dd><p>This function performs the operation to find the length of the input <dd><p>This function performs the operation to find the length of the input
LOD_TENSOR_ARRAY.</p> LOD_TENSOR_ARRAY.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -755,7 +755,7 @@ to compute the length.</td> ...@@ -755,7 +755,7 @@ to compute the length.</td>
<h3>IfElse<a class="headerlink" href="#ifelse" title="Permalink to this headline"></a></h3> <h3>IfElse<a class="headerlink" href="#ifelse" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">IfElse</code><span class="sig-paren">(</span><em>cond</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">IfElse</code><span class="sig-paren">(</span><em>cond</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -763,7 +763,7 @@ to compute the length.</td> ...@@ -763,7 +763,7 @@ to compute the length.</td>
<h3>DynamicRNN<a class="headerlink" href="#dynamicrnn" title="Permalink to this headline"></a></h3> <h3>DynamicRNN<a class="headerlink" href="#dynamicrnn" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">DynamicRNN</code><span class="sig-paren">(</span><em>name=None</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">DynamicRNN</code><span class="sig-paren">(</span><em>name=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -771,7 +771,7 @@ to compute the length.</td> ...@@ -771,7 +771,7 @@ to compute the length.</td>
<h3>ConditionalBlock<a class="headerlink" href="#conditionalblock" title="Permalink to this headline"></a></h3> <h3>ConditionalBlock<a class="headerlink" href="#conditionalblock" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">ConditionalBlock</code><span class="sig-paren">(</span><em>inputs</em>, <em>is_scalar_condition=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">ConditionalBlock</code><span class="sig-paren">(</span><em>inputs</em>, <em>is_scalar_condition=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -779,7 +779,7 @@ to compute the length.</td> ...@@ -779,7 +779,7 @@ to compute the length.</td>
<h3>StaticRNN<a class="headerlink" href="#staticrnn" title="Permalink to this headline"></a></h3> <h3>StaticRNN<a class="headerlink" href="#staticrnn" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">StaticRNN</code><span class="sig-paren">(</span><em>name=None</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">StaticRNN</code><span class="sig-paren">(</span><em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>StaticRNN class.</p> <dd><p>StaticRNN class.</p>
<p>StaticRNN class is used to create a StaticRNN. The RNN will have its <p>StaticRNN class is used to create a StaticRNN. The RNN will have its
own parameters like inputs, outputs, memories, status and length.</p> own parameters like inputs, outputs, memories, status and length.</p>
...@@ -811,7 +811,7 @@ own parameters like inputs, outputs, memories, status and length.</p> ...@@ -811,7 +811,7 @@ own parameters like inputs, outputs, memories, status and length.</p>
<h3>reorder_lod_tensor_by_rank<a class="headerlink" href="#reorder-lod-tensor-by-rank" title="Permalink to this headline"></a></h3> <h3>reorder_lod_tensor_by_rank<a class="headerlink" href="#reorder-lod-tensor-by-rank" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reorder_lod_tensor_by_rank</code><span class="sig-paren">(</span><em>x</em>, <em>rank_table</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">reorder_lod_tensor_by_rank</code><span class="sig-paren">(</span><em>x</em>, <em>rank_table</em><span class="sig-paren">)</span></dt>
<dd><p>ReorderLoDTensorByRankTable operator.</p> <dd><p>ReorderLoDTensorByRankTable operator.</p>
<p>Input(X) is a batch of sequences. Input(RankTable) stores new orders of the <p>Input(X) is a batch of sequences. Input(RankTable) stores new orders of the
input sequence batch. The reorder_lod_tensor_by_rank operator reorders the input sequence batch. The reorder_lod_tensor_by_rank operator reorders the
...@@ -859,7 +859,7 @@ Duplicable: False Optional: False</li> ...@@ -859,7 +859,7 @@ Duplicable: False Optional: False</li>
<h3>ParallelDo<a class="headerlink" href="#paralleldo" title="Permalink to this headline"></a></h3> <h3>ParallelDo<a class="headerlink" href="#paralleldo" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">ParallelDo</code><span class="sig-paren">(</span><em>places</em>, <em>use_nccl=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">ParallelDo</code><span class="sig-paren">(</span><em>places</em>, <em>use_nccl=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>ParallelDo class.</p> <dd><p>ParallelDo class.</p>
<p>ParallelDo class is used to create a ParallelDo, which runs a block of operators on multiple devices in parallel.</p> <p>ParallelDo class is used to create a ParallelDo, which runs a block of operators on multiple devices in parallel.</p>
</dd></dl> </dd></dl>
...@@ -869,7 +869,7 @@ Duplicable: False Optional: False</li> ...@@ -869,7 +869,7 @@ Duplicable: False Optional: False</li>
<h3>Print<a class="headerlink" href="#print" title="Permalink to this headline"></a></h3> <h3>Print<a class="headerlink" href="#print" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">Print</code><span class="sig-paren">(</span><em>input</em>, <em>first_n=-1</em>, <em>message=None</em>, <em>summarize=-1</em>, <em>print_tensor_name=True</em>, <em>print_tensor_type=True</em>, <em>print_tensor_shape=True</em>, <em>print_tensor_lod=True</em>, <em>print_phase='both'</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">Print</code><span class="sig-paren">(</span><em>input</em>, <em>first_n=-1</em>, <em>message=None</em>, <em>summarize=-1</em>, <em>print_tensor_name=True</em>, <em>print_tensor_type=True</em>, <em>print_tensor_shape=True</em>, <em>print_tensor_lod=True</em>, <em>print_phase='both'</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Print operator</strong></p> <dd><p><strong>Print operator</strong></p>
<p>This creates a print op that will print when a tensor is accessed.</p> <p>This creates a print op that will print when a tensor is accessed.</p>
<p>Wraps the tensor passed in so that whenever the tensor is accessed, <p>Wraps the tensor passed in so that whenever the tensor is accessed,
...@@ -921,7 +921,7 @@ Print(value, summarize=10,</p> ...@@ -921,7 +921,7 @@ Print(value, summarize=10,</p>
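<p>A minimal sketch (illustrative; the message string and the tensor being printed are placeholders):</p>
<pre class="literal-block">
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.fc(input=x, size=1)

# print at most 10 entries of y whenever it is accessed during execution
fluid.layers.Print(y, message='value of y:', summarize=10)
</pre>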
<h3>get_places<a class="headerlink" href="#get-places" title="Permalink to this headline"></a></h3> <h3>get_places<a class="headerlink" href="#get-places" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">get_places</code><span class="sig-paren">(</span><em>device_count=None</em>, <em>device_type=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">get_places</code><span class="sig-paren">(</span><em>device_count=None</em>, <em>device_type=None</em><span class="sig-paren">)</span></dt>
<dd><p>Returns a list of places based on flags. The list will be used for parallel <dd><p>Returns a list of places based on flags. The list will be used for parallel
execution.</p> execution.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -949,7 +949,7 @@ execution.</p> ...@@ -949,7 +949,7 @@ execution.</p>
<h3>data<a class="headerlink" href="#data" title="Permalink to this headline"></a></h3> <h3>data<a class="headerlink" href="#data" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">data</code><span class="sig-paren">(</span><em>name</em>, <em>shape</em>, <em>append_batch_size=True</em>, <em>dtype='float32'</em>, <em>lod_level=0</em>, <em>type=VarType.LOD_TENSOR</em>, <em>stop_gradient=True</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">data</code><span class="sig-paren">(</span><em>name</em>, <em>shape</em>, <em>append_batch_size=True</em>, <em>dtype='float32'</em>, <em>lod_level=0</em>, <em>type=VarType.LOD_TENSOR</em>, <em>stop_gradient=True</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Data Layer</strong></p> <dd><p><strong>Data Layer</strong></p>
<p>This function takes in the input and, based on whether the data has <p>This function takes in the input and, based on whether the data has
to be returned as a minibatch, creates the global variable by using to be returned as a minibatch, creates the global variable by using
...@@ -993,7 +993,7 @@ to the LayerHelper constructor.</p> ...@@ -993,7 +993,7 @@ to the LayerHelper constructor.</p>
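<p>A minimal sketch of declaring network inputs with this layer (shapes and names are illustrative):</p>
<pre class="literal-block">
import paddle.fluid as fluid

# a float feature vector; because append_batch_size defaults to True,
# the batch dimension is prepended automatically, giving shape [-1, 784]
image = fluid.layers.data(name='image', shape=[784], dtype='float32')

# an integer class label per example
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
</pre>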
<h3>BlockGuardServ<a class="headerlink" href="#blockguardserv" title="Permalink to this headline"></a></h3> <h3>BlockGuardServ<a class="headerlink" href="#blockguardserv" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">BlockGuardServ</code><span class="sig-paren">(</span><em>server</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">BlockGuardServ</code><span class="sig-paren">(</span><em>server</em><span class="sig-paren">)</span></dt>
<dd><p>BlockGuardServ class.</p> <dd><p>BlockGuardServ class.</p>
<p>BlockGuardServ class is used to create an op with a block in a program.</p> <p>BlockGuardServ class is used to create an op with a block in a program.</p>
</dd></dl> </dd></dl>
...@@ -1003,7 +1003,7 @@ to the LayerHelper constructor.</p> ...@@ -1003,7 +1003,7 @@ to the LayerHelper constructor.</p>
<h3>ListenAndServ<a class="headerlink" href="#listenandserv" title="Permalink to this headline"></a></h3> <h3>ListenAndServ<a class="headerlink" href="#listenandserv" title="Permalink to this headline"></a></h3>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">ListenAndServ</code><span class="sig-paren">(</span><em>endpoint</em>, <em>fan_in=1</em>, <em>optimizer_mode=True</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.layers.</code><code class="descname">ListenAndServ</code><span class="sig-paren">(</span><em>endpoint</em>, <em>fan_in=1</em>, <em>optimizer_mode=True</em><span class="sig-paren">)</span></dt>
<dd><p>ListenAndServ class.</p> <dd><p>ListenAndServ class.</p>
<p>ListenAndServ class is used to wrap listen_and_serv op to create a server <p>ListenAndServ class is used to wrap listen_and_serv op to create a server
which can receive variables from clients and run a block.</p> which can receive variables from clients and run a block.</p>
...@@ -1014,7 +1014,7 @@ which can receive variables from clients and run a block.</p> ...@@ -1014,7 +1014,7 @@ which can receive variables from clients and run a block.</p>
<h3>Send<a class="headerlink" href="#send" title="Permalink to this headline"></a></h3> <h3>Send<a class="headerlink" href="#send" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">Send</code><span class="sig-paren">(</span><em>endpoints</em>, <em>send_vars</em>, <em>get_vars</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">Send</code><span class="sig-paren">(</span><em>endpoints</em>, <em>send_vars</em>, <em>get_vars</em><span class="sig-paren">)</span></dt>
<dd><p>Send layer</p> <dd><p>Send layer</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -1042,7 +1042,7 @@ side when server have finished running server side program.</p> ...@@ -1042,7 +1042,7 @@ side when server have finished running server side program.</p>
<h3>fc<a class="headerlink" href="#fc" title="Permalink to this headline"></a></h3> <h3>fc<a class="headerlink" href="#fc" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">fc</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>num_flatten_dims=1</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Fully Connected Layer</strong></p> <dd><p><strong>Fully Connected Layer</strong></p>
<p>The fully connected layer can take multiple tensors as its inputs. It <p>The fully connected layer can take multiple tensors as its inputs. It
creates a variable (one for each input tensor) called weights for each creates a variable (one for each input tensor) called weights for each
...@@ -1130,7 +1130,7 @@ layer.</li> ...@@ -1130,7 +1130,7 @@ layer.</li>
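<p>A minimal sketch (sizes are illustrative): a single input tensor, so one weight matrix plus a bias is created internally and the activation is applied to the result.</p>
<pre class="literal-block">
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[32], dtype='float32')

# weight shape [32, 64], bias shape [64], then ReLU
hidden = fluid.layers.fc(input=x, size=64, act='relu')
</pre>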
<h3>embedding<a class="headerlink" href="#embedding" title="Permalink to this headline"></a></h3> <h3>embedding<a class="headerlink" href="#embedding" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>padding_idx=None</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">embedding</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>is_sparse=False</em>, <em>padding_idx=None</em>, <em>param_attr=None</em>, <em>dtype='float32'</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Embedding Layer</strong></p> <dd><p><strong>Embedding Layer</strong></p>
<p>This layer is used to look up embeddings of IDs, provided by <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>, in <p>This layer is used to look up embeddings of IDs, provided by <code class="xref py py-attr docutils literal"><span class="pre">input</span></code>, in
a lookup table. The result of this lookup is the embedding of each ID in the a lookup table. The result of this lookup is the embedding of each ID in the
...@@ -1178,7 +1178,7 @@ with zeros whenever lookup encounters it in <code class="xref py py-attr docutil ...@@ -1178,7 +1178,7 @@ with zeros whenever lookup encounters it in <code class="xref py py-attr docutil
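<p>A minimal sketch (vocabulary size and embedding width are illustrative):</p>
<pre class="literal-block">
import paddle.fluid as fluid

# word ids; the dtype must be an integer type for the table lookup
words = fluid.layers.data(name='words', shape=[1], dtype='int64', lod_level=1)

# lookup table of 10000 entries, each mapped to a 128-dimensional vector
emb = fluid.layers.embedding(input=words, size=[10000, 128], is_sparse=True)
</pre>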
<h3>dynamic_lstm<a class="headerlink" href="#dynamic-lstm" title="Permalink to this headline"></a></h3> <h3>dynamic_lstm<a class="headerlink" href="#dynamic-lstm" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">dynamic_lstm</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>use_peepholes=True</em>, <em>is_reverse=False</em>, <em>gate_activation='sigmoid'</em>, <em>cell_activation='tanh'</em>, <em>candidate_activation='tanh'</em>, <em>dtype='float32'</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">dynamic_lstm</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>use_peepholes=True</em>, <em>is_reverse=False</em>, <em>gate_activation='sigmoid'</em>, <em>cell_activation='tanh'</em>, <em>candidate_activation='tanh'</em>, <em>dtype='float32'</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Dynamic LSTM Layer</strong></p> <dd><p><strong>Dynamic LSTM Layer</strong></p>
<p>The defalut implementation is diagonal/peephole connection <p>The defalut implementation is diagonal/peephole connection
(<a class="reference external" href="https://arxiv.org/pdf/1402.1128.pdf">https://arxiv.org/pdf/1402.1128.pdf</a>), the formula is as follows:</p> (<a class="reference external" href="https://arxiv.org/pdf/1402.1128.pdf">https://arxiv.org/pdf/1402.1128.pdf</a>), the formula is as follows:</p>
...@@ -1285,7 +1285,7 @@ will be named automatically.</li> ...@@ -1285,7 +1285,7 @@ will be named automatically.</li>
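<p>A minimal sketch following the common usage pattern (the hidden size is illustrative; the convention that the input is first projected to four times the hidden width, and that the layer returns a hidden-state/cell-state pair, is an assumption based on the description above):</p>
<pre class="literal-block">
import paddle.fluid as fluid

hidden_dim = 512

# a sequence of 128-dimensional embeddings (1-level LoD)
emb = fluid.layers.data(name='emb', shape=[128], dtype='float32', lod_level=1)

# project to 4 * hidden_dim: one slice per LSTM gate
proj = fluid.layers.fc(input=emb, size=hidden_dim * 4, bias_attr=False)

forward, cell = fluid.layers.dynamic_lstm(input=proj, size=hidden_dim * 4)
</pre>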
<h3>dynamic_lstmp<a class="headerlink" href="#dynamic-lstmp" title="Permalink to this headline"></a></h3> <h3>dynamic_lstmp<a class="headerlink" href="#dynamic-lstmp" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">dynamic_lstmp</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>proj_size</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>use_peepholes=True</em>, <em>is_reverse=False</em>, <em>gate_activation='sigmoid'</em>, <em>cell_activation='tanh'</em>, <em>candidate_activation='tanh'</em>, <em>proj_activation='tanh'</em>, <em>dtype='float32'</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">dynamic_lstmp</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>proj_size</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>use_peepholes=True</em>, <em>is_reverse=False</em>, <em>gate_activation='sigmoid'</em>, <em>cell_activation='tanh'</em>, <em>candidate_activation='tanh'</em>, <em>proj_activation='tanh'</em>, <em>dtype='float32'</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Dynamic LSTMP Layer</strong></p> <dd><p><strong>Dynamic LSTMP Layer</strong></p>
<p>LSTMP (LSTM with recurrent projection) layer has a separate projection <p>LSTMP (LSTM with recurrent projection) layer has a separate projection
layer after the LSTM layer, projecting the original hidden state to a layer after the LSTM layer, projecting the original hidden state to a
...@@ -1410,7 +1410,7 @@ will be named automatically.</li> ...@@ -1410,7 +1410,7 @@ will be named automatically.</li>
<h3>dynamic_gru<a class="headerlink" href="#dynamic-gru" title="Permalink to this headline"></a></h3> <h3>dynamic_gru<a class="headerlink" href="#dynamic-gru" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">dynamic_gru</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>is_reverse=False</em>, <em>gate_activation='sigmoid'</em>, <em>candidate_activation='tanh'</em>, <em>h_0=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">dynamic_gru</code><span class="sig-paren">(</span><em>input</em>, <em>size</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>is_reverse=False</em>, <em>gate_activation='sigmoid'</em>, <em>candidate_activation='tanh'</em>, <em>h_0=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Dynamic GRU Layer</strong></p> <dd><p><strong>Dynamic GRU Layer</strong></p>
<p>Refer to <a class="reference external" href="https://arxiv.org/abs/1412.3555">Empirical Evaluation of Gated Recurrent Neural Networks on <p>Refer to <a class="reference external" href="https://arxiv.org/abs/1412.3555">Empirical Evaluation of Gated Recurrent Neural Networks on
Sequence Modeling</a></p> Sequence Modeling</a></p>
...@@ -1478,7 +1478,7 @@ Choices = [&#8220;sigmoid&#8221;, &#8220;tanh&#8221;, &#8220;relu&#8221;, &#8220 ...@@ -1478,7 +1478,7 @@ Choices = [&#8220;sigmoid&#8221;, &#8220;tanh&#8221;, &#8220;relu&#8221;, &#8220
<h3>gru_unit<a class="headerlink" href="#gru-unit" title="Permalink to this headline"></a></h3> <h3>gru_unit<a class="headerlink" href="#gru-unit" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">gru_unit</code><span class="sig-paren">(</span><em>input</em>, <em>hidden</em>, <em>size</em>, <em>weight=None</em>, <em>bias=None</em>, <em>activation='tanh'</em>, <em>gate_activation='sigmoid'</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">gru_unit</code><span class="sig-paren">(</span><em>input</em>, <em>hidden</em>, <em>size</em>, <em>weight=None</em>, <em>bias=None</em>, <em>activation='tanh'</em>, <em>gate_activation='sigmoid'</em><span class="sig-paren">)</span></dt>
<dd><p>GRU unit layer. The equation of a GRU step is:</p> <dd><p>GRU unit layer. The equation of a GRU step is:</p>
<blockquote> <blockquote>
<div><div class="math"> <div><div class="math">
...@@ -1533,7 +1533,7 @@ Default: &#8216;sigmoid&#8217;</li> ...@@ -1533,7 +1533,7 @@ Default: &#8216;sigmoid&#8217;</li>
<h3>linear_chain_crf<a class="headerlink" href="#linear-chain-crf" title="Permalink to this headline"></a></h3> <h3>linear_chain_crf<a class="headerlink" href="#linear-chain-crf" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">linear_chain_crf</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>param_attr=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">linear_chain_crf</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>param_attr=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -1541,7 +1541,7 @@ Default: &#8216;sigmoid&#8217;</li> ...@@ -1541,7 +1541,7 @@ Default: &#8216;sigmoid&#8217;</li>
<h3>crf_decoding<a class="headerlink" href="#crf-decoding" title="Permalink to this headline"></a></h3> <h3>crf_decoding<a class="headerlink" href="#crf-decoding" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">crf_decoding</code><span class="sig-paren">(</span><em>input</em>, <em>param_attr</em>, <em>label=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">crf_decoding</code><span class="sig-paren">(</span><em>input</em>, <em>param_attr</em>, <em>label=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -1549,7 +1549,7 @@ Default: &#8216;sigmoid&#8217;</li> ...@@ -1549,7 +1549,7 @@ Default: &#8216;sigmoid&#8217;</li>
<h3>cos_sim<a class="headerlink" href="#cos-sim" title="Permalink to this headline"></a></h3> <h3>cos_sim<a class="headerlink" href="#cos-sim" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">cos_sim</code><span class="sig-paren">(</span><em>X</em>, <em>Y</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">cos_sim</code><span class="sig-paren">(</span><em>X</em>, <em>Y</em><span class="sig-paren">)</span></dt>
<dd><p>This function computes the cosine similarity between two tensors <dd><p>This function computes the cosine similarity between two tensors
X and Y and returns the result as the output.</p> X and Y and returns the result as the output.</p>
</dd></dl> </dd></dl>
...@@ -1559,7 +1559,7 @@ X and Y and returns that as the output.</p> ...@@ -1559,7 +1559,7 @@ X and Y and returns that as the output.</p>
<h3>cross_entropy<a class="headerlink" href="#cross-entropy" title="Permalink to this headline"></a></h3> <h3>cross_entropy<a class="headerlink" href="#cross-entropy" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">cross_entropy</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>soft_label=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">cross_entropy</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>soft_label=False</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Cross Entropy Layer</strong></p> <dd><p><strong>Cross Entropy Layer</strong></p>
<p>This layer computes the cross entropy between <cite>input</cite> and <cite>label</cite>. It <p>This layer computes the cross entropy between <cite>input</cite> and <cite>label</cite>. It
supports both standard cross-entropy and soft-label cross-entropy loss supports both standard cross-entropy and soft-label cross-entropy loss
...@@ -1642,7 +1642,7 @@ labels, default <cite>False</cite>.</li> ...@@ -1642,7 +1642,7 @@ labels, default <cite>False</cite>.</li>
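<p>A minimal sketch of the standard (hard-label) case (layer sizes are illustrative; <code>mean</code> is used only to reduce the per-example costs to a scalar):</p>
<pre class="literal-block">
import paddle.fluid as fluid

image = fluid.layers.data(name='image', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')

# label holds class indices, so soft_label is left at its default (False)
predict = fluid.layers.fc(input=image, size=10, act='softmax')
cost = fluid.layers.cross_entropy(input=predict, label=label)
avg_cost = fluid.layers.mean(x=cost)
</pre>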
<h3>square_error_cost<a class="headerlink" href="#square-error-cost" title="Permalink to this headline"></a></h3> <h3>square_error_cost<a class="headerlink" href="#square-error-cost" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">square_error_cost</code><span class="sig-paren">(</span><em>input</em>, <em>label</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">square_error_cost</code><span class="sig-paren">(</span><em>input</em>, <em>label</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Square error cost layer</strong></p> <dd><p><strong>Square error cost layer</strong></p>
<p>This layer accepts input predictions and target label and returns the <p>This layer accepts input predictions and target label and returns the
squared error cost.</p> squared error cost.</p>
...@@ -1688,7 +1688,7 @@ squared error cost.</p> ...@@ -1688,7 +1688,7 @@ squared error cost.</p>
<h3>accuracy<a class="headerlink" href="#accuracy" title="Permalink to this headline"></a></h3> <h3>accuracy<a class="headerlink" href="#accuracy" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">accuracy</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>k=1</em>, <em>correct=None</em>, <em>total=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">accuracy</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>k=1</em>, <em>correct=None</em>, <em>total=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function computes the accuracy using the input and label. <dd><p>This function computes the accuracy using the input and label.
Internally it takes the top_k entries of the input and their indices and compares them with the label.</p> Internally it takes the top_k entries of the input and their indices and compares them with the label.</p>
</dd></dl> </dd></dl>
...@@ -1698,7 +1698,7 @@ The output is the top_k inputs and their indices.</p> ...@@ -1698,7 +1698,7 @@ The output is the top_k inputs and their indices.</p>
<h3>chunk_eval<a class="headerlink" href="#chunk-eval" title="Permalink to this headline"></a></h3> <h3>chunk_eval<a class="headerlink" href="#chunk-eval" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">chunk_eval</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>chunk_scheme</em>, <em>num_chunk_types</em>, <em>excluded_chunk_types=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">chunk_eval</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>chunk_scheme</em>, <em>num_chunk_types</em>, <em>excluded_chunk_types=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function computes and outputs the precision, recall and <dd><p>This function computes and outputs the precision, recall and
F1-score of chunk detection.</p> F1-score of chunk detection.</p>
</dd></dl> </dd></dl>
...@@ -1708,7 +1708,7 @@ F1-score of chunk detection.</p> ...@@ -1708,7 +1708,7 @@ F1-score of chunk detection.</p>
<h3>sequence_conv<a class="headerlink" href="#sequence-conv" title="Permalink to this headline"></a></h3> <h3>sequence_conv<a class="headerlink" href="#sequence-conv" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_conv</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size=3</em>, <em>filter_stride=1</em>, <em>padding=None</em>, <em>bias_attr=None</em>, <em>param_attr=None</em>, <em>act=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_conv</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size=3</em>, <em>filter_stride=1</em>, <em>padding=None</em>, <em>bias_attr=None</em>, <em>param_attr=None</em>, <em>act=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function creates the op for sequence_conv, using the inputs and <dd><p>This function creates the op for sequence_conv, using the inputs and
other convolutional configurations for the filters and stride as given other convolutional configurations for the filters and stride as given
in the input parameters to the function.</p> in the input parameters to the function.</p>
...@@ -1719,7 +1719,7 @@ in the input parameters to the function.</p> ...@@ -1719,7 +1719,7 @@ in the input parameters to the function.</p>
<h3>conv2d<a class="headerlink" href="#conv2d" title="Permalink to this headline"></a></h3> <h3>conv2d<a class="headerlink" href="#conv2d" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size</em>, <em>stride=None</em>, <em>padding=None</em>, <em>groups=None</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>use_cudnn=True</em>, <em>act=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">conv2d</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size</em>, <em>stride=None</em>, <em>padding=None</em>, <em>groups=None</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>use_cudnn=True</em>, <em>act=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Convolution2D Layer</strong></p> <dd><p><strong>Convolution2D Layer</strong></p>
<p>The convolution2D layer calculates the output based on the input, filter <p>The convolution2D layer calculates the output based on the input, filter
and strides, paddings, dilations, groups parameters. Input(Input) and and strides, paddings, dilations, groups parameters. Input(Input) and
...@@ -1816,7 +1816,7 @@ groups mismatch.</p> ...@@ -1816,7 +1816,7 @@ groups mismatch.</p>
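<p>A minimal sketch that chains this layer with <code>pool2d</code>, documented below (image size and filter counts are illustrative):</p>
<pre class="literal-block">
import paddle.fluid as fluid

# NCHW input: a batch of 3-channel 32x32 images
img = fluid.layers.data(name='img', shape=[3, 32, 32], dtype='float32')

conv = fluid.layers.conv2d(input=img, num_filters=64, filter_size=3, act='relu')
pool = fluid.layers.pool2d(input=conv, pool_size=2, pool_type='max', pool_stride=2)
</pre>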
<h3>sequence_pool<a class="headerlink" href="#sequence-pool" title="Permalink to this headline"></a></h3> <h3>sequence_pool<a class="headerlink" href="#sequence-pool" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_pool</code><span class="sig-paren">(</span><em>input</em>, <em>pool_type</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_pool</code><span class="sig-paren">(</span><em>input</em>, <em>pool_type</em><span class="sig-paren">)</span></dt>
<dd><p>This function adds the operator for sequence pooling. <dd><p>This function adds the operator for sequence pooling.
It pools features of all time-steps of each instance, and is applied It pools features of all time-steps of each instance, and is applied
on top of the input using pool_type mentioned in the parameters.</p> on top of the input using pool_type mentioned in the parameters.</p>
...@@ -1876,7 +1876,7 @@ It supports average, sum, sqrt and max.</li> ...@@ -1876,7 +1876,7 @@ It supports average, sum, sqrt and max.</li>
<h3>pool2d<a class="headerlink" href="#pool2d" title="Permalink to this headline"></a></h3> <h3>pool2d<a class="headerlink" href="#pool2d" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">pool2d</code><span class="sig-paren">(</span><em>input</em>, <em>pool_size</em>, <em>pool_type</em>, <em>pool_stride=None</em>, <em>pool_padding=None</em>, <em>global_pooling=False</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">pool2d</code><span class="sig-paren">(</span><em>input</em>, <em>pool_size</em>, <em>pool_type</em>, <em>pool_stride=None</em>, <em>pool_padding=None</em>, <em>global_pooling=False</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function adds the operator for pooling in 2 dimensions, using the <dd><p>This function adds the operator for pooling in 2 dimensions, using the
pooling configurations mentioned in input parameters.</p> pooling configurations mentioned in input parameters.</p>
</dd></dl> </dd></dl>
...@@ -1886,7 +1886,7 @@ pooling configurations mentioned in input parameters.</p> ...@@ -1886,7 +1886,7 @@ pooling configurations mentioned in input parameters.</p>
<h3>batch_norm<a class="headerlink" href="#batch-norm" title="Permalink to this headline"></a></h3> <h3>batch_norm<a class="headerlink" href="#batch-norm" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">batch_norm</code><span class="sig-paren">(</span><em>input</em>, <em>act=None</em>, <em>is_test=False</em>, <em>momentum=0.9</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>data_layout='NCHW'</em>, <em>name=None</em>, <em>moving_mean_name=None</em>, <em>moving_variance_name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">batch_norm</code><span class="sig-paren">(</span><em>input</em>, <em>act=None</em>, <em>is_test=False</em>, <em>momentum=0.9</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>data_layout='NCHW'</em>, <em>name=None</em>, <em>moving_mean_name=None</em>, <em>moving_variance_name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function helps create an operator to implement <dd><p>This function helps create an operator to implement
the BatchNorm layer using the configurations from the input parameters.</p> the BatchNorm layer using the configurations from the input parameters.</p>
</dd></dl> </dd></dl>
...@@ -1896,7 +1896,7 @@ the BatchNorm layer using the configurations from the input parameters.</p> ...@@ -1896,7 +1896,7 @@ the BatchNorm layer using the configurations from the input parameters.</p>
<h3>layer_norm<a class="headerlink" href="#layer-norm" title="Permalink to this headline"></a></h3> <h3>layer_norm<a class="headerlink" href="#layer-norm" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">layer_norm</code><span class="sig-paren">(</span><em>input</em>, <em>scale=True</em>, <em>shift=True</em>, <em>begin_norm_axis=1</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">layer_norm</code><span class="sig-paren">(</span><em>input</em>, <em>scale=True</em>, <em>shift=True</em>, <em>begin_norm_axis=1</em>, <em>epsilon=1e-05</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>act=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Layer Normalization</strong></p> <dd><p><strong>Layer Normalization</strong></p>
<p>Assume feature vectors exist on dimensions <p>Assume feature vectors exist on dimensions
<code class="xref py py-attr docutils literal"><span class="pre">begin_norm_axis</span> <span class="pre">...</span> <span class="pre">rank(input)</span></code> and calculate the moment statistics <code class="xref py py-attr docutils literal"><span class="pre">begin_norm_axis</span> <span class="pre">...</span> <span class="pre">rank(input)</span></code> and calculate the moment statistics
...@@ -1951,7 +1951,7 @@ bias <span class="math">\(b\)</span>.</li> ...@@ -1951,7 +1951,7 @@ bias <span class="math">\(b\)</span>.</li>
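<p>A minimal sketch (the shape is illustrative): with <code>begin_norm_axis=1</code> every dimension after the batch dimension is normalized, then the learned scale and shift are applied.</p>
<pre class="literal-block">
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[32, 32], dtype='float32')

out = fluid.layers.layer_norm(input=x, begin_norm_axis=1)
</pre>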
<h3>beam_search_decode<a class="headerlink" href="#beam-search-decode" title="Permalink to this headline"></a></h3> <h3>beam_search_decode<a class="headerlink" href="#beam-search-decode" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">beam_search_decode</code><span class="sig-paren">(</span><em>ids</em>, <em>scores</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">beam_search_decode</code><span class="sig-paren">(</span><em>ids</em>, <em>scores</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -1959,7 +1959,7 @@ bias <span class="math">\(b\)</span>.</li> ...@@ -1959,7 +1959,7 @@ bias <span class="math">\(b\)</span>.</li>
<h3>conv2d_transpose<a class="headerlink" href="#conv2d-transpose" title="Permalink to this headline"></a></h3> <h3>conv2d_transpose<a class="headerlink" href="#conv2d-transpose" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">conv2d_transpose</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>output_size=None</em>, <em>filter_size=None</em>, <em>padding=None</em>, <em>stride=None</em>, <em>dilation=None</em>, <em>param_attr=None</em>, <em>use_cudnn=True</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Convolution2D Transpose Layer</strong></p> <dd><p><strong>Convolution2D Transpose Layer</strong></p>
<p>The convolution2D transpose layer calculates the output based on the input, <p>The convolution2D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output) filter, and dilations, strides, paddings. Input(Input) and output(Output)
...@@ -2056,7 +2056,7 @@ groups mismatch.</p> ...@@ -2056,7 +2056,7 @@ groups mismatch.</p>
<h3>sequence_expand<a class="headerlink" href="#sequence-expand" title="Permalink to this headline"></a></h3> <h3>sequence_expand<a class="headerlink" href="#sequence-expand" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_expand</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_expand</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Sequence Expand Layer. This layer will expand the input variable <strong>x</strong> <dd><p>Sequence Expand Layer. This layer will expand the input variable <strong>x</strong>
according to the LoD information of <strong>y</strong>. The following examples according to the LoD information of <strong>y</strong>. The following examples
explain how sequence_expand works:</p> explain how sequence_expand works:</p>
...@@ -2129,7 +2129,7 @@ will be named automatically.</li> ...@@ -2129,7 +2129,7 @@ will be named automatically.</li>
<h3>lstm_unit<a class="headerlink" href="#lstm-unit" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">lstm_unit</code><span class="sig-paren">(</span><em>x_t</em>, <em>hidden_t_prev</em>, <em>cell_t_prev</em>, <em>forget_bias=0.0</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>LSTM unit layer. The equation of an LSTM step is:</p>
<blockquote>
<div><div class="math">
...@@ -2203,7 +2203,7 @@ and <strong>cell_t_prev</strong> not be the same or the 2nd dimensions of ...@@ -2203,7 +2203,7 @@ and <strong>cell_t_prev</strong> not be the same or the 2nd dimensions of
<h3>reduce_sum<a class="headerlink" href="#reduce-sum" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">reduce_sum</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the sum of tensor elements over the given dimension.</p>
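<p>A minimal usage sketch (illustrative only; <code>fluid.layers.data</code> and the shapes are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: reduce over all elements and over the last dimension.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4, 5], dtype='float32')
total = fluid.layers.reduce_sum(x)                        # sum of all elements
last = fluid.layers.reduce_sum(x, dim=-1)                 # sum along the last dimension
keep = fluid.layers.reduce_sum(x, dim=-1, keep_dim=True)  # keep the reduced dimension
</pre></div></div>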
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2250,7 +2250,7 @@ will be named automatically.</li> ...@@ -2250,7 +2250,7 @@ will be named automatically.</li>
<h3>reduce_mean<a class="headerlink" href="#reduce-mean" title="Permalink to this headline"></a></h3> <h3>reduce_mean<a class="headerlink" href="#reduce-mean" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_mean</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">reduce_mean</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the mean of tensor elements over the given dimension.</p> <dd><p>Computes the mean of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2297,7 +2297,7 @@ will be named automatically.</li> ...@@ -2297,7 +2297,7 @@ will be named automatically.</li>
<h3>reduce_max<a class="headerlink" href="#reduce-max" title="Permalink to this headline"></a></h3> <h3>reduce_max<a class="headerlink" href="#reduce-max" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_max</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">reduce_max</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the maximum of tensor elements over the given dimension.</p> <dd><p>Computes the maximum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2344,7 +2344,7 @@ will be named automatically.</li> ...@@ -2344,7 +2344,7 @@ will be named automatically.</li>
<h3>reduce_min<a class="headerlink" href="#reduce-min" title="Permalink to this headline"></a></h3> <h3>reduce_min<a class="headerlink" href="#reduce-min" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reduce_min</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">reduce_min</code><span class="sig-paren">(</span><em>input</em>, <em>dim=None</em>, <em>keep_dim=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes the minimum of tensor elements over the given dimension.</p> <dd><p>Computes the minimum of tensor elements over the given dimension.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2391,7 +2391,7 @@ will be named automatically.</li> ...@@ -2391,7 +2391,7 @@ will be named automatically.</li>
<h3>sequence_first_step<a class="headerlink" href="#sequence-first-step" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_first_step</code><span class="sig-paren">(</span><em>input</em><span class="sig-paren">)</span></dt>
<dd><p>This function gets the first step of each sequence.</p>
<div class="highlight-text"><div class="highlight"><pre><span></span>x is a 1-level LoDTensor:
x.lod = [[0, 2, 5, 7]]
...@@ -2427,7 +2427,7 @@ then output is a Tensor: ...@@ -2427,7 +2427,7 @@ then output is a Tensor:
<h3>sequence_last_step<a class="headerlink" href="#sequence-last-step" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_last_step</code><span class="sig-paren">(</span><em>input</em><span class="sig-paren">)</span></dt>
<dd><p>This function gets the last step of each sequence.</p>
<div class="highlight-text"><div class="highlight"><pre><span></span>x is a 1-level LoDTensor:
x.lod = [[0, 2, 5, 7]]
...@@ -2463,7 +2463,7 @@ then output is a Tensor: ...@@ -2463,7 +2463,7 @@ then output is a Tensor:
<h3>dropout<a class="headerlink" href="#dropout" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">dropout</code><span class="sig-paren">(</span><em>x</em>, <em>dropout_prob</em>, <em>is_test=False</em>, <em>seed=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes dropout.</p>
<p>Drops or keeps each element of <cite>x</cite> independently. Dropout is a regularization
technique for reducing overfitting by preventing neuron co-adaptation during
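<p>A minimal usage sketch (illustrative only; the input layer and shape are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: drop half of the activations during training.
import paddle.fluid as fluid

hidden = fluid.layers.data(name='hidden', shape=[128], dtype='float32')
dropped = fluid.layers.dropout(hidden, dropout_prob=0.5)
</pre></div></div>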
...@@ -2505,7 +2505,7 @@ units will be dropped. DO NOT use a fixed seed in training.</li> ...@@ -2505,7 +2505,7 @@ units will be dropped. DO NOT use a fixed seed in training.</li>
<h3>split<a class="headerlink" href="#split" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">split</code><span class="sig-paren">(</span><em>input</em>, <em>num_or_sections</em>, <em>dim=-1</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Split the input tensor into multiple sub-tensors.</p>
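<p>A minimal usage sketch (illustrative only; the input layer and shape are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: split the last dimension into three equal parts.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[9], dtype='float32')
x0, x1, x2 = fluid.layers.split(x, num_or_sections=3, dim=-1)
</pre></div></div>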
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
<col class="field-name" /> <col class="field-name" />
...@@ -2553,7 +2553,7 @@ will be named automatically.</li> ...@@ -2553,7 +2553,7 @@ will be named automatically.</li>
<h3>ctc_greedy_decoder<a class="headerlink" href="#ctc-greedy-decoder" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">ctc_greedy_decoder</code><span class="sig-paren">(</span><em>input</em>, <em>blank</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>This op decodes sequences with a greedy policy in the following steps:
1. Get the index of the max value for each row of the input, i.e.</p>
<blockquote>
...@@ -2625,7 +2625,7 @@ empty, the result LoDTensor will be [-1] with LoD [[0]] and dims [1, 1].</p> ...@@ -2625,7 +2625,7 @@ empty, the result LoDTensor will be [-1] with LoD [[0]] and dims [1, 1].</p>
<h3>edit_distance<a class="headerlink" href="#edit-distance" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">edit_distance</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>normalized=False</em>, <em>ignored_tokens=None</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>EditDistance operator computes the edit distances between a batch of
hypothesis strings and their references. Edit distance, also called
Levenshtein distance, measures how dissimilar two strings are by counting
...@@ -2678,7 +2678,7 @@ calculating edit distance.</li> ...@@ -2678,7 +2678,7 @@ calculating edit distance.</li>
<h3>l2_normalize<a class="headerlink" href="#l2-normalize" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">l2_normalize</code><span class="sig-paren">(</span><em>x</em>, <em>axis</em>, <em>epsilon=1e-12</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>L2 normalize Layer</strong></p>
<p>The l2 normalize layer normalizes <cite>x</cite> along dimension <cite>axis</cite> using an L2
norm. For a 1-D tensor (<cite>dim</cite> is fixed to 0), this layer computes</p>
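<p>A minimal usage sketch (illustrative only; the input layer and shape are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: L2-normalize feature vectors along the feature axis.
import paddle.fluid as fluid

feat = fluid.layers.data(name='feat', shape=[128], dtype='float32')
normed = fluid.layers.l2_normalize(x=feat, axis=1)
</pre></div></div>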
...@@ -2722,7 +2722,7 @@ will be named automatically.</li> ...@@ -2722,7 +2722,7 @@ will be named automatically.</li>
<h3>matmul<a class="headerlink" href="#matmul" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">matmul</code><span class="sig-paren">(</span><em>x</em>, <em>y</em>, <em>transpose_x=False</em>, <em>transpose_y=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Applies matrix multiplication to two tensors.</p>
<p>Currently the input tensors may have any rank, but when the rank of either
input is larger than 3, the two ranks must be equal.</p>
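<p>A minimal usage sketch (illustrative only; <code>fluid.layers.data</code>, its <code>append_batch_size</code> argument, and the shapes are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: multiply a [3, 4] matrix by a [4, 5] matrix.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[3, 4], dtype='float32', append_batch_size=False)
y = fluid.layers.data(name='y', shape=[4, 5], dtype='float32', append_batch_size=False)
out = fluid.layers.matmul(x, y)  # result has shape [3, 5]
</pre></div></div>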
...@@ -2800,7 +2800,7 @@ will be named automatically.</li> ...@@ -2800,7 +2800,7 @@ will be named automatically.</li>
<h3>warpctc<a class="headerlink" href="#warpctc" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">warpctc</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>blank=0</em>, <em>norm_by_times=False</em><span class="sig-paren">)</span></dt>
<dd><p>An operator integrating the open source Warp-CTC library
(<a class="reference external" href="https://github.com/baidu-research/warp-ctc">https://github.com/baidu-research/warp-ctc</a>)
to compute Connectionist Temporal Classification (CTC) loss.
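<p>A minimal usage sketch (illustrative only; the input layers, shapes, label dtype, and class count are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: CTC loss over unscaled per-step activations of a
# sequence model (27 symbols + 1 blank assumed).
import paddle.fluid as fluid

logits = fluid.layers.data(name='logits', shape=[28], dtype='float32', lod_level=1)
label = fluid.layers.data(name='label', shape=[1], dtype='int32', lod_level=1)
cost = fluid.layers.warpctc(input=logits, label=label, blank=0)
</pre></div></div>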
...@@ -2849,7 +2849,7 @@ which is a 2-D Tensor of the shape [batch_size, 1].</p> ...@@ -2849,7 +2849,7 @@ which is a 2-D Tensor of the shape [batch_size, 1].</p>
<h3>sequence_reshape<a class="headerlink" href="#sequence-reshape" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_reshape</code><span class="sig-paren">(</span><em>input</em>, <em>new_dim</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Sequence Reshape Layer</strong></p>
<p>This layer rearranges the input sequences. The new dimension is set by the
user. The length of each sequence is computed from the original length,
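<p>A minimal usage sketch (illustrative only; the input layer and dimensions are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: reshape a sequence with 4-dimensional steps into
# 8-dimensional steps (sequence lengths must make the totals divisible).
import paddle.fluid as fluid

seq = fluid.layers.data(name='seq', shape=[4], dtype='float32', lod_level=1)
reshaped = fluid.layers.sequence_reshape(input=seq, new_dim=8)
</pre></div></div>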
...@@ -2905,7 +2905,7 @@ with shape being [N, M] where M for dimension.</li> ...@@ -2905,7 +2905,7 @@ with shape being [N, M] where M for dimension.</li>
<h3>transpose<a class="headerlink" href="#transpose" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">transpose</code><span class="sig-paren">(</span><em>x</em>, <em>perm</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>transpose Layer</strong></p>
<p>Permute the dimensions of <cite>input</cite> according to <cite>perm</cite>.</p>
<p>The <cite>i</cite>-th dimension of the returned tensor will correspond to the
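<p>A minimal usage sketch (illustrative only; <code>fluid.layers.data</code>, its <code>append_batch_size</code> argument, and the permutation are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: swap the last two dimensions of a 3-D tensor.
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[2, 3, 4], dtype='float32', append_batch_size=False)
y = fluid.layers.transpose(x, perm=[0, 2, 1])  # shape becomes [2, 4, 3]
</pre></div></div>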
...@@ -2940,7 +2940,7 @@ perm[i]-th dimension of <cite>input</cite>.</p> ...@@ -2940,7 +2940,7 @@ perm[i]-th dimension of <cite>input</cite>.</p>
<h3>im2sequence<a class="headerlink" href="#im2sequence" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">im2sequence</code><span class="sig-paren">(</span><em>input</em>, <em>filter_size=1</em>, <em>stride=1</em>, <em>padding=0</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Extracts image patches from the input tensor to form a tensor of shape
{input.batch_size * output_height * output_width, filter_size_H *
filter_size_W * input.channels}, which is similar to im2col.
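<p>A minimal usage sketch (illustrative only; the input layer and patch size are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: turn 32x32 CHW images into a sequence of 3x3 patches.
import paddle.fluid as fluid

img = fluid.layers.data(name='img', shape=[3, 32, 32], dtype='float32')
patches = fluid.layers.im2sequence(input=img, filter_size=3, stride=1, padding=1)
</pre></div></div>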
...@@ -3045,7 +3045,7 @@ output.lod = [[0, 4, 8]] ...@@ -3045,7 +3045,7 @@ output.lod = [[0, 4, 8]]
<h3>nce<a class="headerlink" href="#nce" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">nce</code><span class="sig-paren">(</span><em>input</em>, <em>label</em>, <em>num_total_classes</em>, <em>sample_weight=None</em>, <em>param_attr=None</em>, <em>bias_attr=None</em>, <em>num_neg_samples=None</em><span class="sig-paren">)</span></dt>
<dd><p>Computes and returns the noise-contrastive estimation training loss.
See <a class="reference external" href="http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf">Noise-contrastive estimation: A new estimation principle for unnormalized statistical models</a>.
By default this operator uses a uniform distribution for sampling.</p>
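<p>A minimal usage sketch (illustrative only; the input layers, shapes, label dtype, and class count are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: NCE loss for a word-prediction model with a large
# vocabulary; 'features' stands in for whatever the model produces.
import paddle.fluid as fluid

features = fluid.layers.data(name='features', shape=[128], dtype='float32')
word = fluid.layers.data(name='word', shape=[1], dtype='int64')
loss = fluid.layers.nce(input=features, label=word,
                        num_total_classes=10000, num_neg_samples=10)
</pre></div></div>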
...@@ -3082,7 +3082,7 @@ Duplicable: False Optional: True</li> ...@@ -3082,7 +3082,7 @@ Duplicable: False Optional: True</li>
<h3>beam_search<a class="headerlink" href="#beam-search" title="Permalink to this headline"></a></h3> <h3>beam_search<a class="headerlink" href="#beam-search" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">beam_search</code><span class="sig-paren">(</span><em>pre_ids</em>, <em>ids</em>, <em>scores</em>, <em>beam_size</em>, <em>end_id</em>, <em>level=0</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">beam_search</code><span class="sig-paren">(</span><em>pre_ids</em>, <em>ids</em>, <em>scores</em>, <em>beam_size</em>, <em>end_id</em>, <em>level=0</em><span class="sig-paren">)</span></dt>
<dd><p>This function implements the beam search algorithm.</p> <dd><p>This function implements the beam search algorithm.</p>
</dd></dl> </dd></dl>
...@@ -3091,7 +3091,7 @@ Duplicable: False Optional: True</li> ...@@ -3091,7 +3091,7 @@ Duplicable: False Optional: True</li>
<h3>row_conv<a class="headerlink" href="#row-conv" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">row_conv</code><span class="sig-paren">(</span><em>input</em>, <em>future_context_size</em>, <em>param_attr=None</em>, <em>act=None</em><span class="sig-paren">)</span></dt>
<dd><p>Row Conv Operator. This layer will apply lookahead convolution to
<strong>input</strong>. The input variable should be a 2D LoDTensor with shape [T, D].
Parameters with shape [future_context_size + 1, D] will be created. The math
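<p>A minimal usage sketch (illustrative only; the input layer and context size are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: lookahead convolution over a [T, D] sequence with a
# future context of 2 steps.
import paddle.fluid as fluid

seq = fluid.layers.data(name='seq', shape=[16], dtype='float32', lod_level=1)
out = fluid.layers.row_conv(input=seq, future_context_size=2)
</pre></div></div>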
...@@ -3142,7 +3142,7 @@ name, initializer etc.</li> ...@@ -3142,7 +3142,7 @@ name, initializer etc.</li>
<h3>multiplex<a class="headerlink" href="#multiplex" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">multiplex</code><span class="sig-paren">(</span><em>inputs</em>, <em>index</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Multiplex Layer</strong></p>
<p>Referring to the given index variable, this layer selects rows from the
input variables to construct a multiplex variable. Assuming that there are
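<p>A minimal usage sketch (illustrative only; the input layers and shapes are assumptions):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Hypothetical example: pick each output row from one of two candidate inputs
# according to an integer index of shape [batch, 1].
import paddle.fluid as fluid

x1 = fluid.layers.data(name='x1', shape=[4], dtype='float32')
x2 = fluid.layers.data(name='x2', shape=[4], dtype='float32')
index = fluid.layers.data(name='index', shape=[1], dtype='int32')
out = fluid.layers.multiplex(inputs=[x1, x2], index=index)
</pre></div></div>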
...@@ -3196,7 +3196,7 @@ with shape [M, 1] where M is the batch size.</li> ...@@ -3196,7 +3196,7 @@ with shape [M, 1] where M is the batch size.</li>
<h3>mean<a class="headerlink" href="#mean" title="Permalink to this headline"></a></h3> <h3>mean<a class="headerlink" href="#mean" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">mean</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">mean</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Mean Operator.</p> <dd><p>Mean Operator.</p>
<p>Out is a scalar which is the mean of all elements in X.</p> <p>Out is a scalar which is the mean of all elements in X.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3217,7 +3217,7 @@ Duplicable: False Optional: False</td> ...@@ -3217,7 +3217,7 @@ Duplicable: False Optional: False</td>
<h3>mul<a class="headerlink" href="#mul" title="Permalink to this headline"></a></h3> <h3>mul<a class="headerlink" href="#mul" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">mul</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">mul</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Mul Operator.</p> <dd><p>Mul Operator.</p>
<p>This operator is used to perform matrix multiplication for input $X$ and $Y$.</p> <p>This operator is used to perform matrix multiplication for input $X$ and $Y$.</p>
<p>The equation is:</p> <p>The equation is:</p>
...@@ -3268,7 +3268,7 @@ flattened. See comments of <cite>x_num_col_dims</cite> for more details.</li> ...@@ -3268,7 +3268,7 @@ flattened. See comments of <cite>x_num_col_dims</cite> for more details.</li>
<h3>reshape<a class="headerlink" href="#reshape" title="Permalink to this headline"></a></h3> <h3>reshape<a class="headerlink" href="#reshape" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">reshape</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">reshape</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Reshape Operator.</p> <dd><p>Reshape Operator.</p>
<p>Reshape Input(X) into the shape specified by Attr(shape).</p> <p>Reshape Input(X) into the shape specified by Attr(shape).</p>
<p>An example: <p>An example:
...@@ -3301,7 +3301,7 @@ Duplicable: False Optional: False</li> ...@@ -3301,7 +3301,7 @@ Duplicable: False Optional: False</li>
<h3>scale<a class="headerlink" href="#scale" title="Permalink to this headline"></a></h3> <h3>scale<a class="headerlink" href="#scale" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">scale</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">scale</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Scale operator</p> <dd><p>Scale operator</p>
<p>$$Out = scale*X$$</p> <p>$$Out = scale*X$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3327,7 +3327,7 @@ Duplicable: False Optional: False</li> ...@@ -3327,7 +3327,7 @@ Duplicable: False Optional: False</li>
<h3>sigmoid_cross_entropy_with_logits<a class="headerlink" href="#sigmoid-cross-entropy-with-logits" title="Permalink to this headline"></a></h3> <h3>sigmoid_cross_entropy_with_logits<a class="headerlink" href="#sigmoid-cross-entropy-with-logits" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sigmoid_cross_entropy_with_logits</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">sigmoid_cross_entropy_with_logits</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>SigmoidCrossEntropyWithLogits Operator.</p> <dd><p>SigmoidCrossEntropyWithLogits Operator.</p>
<p>This measures the element-wise probability error in classification tasks <p>This measures the element-wise probability error in classification tasks
in which each class is independent. This can be thought of as predicting labels in which each class is independent. This can be thought of as predicting labels
...@@ -3370,7 +3370,7 @@ Duplicable: False Optional: False</li> ...@@ -3370,7 +3370,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_add<a class="headerlink" href="#elementwise-add" title="Permalink to this headline"></a></h3> <h3>elementwise_add<a class="headerlink" href="#elementwise-add" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_add</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_add</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Add Operator.</p> <dd><p>Limited Elementwise Add Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<p>$$Out = X + Y$$</p> <p>$$Out = X + Y$$</p>
...@@ -3420,7 +3420,7 @@ Duplicable: False Optional: False</li> ...@@ -3420,7 +3420,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_div<a class="headerlink" href="#elementwise-div" title="Permalink to this headline"></a></h3> <h3>elementwise_div<a class="headerlink" href="#elementwise-div" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_div</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_div</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Div Operator.</p> <dd><p>Limited Elementwise Div Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<p>$$Out = X / Y$$</p> <p>$$Out = X / Y$$</p>
...@@ -3470,7 +3470,7 @@ Duplicable: False Optional: False</li> ...@@ -3470,7 +3470,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_sub<a class="headerlink" href="#elementwise-sub" title="Permalink to this headline"></a></h3> <h3>elementwise_sub<a class="headerlink" href="#elementwise-sub" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_sub</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_sub</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Sub Operator.</p> <dd><p>Limited Elementwise Sub Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<p>$$Out = X - Y$$</p> <p>$$Out = X - Y$$</p>
...@@ -3520,7 +3520,7 @@ Duplicable: False Optional: False</li> ...@@ -3520,7 +3520,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_mul<a class="headerlink" href="#elementwise-mul" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_mul</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Mul Operator.</p>
<p>The equation is:</p>
<p>$$Out = X \odot Y$$</p>
...@@ -3570,7 +3570,7 @@ Duplicable: False Optional: False</li> ...@@ -3570,7 +3570,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_max<a class="headerlink" href="#elementwise-max" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_max</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Max Operator.</p>
<p>The equation is:</p>
<p>$$Out = \max(X, Y)$$</p>
...@@ -3620,7 +3620,7 @@ Duplicable: False Optional: False</li> ...@@ -3620,7 +3620,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_min<a class="headerlink" href="#elementwise-min" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_min</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Min Operator.</p>
<p>The equation is:</p>
<p>$$Out = \min(X, Y)$$</p>
...@@ -3670,7 +3670,7 @@ Duplicable: False Optional: False</li> ...@@ -3670,7 +3670,7 @@ Duplicable: False Optional: False</li>
<h3>elementwise_pow<a class="headerlink" href="#elementwise-pow" title="Permalink to this headline"></a></h3> <h3>elementwise_pow<a class="headerlink" href="#elementwise-pow" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elementwise_pow</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">elementwise_pow</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Limited Elementwise Pow Operator.</p> <dd><p>Limited Elementwise Pow Operator.</p>
<p>The equation is:</p> <p>The equation is:</p>
<p>$$Out = X ^ Y$$</p> <p>$$Out = X ^ Y$$</p>
...@@ -3720,7 +3720,7 @@ Duplicable: False Optional: False</li> ...@@ -3720,7 +3720,7 @@ Duplicable: False Optional: False</li>
<h3>clip<a class="headerlink" href="#clip" title="Permalink to this headline"></a></h3> <h3>clip<a class="headerlink" href="#clip" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">clip</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">clip</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Clip Operator.</p> <dd><p>Clip Operator.</p>
<p>The clip operator limits the value of given input within an interval. The <p>The clip operator limits the value of given input within an interval. The
interval is specified with arguments &#8216;min&#8217; and &#8216;max&#8217;:</p> interval is specified with arguments &#8216;min&#8217; and &#8216;max&#8217;:</p>
...@@ -3751,7 +3751,7 @@ Duplicable: False Optional: False</li> ...@@ -3751,7 +3751,7 @@ Duplicable: False Optional: False</li>
<h3>clip_by_norm<a class="headerlink" href="#clip-by-norm" title="Permalink to this headline"></a></h3> <h3>clip_by_norm<a class="headerlink" href="#clip-by-norm" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">clip_by_norm</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">clip_by_norm</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>ClipByNorm Operator.</p> <dd><p>ClipByNorm Operator.</p>
<p>This operator limits the L2 norm of the input $X$ within $max_norm$. <p>This operator limits the L2 norm of the input $X$ within $max_norm$.
If the L2 norm of $X$ is less than or equal to $max_norm$, $Out$ will be If the L2 norm of $X$ is less than or equal to $max_norm$, $Out$ will be
...@@ -3785,7 +3785,7 @@ Duplicable: False Optional: False</li> ...@@ -3785,7 +3785,7 @@ Duplicable: False Optional: False</li>
<h3>sequence_softmax<a class="headerlink" href="#sequence-softmax" title="Permalink to this headline"></a></h3> <h3>sequence_softmax<a class="headerlink" href="#sequence-softmax" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sequence_softmax</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">sequence_softmax</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Sequence Softmax Operator.</p> <dd><p>Sequence Softmax Operator.</p>
<p>SequenceSoftmaxOp computes the softmax activation among all time-steps for each <p>SequenceSoftmaxOp computes the softmax activation among all time-steps for each
sequence. The dimension of each time-step should be 1. Thus, the shape of sequence. The dimension of each time-step should be 1. Thus, the shape of
...@@ -3819,7 +3819,7 @@ Duplicable: False Optional: False</td> ...@@ -3819,7 +3819,7 @@ Duplicable: False Optional: False</td>
<h3>sigmoid<a class="headerlink" href="#sigmoid" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">sigmoid</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Sigmoid Activation Operator</p>
<p>$$out = \frac{1}{1 + e^{-x}}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3840,7 +3840,7 @@ Duplicable: False Optional: False</td> ...@@ -3840,7 +3840,7 @@ Duplicable: False Optional: False</td>
<h3>logsigmoid<a class="headerlink" href="#logsigmoid" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">logsigmoid</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Logsigmoid Activation Operator</p>
<p>$$out = \log \frac{1}{1 + e^{-x}}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3861,7 +3861,7 @@ Duplicable: False Optional: False</td> ...@@ -3861,7 +3861,7 @@ Duplicable: False Optional: False</td>
<h3>exp<a class="headerlink" href="#exp" title="Permalink to this headline"></a></h3> <h3>exp<a class="headerlink" href="#exp" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">exp</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">exp</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Exp Activation Operator.</p> <dd><p>Exp Activation Operator.</p>
<p>$out = e^x$</p> <p>$out = e^x$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3882,7 +3882,7 @@ Duplicable: False Optional: False</td> ...@@ -3882,7 +3882,7 @@ Duplicable: False Optional: False</td>
<h3>relu<a class="headerlink" href="#relu" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Relu Activation Operator.</p>
<p>$out = \max(x, 0)$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3903,7 +3903,7 @@ Duplicable: False Optional: False</td> ...@@ -3903,7 +3903,7 @@ Duplicable: False Optional: False</td>
<h3>tanh<a class="headerlink" href="#tanh" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">tanh</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Tanh Activation Operator.</p>
<p>$$out = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3924,7 +3924,7 @@ Duplicable: False Optional: False</td> ...@@ -3924,7 +3924,7 @@ Duplicable: False Optional: False</td>
<h3>tanh_shrink<a class="headerlink" href="#tanh-shrink" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">tanh_shrink</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>TanhShrink Activation Operator.</p>
<p>$$out = x - \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3945,7 +3945,7 @@ Duplicable: False Optional: False</td> ...@@ -3945,7 +3945,7 @@ Duplicable: False Optional: False</td>
<h3>softshrink<a class="headerlink" href="#softshrink" title="Permalink to this headline"></a></h3> <h3>softshrink<a class="headerlink" href="#softshrink" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">softshrink</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">softshrink</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Softshrink Activation Operator.</p> <dd><p>Softshrink Activation Operator.</p>
<p>$$ <p>$$
out = begin{cases}</p> out = begin{cases}</p>
...@@ -3978,7 +3978,7 @@ Duplicable: False Optional: False</li> ...@@ -3978,7 +3978,7 @@ Duplicable: False Optional: False</li>
<h3>sqrt<a class="headerlink" href="#sqrt" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">sqrt</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Sqrt Activation Operator.</p>
<p>$out = \sqrt{x}$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -3999,7 +3999,7 @@ Duplicable: False Optional: False</td> ...@@ -3999,7 +3999,7 @@ Duplicable: False Optional: False</td>
<h3>abs<a class="headerlink" href="#abs" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">abs</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Abs Activation Operator.</p>
<p>$out = |x|$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4020,7 +4020,7 @@ Duplicable: False Optional: False</td> ...@@ -4020,7 +4020,7 @@ Duplicable: False Optional: False</td>
<h3>ceil<a class="headerlink" href="#ceil" title="Permalink to this headline"></a></h3> <h3>ceil<a class="headerlink" href="#ceil" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">ceil</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">ceil</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Ceil Activation Operator.</p> <dd><p>Ceil Activation Operator.</p>
<p>$out = ceil(x)$</p> <p>$out = ceil(x)$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4041,7 +4041,7 @@ Duplicable: False Optional: False</td> ...@@ -4041,7 +4041,7 @@ Duplicable: False Optional: False</td>
<h3>floor<a class="headerlink" href="#floor" title="Permalink to this headline"></a></h3> <h3>floor<a class="headerlink" href="#floor" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">floor</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">floor</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Floor Activation Operator.</p> <dd><p>Floor Activation Operator.</p>
<p>$out = floor(x)$</p> <p>$out = floor(x)$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4062,7 +4062,7 @@ Duplicable: False Optional: False</td> ...@@ -4062,7 +4062,7 @@ Duplicable: False Optional: False</td>
<h3>round<a class="headerlink" href="#round" title="Permalink to this headline"></a></h3> <h3>round<a class="headerlink" href="#round" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">round</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">round</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Round Activation Operator.</p> <dd><p>Round Activation Operator.</p>
<p>$out = [x]$</p> <p>$out = [x]$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4083,7 +4083,7 @@ Duplicable: False Optional: False</td> ...@@ -4083,7 +4083,7 @@ Duplicable: False Optional: False</td>
<h3>reciprocal<a class="headerlink" href="#reciprocal" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">reciprocal</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Reciprocal Activation Operator.</p>
<p>$$out = \frac{1}{x}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4104,7 +4104,7 @@ Duplicable: False Optional: False</td> ...@@ -4104,7 +4104,7 @@ Duplicable: False Optional: False</td>
<h3>log<a class="headerlink" href="#log" title="Permalink to this headline"></a></h3>
<dl class="function">
<dt>
<code class="descclassname">paddle.fluid.layers.</code><code class="descname">log</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Log Activation Operator.</p>
<p>$out = \ln(x)$</p>
<p>Natural logarithm of x.</p>
...@@ -4126,7 +4126,7 @@ Duplicable: False Optional: False</td> ...@@ -4126,7 +4126,7 @@ Duplicable: False Optional: False</td>
<h3>square<a class="headerlink" href="#square" title="Permalink to this headline"></a></h3> <h3>square<a class="headerlink" href="#square" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">square</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">square</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Square Activation Operator.</p> <dd><p>Square Activation Operator.</p>
<p>$out = x^2$</p> <p>$out = x^2$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4147,7 +4147,7 @@ Duplicable: False Optional: False</td> ...@@ -4147,7 +4147,7 @@ Duplicable: False Optional: False</td>
<h3>softplus<a class="headerlink" href="#softplus" title="Permalink to this headline"></a></h3> <h3>softplus<a class="headerlink" href="#softplus" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">softplus</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">softplus</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Softplus Activation Operator.</p> <dd><p>Softplus Activation Operator.</p>
<p>$out = \ln(1 + e^{x})$</p> <p>$out = \ln(1 + e^{x})$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4168,7 +4168,7 @@ Duplicable: False Optional: False</td> ...@@ -4168,7 +4168,7 @@ Duplicable: False Optional: False</td>
<h3>softsign<a class="headerlink" href="#softsign" title="Permalink to this headline"></a></h3> <h3>softsign<a class="headerlink" href="#softsign" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">softsign</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">softsign</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Softsign Activation Operator.</p> <dd><p>Softsign Activation Operator.</p>
<p>$$out = \frac{x}{1 + |x|}$$</p> <p>$$out = \frac{x}{1 + |x|}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4189,7 +4189,7 @@ Duplicable: False Optional: False</td> ...@@ -4189,7 +4189,7 @@ Duplicable: False Optional: False</td>
<h3>brelu<a class="headerlink" href="#brelu" title="Permalink to this headline"></a></h3> <h3>brelu<a class="headerlink" href="#brelu" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">brelu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">brelu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>BRelu Activation Operator.</p> <dd><p>BRelu Activation Operator.</p>
<p>$out = \min(\max(x, t_{min}), t_{max})$</p> <p>$out = \min(\max(x, t_{min}), t_{max})$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4216,7 +4216,7 @@ Duplicable: False Optional: False</li> ...@@ -4216,7 +4216,7 @@ Duplicable: False Optional: False</li>
<h3>leaky_relu<a class="headerlink" href="#leaky-relu" title="Permalink to this headline"></a></h3> <h3>leaky_relu<a class="headerlink" href="#leaky-relu" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">leaky_relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">leaky_relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>LeakyRelu Activation Operator.</p> <dd><p>LeakyRelu Activation Operator.</p>
<p>$out = \max(x, \alpha * x)$</p> <p>$out = \max(x, \alpha * x)$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4242,7 +4242,7 @@ Duplicable: False Optional: False</li> ...@@ -4242,7 +4242,7 @@ Duplicable: False Optional: False</li>
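<p>A minimal usage sketch for these operator-generated activation layers (variable names are illustrative; it assumes the generated layer forwards <code>x</code> to the operator input and <code>alpha</code> to its attribute):</p>
<pre>
import paddle.fluid as fluid

# Build a small piece of the default program: a data Variable followed by
# the activation. Auto-generated activation layers take keyword arguments.
x = fluid.layers.data(name='x', shape=[32], dtype='float32')
y = fluid.layers.leaky_relu(x=x, alpha=0.02)  # out = max(x, alpha * x)
</pre>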
<h3>soft_relu<a class="headerlink" href="#soft-relu" title="Permalink to this headline"></a></h3> <h3>soft_relu<a class="headerlink" href="#soft-relu" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">soft_relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">soft_relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>SoftRelu Activation Operator.</p> <dd><p>SoftRelu Activation Operator.</p>
<p>$out = \ln(1 + e^{\max(\min(x, threshold), -threshold)})$</p> <p>$out = \ln(1 + e^{\max(\min(x, threshold), -threshold)})$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4268,7 +4268,7 @@ Duplicable: False Optional: False</li> ...@@ -4268,7 +4268,7 @@ Duplicable: False Optional: False</li>
<h3>elu<a class="headerlink" href="#elu" title="Permalink to this headline"></a></h3> <h3>elu<a class="headerlink" href="#elu" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">elu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">elu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>ELU Activation Operator.</p> <dd><p>ELU Activation Operator.</p>
<p>Applies the following element-wise computation on the input according to <p>Applies the following element-wise computation on the input according to
<a class="reference external" href="https://arxiv.org/abs/1511.07289">https://arxiv.org/abs/1511.07289</a>.</p> <a class="reference external" href="https://arxiv.org/abs/1511.07289">https://arxiv.org/abs/1511.07289</a>.</p>
...@@ -4296,7 +4296,7 @@ Duplicable: False Optional: False</li> ...@@ -4296,7 +4296,7 @@ Duplicable: False Optional: False</li>
<h3>relu6<a class="headerlink" href="#relu6" title="Permalink to this headline"></a></h3> <h3>relu6<a class="headerlink" href="#relu6" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">relu6</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">relu6</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Relu6 Activation Operator.</p> <dd><p>Relu6 Activation Operator.</p>
<p>$out = \min(\max(0, x), 6)$</p> <p>$out = \min(\max(0, x), 6)$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4322,7 +4322,7 @@ Duplicable: False Optional: False</li> ...@@ -4322,7 +4322,7 @@ Duplicable: False Optional: False</li>
<h3>pow<a class="headerlink" href="#pow" title="Permalink to this headline"></a></h3> <h3>pow<a class="headerlink" href="#pow" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">pow</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">pow</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Pow Activation Operator.</p> <dd><p>Pow Activation Operator.</p>
<p>$out = x^{factor}$</p> <p>$out = x^{factor}$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4348,7 +4348,7 @@ Duplicable: False Optional: False</li> ...@@ -4348,7 +4348,7 @@ Duplicable: False Optional: False</li>
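<p>An illustrative call of the pow layer, assuming the exponent is passed through the <code>factor</code> attribute documented above:</p>
<pre>
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[8], dtype='float32')
# Element-wise x^2.
y = fluid.layers.pow(x=x, factor=2.0)
</pre>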
<h3>stanh<a class="headerlink" href="#stanh" title="Permalink to this headline"></a></h3> <h3>stanh<a class="headerlink" href="#stanh" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">stanh</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">stanh</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>STanh Activation Operator.</p> <dd><p>STanh Activation Operator.</p>
<p>$$out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$</p> <p>$$out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4375,7 +4375,7 @@ Duplicable: False Optional: False</li> ...@@ -4375,7 +4375,7 @@ Duplicable: False Optional: False</li>
<h3>hard_shrink<a class="headerlink" href="#hard-shrink" title="Permalink to this headline"></a></h3> <h3>hard_shrink<a class="headerlink" href="#hard-shrink" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">hard_shrink</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">hard_shrink</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>HardShrink Activation Operator.</p> <dd><p>HardShrink Activation Operator.</p>
<p>$$out = \begin{cases} x, &amp; \text{if } x &gt; threshold \text{ or } x &lt; -threshold \\ 0, &amp; \text{otherwise} \end{cases}$$</p> <p>$$out = \begin{cases} x, &amp; \text{if } x &gt; threshold \text{ or } x &lt; -threshold \\ 0, &amp; \text{otherwise} \end{cases}$$</p>
...@@ -4408,7 +4408,7 @@ Duplicable: False Optional: False</li> ...@@ -4408,7 +4408,7 @@ Duplicable: False Optional: False</li>
<h3>thresholded_relu<a class="headerlink" href="#thresholded-relu" title="Permalink to this headline"></a></h3> <h3>thresholded_relu<a class="headerlink" href="#thresholded-relu" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">thresholded_relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">thresholded_relu</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>ThresholdedRelu Activation Operator.</p> <dd><p>ThresholdedRelu Activation Operator.</p>
<p>$$out = \begin{cases} x, &amp; \text{if } x &gt; threshold \\ 0, &amp; \text{otherwise} \end{cases}$$</p> <p>$$out = \begin{cases} x, &amp; \text{if } x &gt; threshold \\ 0, &amp; \text{otherwise} \end{cases}$$</p>
...@@ -4440,7 +4440,7 @@ Duplicable: False Optional: False</li> ...@@ -4440,7 +4440,7 @@ Duplicable: False Optional: False</li>
<h3>hard_sigmoid<a class="headerlink" href="#hard-sigmoid" title="Permalink to this headline"></a></h3> <h3>hard_sigmoid<a class="headerlink" href="#hard-sigmoid" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">hard_sigmoid</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">hard_sigmoid</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>HardSigmoid Activation Operator.</p> <dd><p>HardSigmoid Activation Operator.</p>
<p>Segment-wise linear approximation of sigmoid (<a class="reference external" href="https://arxiv.org/abs/1603.00391">https://arxiv.org/abs/1603.00391</a>), <p>Segment-wise linear approximation of sigmoid (<a class="reference external" href="https://arxiv.org/abs/1603.00391">https://arxiv.org/abs/1603.00391</a>),
which is much faster than sigmoid.</p> which is much faster than sigmoid.</p>
...@@ -4472,7 +4472,7 @@ Duplicable: False Optional: False</li> ...@@ -4472,7 +4472,7 @@ Duplicable: False Optional: False</li>
<h3>swish<a class="headerlink" href="#swish" title="Permalink to this headline"></a></h3> <h3>swish<a class="headerlink" href="#swish" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">swish</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">swish</code><span class="sig-paren">(</span><em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Swish Activation Operator.</p> <dd><p>Swish Activation Operator.</p>
<p>$$out = \frac{x}{1 + e^{-\beta x}}$$</p> <p>$$out = \frac{x}{1 + e^{-\beta x}}$$</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4501,7 +4501,7 @@ Duplicable: False Optional: False</li> ...@@ -4501,7 +4501,7 @@ Duplicable: False Optional: False</li>
<h3>create_tensor<a class="headerlink" href="#create-tensor" title="Permalink to this headline"></a></h3> <h3>create_tensor<a class="headerlink" href="#create-tensor" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">create_tensor</code><span class="sig-paren">(</span><em>dtype</em>, <em>name=None</em>, <em>persistable=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">create_tensor</code><span class="sig-paren">(</span><em>dtype</em>, <em>name=None</em>, <em>persistable=False</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
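<p>A short sketch of how create_tensor is typically used: it allocates an empty Variable that a later layer writes into (names are illustrative):</p>
<pre>
import paddle.fluid as fluid

# An empty float32 Variable, e.g. to be used as the `out` argument of
# fluid.layers.sums or the `output` argument of fluid.layers.assign.
accumulator = fluid.layers.create_tensor(dtype='float32')
</pre>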
...@@ -4509,7 +4509,7 @@ Duplicable: False Optional: False</li> ...@@ -4509,7 +4509,7 @@ Duplicable: False Optional: False</li>
<h3>create_parameter<a class="headerlink" href="#create-parameter" title="Permalink to this headline"></a></h3> <h3>create_parameter<a class="headerlink" href="#create-parameter" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">create_parameter</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>name=None</em>, <em>attr=None</em>, <em>is_bias=False</em>, <em>default_initializer=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">create_parameter</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>name=None</em>, <em>attr=None</em>, <em>is_bias=False</em>, <em>default_initializer=None</em><span class="sig-paren">)</span></dt>
<dd><p>Create a parameter <dd><p>Create a parameter
:param shape: shape of the parameter :param shape: shape of the parameter
:type shape: list[int] :type shape: list[int]
...@@ -4541,7 +4541,7 @@ Xavier() will be used.</div></blockquote> ...@@ -4541,7 +4541,7 @@ Xavier() will be used.</div></blockquote>
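<p>A minimal create_parameter sketch (the shape is illustrative); with no default_initializer given, Xavier() is used as stated above:</p>
<pre>
import paddle.fluid as fluid

# A trainable weight matrix of shape [784, 10].
w = fluid.layers.create_parameter(shape=[784, 10], dtype='float32')
</pre>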
<h3>create_global_var<a class="headerlink" href="#create-global-var" title="Permalink to this headline"></a></h3> <h3>create_global_var<a class="headerlink" href="#create-global-var" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">create_global_var</code><span class="sig-paren">(</span><em>shape</em>, <em>value</em>, <em>dtype</em>, <em>persistable=False</em>, <em>force_cpu=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">create_global_var</code><span class="sig-paren">(</span><em>shape</em>, <em>value</em>, <em>dtype</em>, <em>persistable=False</em>, <em>force_cpu=False</em>, <em>name=None</em><span class="sig-paren">)</span></dt>
<dd><p>Create a global variable, such as a global_step counter. <dd><p>Create a global variable, such as a global_step counter.
:param shape: shape of the variable :param shape: shape of the variable
:type shape: list[int] :type shape: list[int]
...@@ -4570,7 +4570,7 @@ Xavier() will be used.</div></blockquote> ...@@ -4570,7 +4570,7 @@ Xavier() will be used.</div></blockquote>
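<p>An illustrative create_global_var call following the signature above, e.g. a persistable global step counter kept on CPU:</p>
<pre>
import paddle.fluid as fluid

global_step = fluid.layers.create_global_var(
    shape=[1], value=0, dtype='int64',
    persistable=True, force_cpu=True, name='global_step')
</pre>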
<h3>cast<a class="headerlink" href="#cast" title="Permalink to this headline"></a></h3> <h3>cast<a class="headerlink" href="#cast" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">cast</code><span class="sig-paren">(</span><em>x</em>, <em>dtype</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">cast</code><span class="sig-paren">(</span><em>x</em>, <em>dtype</em><span class="sig-paren">)</span></dt>
<dd><p>This function takes the input Variable (of its original dtype) <dd><p>This function takes the input Variable (of its original dtype)
and casts it to the requested <em>dtype</em>, returning the result as the output.</p> and casts it to the requested <em>dtype</em>, returning the result as the output.</p>
</dd></dl> </dd></dl>
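<p>A minimal cast example (the input shape is illustrative):</p>
<pre>
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[3], dtype='float32')
# Same shape, element type changed to int64.
y = fluid.layers.cast(x=x, dtype='int64')
</pre>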
...@@ -4580,7 +4580,7 @@ and casts it to the output_dtype as the output.</p> ...@@ -4580,7 +4580,7 @@ and casts it to the output_dtype as the output.</p>
<h3>concat<a class="headerlink" href="#concat" title="Permalink to this headline"></a></h3> <h3>concat<a class="headerlink" href="#concat" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">concat</code><span class="sig-paren">(</span><em>input</em>, <em>axis=0</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">concat</code><span class="sig-paren">(</span><em>input</em>, <em>axis=0</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Concat</strong></p> <dd><p><strong>Concat</strong></p>
<p>This function concatenates the input Variables along the given axis <p>This function concatenates the input Variables along the given axis
and returns the result as the output.</p> and returns the result as the output.</p>
...@@ -4610,7 +4610,7 @@ and returns that as the output.</p> ...@@ -4610,7 +4610,7 @@ and returns that as the output.</p>
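<p>A small concat sketch; note that fluid.layers.data prepends a batch dimension, so the tensors below are concatenated along their last axis:</p>
<pre>
import paddle.fluid as fluid

a = fluid.layers.data(name='a', shape=[3, 4], dtype='float32')
b = fluid.layers.data(name='b', shape=[3, 5], dtype='float32')
# Result shape: [batch, 3, 9].
out = fluid.layers.concat(input=[a, b], axis=2)
</pre>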
<h3>sums<a class="headerlink" href="#sums" title="Permalink to this headline"></a></h3> <h3>sums<a class="headerlink" href="#sums" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">sums</code><span class="sig-paren">(</span><em>input</em>, <em>out=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">sums</code><span class="sig-paren">(</span><em>input</em>, <em>out=None</em><span class="sig-paren">)</span></dt>
<dd><p>This function performs the sum operation on the input and returns the <dd><p>This function performs the sum operation on the input and returns the
result as the output.</p> result as the output.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4638,7 +4638,7 @@ that need to be summed up.</td> ...@@ -4638,7 +4638,7 @@ that need to be summed up.</td>
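<p>An illustrative sums call over two variables of the same shape:</p>
<pre>
import paddle.fluid as fluid

a = fluid.layers.data(name='a', shape=[10], dtype='float32')
b = fluid.layers.data(name='b', shape=[10], dtype='float32')
# Element-wise sum of all listed inputs.
total = fluid.layers.sums(input=[a, b])
</pre>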
<h3>assign<a class="headerlink" href="#assign" title="Permalink to this headline"></a></h3> <h3>assign<a class="headerlink" href="#assign" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">assign</code><span class="sig-paren">(</span><em>input</em>, <em>output</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">assign</code><span class="sig-paren">(</span><em>input</em>, <em>output</em><span class="sig-paren">)</span></dt>
<dd><p><strong>Assign</strong></p> <dd><p><strong>Assign</strong></p>
<p>This function copies the <em>input</em> Variable to the <em>output</em> Variable.</p> <p>This function copies the <em>input</em> Variable to the <em>output</em> Variable.</p>
<table class="docutils field-list" frame="void" rules="none"> <table class="docutils field-list" frame="void" rules="none">
...@@ -4667,7 +4667,7 @@ that need to be summed up.</td> ...@@ -4667,7 +4667,7 @@ that need to be summed up.</td>
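<p>A minimal assign sketch, pairing it with create_tensor as the destination (names are illustrative):</p>
<pre>
import paddle.fluid as fluid

src = fluid.layers.data(name='src', shape=[4], dtype='float32')
dst = fluid.layers.create_tensor(dtype='float32')
# Copy src into the pre-created dst Variable.
fluid.layers.assign(input=src, output=dst)
</pre>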
<h3>fill_constant_batch_size_like<a class="headerlink" href="#fill-constant-batch-size-like" title="Permalink to this headline"></a></h3> <h3>fill_constant_batch_size_like<a class="headerlink" href="#fill-constant-batch-size-like" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fill_constant_batch_size_like</code><span class="sig-paren">(</span><em>input</em>, <em>shape</em>, <em>dtype</em>, <em>value</em>, <em>input_dim_idx=0</em>, <em>output_dim_idx=0</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">fill_constant_batch_size_like</code><span class="sig-paren">(</span><em>input</em>, <em>shape</em>, <em>dtype</em>, <em>value</em>, <em>input_dim_idx=0</em>, <em>output_dim_idx=0</em><span class="sig-paren">)</span></dt>
<dd><p><strong>fill_constant_batch_size_like</strong></p> <dd><p><strong>fill_constant_batch_size_like</strong></p>
<p>This function creates a tensor of specified <em>shape</em>, <em>dtype</em> and batch size, <p>This function creates a tensor of specified <em>shape</em>, <em>dtype</em> and batch size,
and initializes it with the constant supplied in <em>value</em>. The batch size is and initializes it with the constant supplied in <em>value</em>. The batch size is
...@@ -4707,7 +4707,7 @@ obtained from the <cite>input</cite> tensor.</p> ...@@ -4707,7 +4707,7 @@ obtained from the <cite>input</cite> tensor.</p>
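<p>An illustrative call: dimension output_dim_idx (0 by default) of <code>shape</code> is replaced by the runtime batch size of <code>input</code>:</p>
<pre>
import paddle.fluid as fluid

like = fluid.layers.data(name='like', shape=[5], dtype='float32')
# Result shape at runtime: [batch_size_of_like, 7], filled with 0.0.
out = fluid.layers.fill_constant_batch_size_like(
    input=like, shape=[1, 7], dtype='float32', value=0.0)
</pre>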
<h3>fill_constant<a class="headerlink" href="#fill-constant" title="Permalink to this headline"></a></h3> <h3>fill_constant<a class="headerlink" href="#fill-constant" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">fill_constant</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>value</em>, <em>force_cpu=False</em>, <em>out=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">fill_constant</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>value</em>, <em>force_cpu=False</em>, <em>out=None</em><span class="sig-paren">)</span></dt>
<dd><p><strong>fill_constant</strong></p> <dd><p><strong>fill_constant</strong></p>
<p>This function creates a tensor with specified <cite>shape</cite> and <cite>dtype</cite>, and <p>This function creates a tensor with specified <cite>shape</cite> and <cite>dtype</cite>, and
initializes it with a constant specified by <cite>value</cite>.</p> initializes it with a constant specified by <cite>value</cite>.</p>
...@@ -4744,7 +4744,7 @@ initializes it with a constant specifed by <cite>value</cite>.</p> ...@@ -4744,7 +4744,7 @@ initializes it with a constant specifed by <cite>value</cite>.</p>
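<p>A minimal fill_constant example (shape and value are illustrative):</p>
<pre>
import paddle.fluid as fluid

# A 2x3 float32 tensor in which every element is 1.5.
c = fluid.layers.fill_constant(shape=[2, 3], dtype='float32', value=1.5)
</pre>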
<h3>ones<a class="headerlink" href="#ones" title="Permalink to this headline"></a></h3> <h3>ones<a class="headerlink" href="#ones" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">ones</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>force_cpu=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">ones</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>force_cpu=False</em><span class="sig-paren">)</span></dt>
<dd><p><strong>ones</strong></p> <dd><p><strong>ones</strong></p>
<p>This function creates a tensor of specified <em>shape</em> and <p>This function creates a tensor of specified <em>shape</em> and
<em>dtype</em>, and initializes it with 1.</p> <em>dtype</em>, and initializes it with 1.</p>
...@@ -4778,7 +4778,7 @@ initializes it with a constant specifed by <cite>value</cite>.</p> ...@@ -4778,7 +4778,7 @@ initializes it with a constant specifed by <cite>value</cite>.</p>
<h3>zeros<a class="headerlink" href="#zeros" title="Permalink to this headline"></a></h3> <h3>zeros<a class="headerlink" href="#zeros" title="Permalink to this headline"></a></h3>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.layers.</code><code class="descname">zeros</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>force_cpu=False</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.layers.</code><code class="descname">zeros</code><span class="sig-paren">(</span><em>shape</em>, <em>dtype</em>, <em>force_cpu=False</em><span class="sig-paren">)</span></dt>
<dd><p><strong>zeros</strong></p> <dd><p><strong>zeros</strong></p>
<p>This function creates a tensor of specified <em>shape</em> and <p>This function creates a tensor of specified <em>shape</em> and
<em>dtype</em>, and initializes it with 0.</p> <em>dtype</em>, and initializes it with 0.</p>
......
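<p>ones and zeros are thin wrappers around fill_constant with values 1 and 0; an illustrative sketch:</p>
<pre>
import paddle.fluid as fluid

o = fluid.layers.ones(shape=[2, 2], dtype='float32')
z = fluid.layers.zeros(shape=[2, 2], dtype='float32')
</pre>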
...@@ -179,7 +179,7 @@ ...@@ -179,7 +179,7 @@
<h2>simple_img_conv_pool<a class="headerlink" href="#simple-img-conv-pool" title="Permalink to this headline"></a></h2> <h2>simple_img_conv_pool<a class="headerlink" href="#simple-img-conv-pool" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">simple_img_conv_pool</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size</em>, <em>pool_size</em>, <em>pool_stride</em>, <em>act</em>, <em>param_attr=None</em>, <em>pool_type='max'</em>, <em>use_cudnn=True</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.nets.</code><code class="descname">simple_img_conv_pool</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size</em>, <em>pool_size</em>, <em>pool_stride</em>, <em>act</em>, <em>param_attr=None</em>, <em>pool_type='max'</em>, <em>use_cudnn=True</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
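<p>A minimal sketch of simple_img_conv_pool on an MNIST-sized input (the hyper-parameters are illustrative, not prescribed by the docs):</p>
<pre>
import paddle.fluid as fluid

img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
# One convolution followed by a max-pool.
conv_pool = fluid.nets.simple_img_conv_pool(
    input=img, num_filters=20, filter_size=5,
    pool_size=2, pool_stride=2, act='relu')
</pre>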
...@@ -187,7 +187,7 @@ ...@@ -187,7 +187,7 @@
<h2>sequence_conv_pool<a class="headerlink" href="#sequence-conv-pool" title="Permalink to this headline"></a></h2> <h2>sequence_conv_pool<a class="headerlink" href="#sequence-conv-pool" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">sequence_conv_pool</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size</em>, <em>param_attr=None</em>, <em>act='sigmoid'</em>, <em>pool_type='max'</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.nets.</code><code class="descname">sequence_conv_pool</code><span class="sig-paren">(</span><em>input</em>, <em>num_filters</em>, <em>filter_size</em>, <em>param_attr=None</em>, <em>act='sigmoid'</em>, <em>pool_type='max'</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
...@@ -195,7 +195,7 @@ ...@@ -195,7 +195,7 @@
<h2>glu<a class="headerlink" href="#glu" title="Permalink to this headline"></a></h2> <h2>glu<a class="headerlink" href="#glu" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">glu</code><span class="sig-paren">(</span><em>input</em>, <em>dim=-1</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.nets.</code><code class="descname">glu</code><span class="sig-paren">(</span><em>input</em>, <em>dim=-1</em><span class="sig-paren">)</span></dt>
<dd><p>The gated linear unit, composed of a split, a sigmoid activation and an elementwise <dd><p>The gated linear unit, composed of a split, a sigmoid activation and an elementwise
multiplication. Specifically, split the input into two equal-sized parts multiplication. Specifically, split the input into two equal-sized parts
<span class="math">\(a\)</span> and <span class="math">\(b\)</span> along the given dimension and then compute as <span class="math">\(a\)</span> and <span class="math">\(b\)</span> along the given dimension and then compute as
...@@ -236,7 +236,7 @@ dimension to split along is <span class="math">\(rank(input) + dim\)</span>.</li ...@@ -236,7 +236,7 @@ dimension to split along is <span class="math">\(rank(input) + dim\)</span>.</li
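<p>An illustrative glu call; the size of the split dimension must be even, and the output's last dimension is half of the input's:</p>
<pre>
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[6], dtype='float32')
# Split the last dimension into a and b, then compute a * sigmoid(b).
out = fluid.nets.glu(input=x, dim=-1)
</pre>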
<h2>scaled_dot_product_attention<a class="headerlink" href="#scaled-dot-product-attention" title="Permalink to this headline"></a></h2> <h2>scaled_dot_product_attention<a class="headerlink" href="#scaled-dot-product-attention" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.nets.</code><code class="descname">scaled_dot_product_attention</code><span class="sig-paren">(</span><em>queries</em>, <em>keys</em>, <em>values</em>, <em>num_heads=1</em>, <em>dropout_rate=0.0</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.nets.</code><code class="descname">scaled_dot_product_attention</code><span class="sig-paren">(</span><em>queries</em>, <em>keys</em>, <em>values</em>, <em>num_heads=1</em>, <em>dropout_rate=0.0</em><span class="sig-paren">)</span></dt>
<dd><p>The dot-product attention.</p> <dd><p>The dot-product attention.</p>
<p>The attention mechanism can be seen as mapping a query and a set of key-value <p>The attention mechanism can be seen as mapping a query and a set of key-value
pairs to an output. The output is computed as a weighted sum of the values, pairs to an output. The output is computed as a weighted sum of the values,
......
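<p>A minimal sketch of scaled_dot_product_attention with 3-D [batch, sequence, dim] inputs (shapes are illustrative):</p>
<pre>
import paddle.fluid as fluid

q = fluid.layers.data(name='q', shape=[8, 16], dtype='float32')
k = fluid.layers.data(name='k', shape=[8, 16], dtype='float32')
v = fluid.layers.data(name='v', shape=[8, 16], dtype='float32')
# Single-head attention over the key/value pairs.
ctx = fluid.nets.scaled_dot_product_attention(queries=q, keys=k, values=v)
</pre>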
...@@ -179,7 +179,7 @@ ...@@ -179,7 +179,7 @@
<h2>SGD<a class="headerlink" href="#sgd" title="Permalink to this headline"></a></h2> <h2>SGD<a class="headerlink" href="#sgd" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">SGD</code></dt> <code class="descclassname">paddle.fluid.optimizer.</code><code class="descname">SGD</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">SGDOptimizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">SGDOptimizer</span></code></p>
</dd></dl> </dd></dl>
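<p>These aliases are used exactly like the optimizer classes they point to; a hedged sketch (the loss Variable is assumed to have been built elsewhere):</p>
<pre>
import paddle.fluid as fluid

sgd = fluid.optimizer.SGD(learning_rate=0.01)
# After a loss Variable `avg_cost` has been defined in the program:
#     sgd.minimize(avg_cost)
</pre>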
...@@ -188,7 +188,7 @@ ...@@ -188,7 +188,7 @@
<h2>Momentum<a class="headerlink" href="#momentum" title="Permalink to this headline"></a></h2> <h2>Momentum<a class="headerlink" href="#momentum" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Momentum</code></dt> <code class="descclassname">paddle.fluid.optimizer.</code><code class="descname">Momentum</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">MomentumOptimizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">MomentumOptimizer</span></code></p>
</dd></dl> </dd></dl>
...@@ -197,7 +197,7 @@ ...@@ -197,7 +197,7 @@
<h2>Adagrad<a class="headerlink" href="#adagrad" title="Permalink to this headline"></a></h2> <h2>Adagrad<a class="headerlink" href="#adagrad" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Adagrad</code></dt> <code class="descclassname">paddle.fluid.optimizer.</code><code class="descname">Adagrad</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">AdagradOptimizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">AdagradOptimizer</span></code></p>
</dd></dl> </dd></dl>
...@@ -206,7 +206,7 @@ ...@@ -206,7 +206,7 @@
<h2>Adam<a class="headerlink" href="#adam" title="Permalink to this headline"></a></h2> <h2>Adam<a class="headerlink" href="#adam" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Adam</code></dt> <code class="descclassname">paddle.fluid.optimizer.</code><code class="descname">Adam</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">AdamOptimizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">AdamOptimizer</span></code></p>
</dd></dl> </dd></dl>
...@@ -215,7 +215,7 @@ ...@@ -215,7 +215,7 @@
<h2>Adamax<a class="headerlink" href="#adamax" title="Permalink to this headline"></a></h2> <h2>Adamax<a class="headerlink" href="#adamax" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">Adamax</code></dt> <code class="descclassname">paddle.fluid.optimizer.</code><code class="descname">Adamax</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">AdamaxOptimizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">AdamaxOptimizer</span></code></p>
</dd></dl> </dd></dl>
...@@ -224,7 +224,7 @@ ...@@ -224,7 +224,7 @@
<h2>DecayedAdagrad<a class="headerlink" href="#decayedadagrad" title="Permalink to this headline"></a></h2> <h2>DecayedAdagrad<a class="headerlink" href="#decayedadagrad" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.optimizer.</code><code class="descname">DecayedAdagrad</code></dt> <code class="descclassname">paddle.fluid.optimizer.</code><code class="descname">DecayedAdagrad</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">DecayedAdagradOptimizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">DecayedAdagradOptimizer</span></code></p>
</dd></dl> </dd></dl>
......
...@@ -179,7 +179,7 @@ ...@@ -179,7 +179,7 @@
<h2>ParamAttr<a class="headerlink" href="#paramattr" title="Permalink to this headline"></a></h2> <h2>ParamAttr<a class="headerlink" href="#paramattr" title="Permalink to this headline"></a></h2>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.param_attr.</code><code class="descname">ParamAttr</code><span class="sig-paren">(</span><em>name=None</em>, <em>initializer=None</em>, <em>learning_rate=1.0</em>, <em>regularizer=None</em>, <em>trainable=True</em>, <em>gradient_clip=None</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.param_attr.</code><code class="descname">ParamAttr</code><span class="sig-paren">(</span><em>name=None</em>, <em>initializer=None</em>, <em>learning_rate=1.0</em>, <em>regularizer=None</em>, <em>trainable=True</em>, <em>gradient_clip=None</em><span class="sig-paren">)</span></dt>
<dd></dd></dl> <dd></dd></dl>
</div> </div>
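<p>An illustrative ParamAttr attached to a fully connected layer; the name, initializer and regularization coefficient are our own choices, not defaults:</p>
<pre>
import paddle.fluid as fluid

w_attr = fluid.ParamAttr(
    name='fc_w',
    initializer=fluid.initializer.Xavier(),
    regularizer=fluid.regularizer.L2Decay(1e-4),
    learning_rate=1.0)
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.fc(input=x, size=1, param_attr=w_attr)
</pre>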
...@@ -187,7 +187,7 @@ ...@@ -187,7 +187,7 @@
<h2>WeightNormParamAttr<a class="headerlink" href="#weightnormparamattr" title="Permalink to this headline"></a></h2> <h2>WeightNormParamAttr<a class="headerlink" href="#weightnormparamattr" title="Permalink to this headline"></a></h2>
<dl class="class"> <dl class="class">
<dt> <dt>
<em class="property">class </em><code class="descclassname">paddle.v2.fluid.param_attr.</code><code class="descname">WeightNormParamAttr</code><span class="sig-paren">(</span><em>dim=None</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt> <em class="property">class </em><code class="descclassname">paddle.fluid.param_attr.</code><code class="descname">WeightNormParamAttr</code><span class="sig-paren">(</span><em>dim=None</em>, <em>**kwargs</em><span class="sig-paren">)</span></dt>
<dd><p>Used for weight normalization. Any field in ParamAttr can also be set here. <dd><p>Used for weight normalization. Any field in ParamAttr can also be set here.
In addition, an extra field, dim, can be set to indicate the dimension that is In addition, an extra field, dim, can be set to indicate the dimension that is
excluded from the normalization.</p> excluded from the normalization.</p>
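<p>A hedged sketch of WeightNormParamAttr; it assumes the class is exported at the package top level like ParamAttr (otherwise import it from paddle.fluid.param_attr):</p>
<pre>
import paddle.fluid as fluid

# Weight-normalize the fc weight; with dim=None the norm is taken over
# all dimensions of the parameter.
wn_attr = fluid.WeightNormParamAttr(dim=None, name='fc_wn_w')
x = fluid.layers.data(name='x', shape=[32], dtype='float32')
y = fluid.layers.fc(input=x, size=10, param_attr=wn_attr)
</pre>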
......
...@@ -179,7 +179,7 @@ ...@@ -179,7 +179,7 @@
<h2>cuda_profiler<a class="headerlink" href="#cuda-profiler" title="Permalink to this headline"></a></h2> <h2>cuda_profiler<a class="headerlink" href="#cuda-profiler" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.profiler.</code><code class="descname">cuda_profiler</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwds</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.profiler.</code><code class="descname">cuda_profiler</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwds</em><span class="sig-paren">)</span></dt>
<dd><p>The CUDA profiler. <dd><p>The CUDA profiler.
This function is used to profile a CUDA program through the CUDA runtime application This function is used to profile a CUDA program through the CUDA runtime application
programming interface. The profiling result will be written into programming interface. The profiling result will be written into
...@@ -211,7 +211,7 @@ to &#8220;Compute Command Line Profiler User Guide&#8221;.</li> ...@@ -211,7 +211,7 @@ to &#8220;Compute Command Line Profiler User Guide&#8221;.</li>
<h2>reset_profiler<a class="headerlink" href="#reset-profiler" title="Permalink to this headline"></a></h2> <h2>reset_profiler<a class="headerlink" href="#reset-profiler" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.profiler.</code><code class="descname">reset_profiler</code><span class="sig-paren">(</span><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.profiler.</code><code class="descname">reset_profiler</code><span class="sig-paren">(</span><span class="sig-paren">)</span></dt>
<dd><p>The profiler clear interface. <dd><p>The profiler clear interface.
reset_profiler clears all previously recorded timing data.</p> reset_profiler clears all previously recorded timing data.</p>
</dd></dl> </dd></dl>
...@@ -221,7 +221,7 @@ reset_profiler will clear the previous time record.</p> ...@@ -221,7 +221,7 @@ reset_profiler will clear the previous time record.</p>
<h2>profiler<a class="headerlink" href="#id1" title="Permalink to this headline"></a></h2> <h2>profiler<a class="headerlink" href="#id1" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.profiler.</code><code class="descname">profiler</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwds</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.profiler.</code><code class="descname">profiler</code><span class="sig-paren">(</span><em>*args</em>, <em>**kwds</em><span class="sig-paren">)</span></dt>
<dd><p>The profiler interface. <dd><p>The profiler interface.
Unlike cuda_profiler, this profiler can be used to profile both CPU Unlike cuda_profiler, this profiler can be used to profile both CPU
and GPU programs. By default, it records the CPU and GPU operator kernels, and GPU programs. By default, it records the CPU and GPU operator kernels,
......
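<p>A minimal profiling sketch using the context-manager form described above; 'CPU' and the 'total' sort key are illustrative choices:</p>
<pre>
import paddle.fluid.profiler as profiler

with profiler.profiler('CPU', 'total') as prof:
    pass  # run the training / inference iterations to be timed here

# reset_profiler() can be called between runs to discard earlier records.
</pre>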
...@@ -179,7 +179,7 @@ ...@@ -179,7 +179,7 @@
<h2>append_regularization_ops<a class="headerlink" href="#append-regularization-ops" title="Permalink to this headline"></a></h2> <h2>append_regularization_ops<a class="headerlink" href="#append-regularization-ops" title="Permalink to this headline"></a></h2>
<dl class="function"> <dl class="function">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.regularizer.</code><code class="descname">append_regularization_ops</code><span class="sig-paren">(</span><em>parameters_and_grads</em>, <em>regularization=None</em><span class="sig-paren">)</span></dt> <code class="descclassname">paddle.fluid.regularizer.</code><code class="descname">append_regularization_ops</code><span class="sig-paren">(</span><em>parameters_and_grads</em>, <em>regularization=None</em><span class="sig-paren">)</span></dt>
<dd><p>Create and add backward regularization Operators</p> <dd><p>Create and add backward regularization Operators</p>
<p>Creates and adds backward regularization operators in the BlockDesc. <p>Creates and adds backward regularization operators in the BlockDesc.
This will add gradients of the regularizer function to the gradients This will add gradients of the regularizer function to the gradients
...@@ -212,7 +212,7 @@ set. It will be applied with regularizer.</li> ...@@ -212,7 +212,7 @@ set. It will be applied with regularizer.</li>
<h2>L1Decay<a class="headerlink" href="#l1decay" title="Permalink to this headline"></a></h2> <h2>L1Decay<a class="headerlink" href="#l1decay" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.regularizer.</code><code class="descname">L1Decay</code></dt> <code class="descclassname">paddle.fluid.regularizer.</code><code class="descname">L1Decay</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">L1DecayRegularizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">L1DecayRegularizer</span></code></p>
</dd></dl> </dd></dl>
...@@ -221,7 +221,7 @@ set. It will be applied with regularizer.</li> ...@@ -221,7 +221,7 @@ set. It will be applied with regularizer.</li>
<h2>L2Decay<a class="headerlink" href="#l2decay" title="Permalink to this headline"></a></h2> <h2>L2Decay<a class="headerlink" href="#l2decay" title="Permalink to this headline"></a></h2>
<dl class="attribute"> <dl class="attribute">
<dt> <dt>
<code class="descclassname">paddle.v2.fluid.regularizer.</code><code class="descname">L2Decay</code></dt> <code class="descclassname">paddle.fluid.regularizer.</code><code class="descname">L2Decay</code></dt>
<dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">L2DecayRegularizer</span></code></p> <dd><p>alias of <code class="xref py py-class docutils literal"><span class="pre">L2DecayRegularizer</span></code></p>
</dd></dl> </dd></dl>
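<p>A hedged sketch of applying weight decay: the regularization object is normally handed to an optimizer (or to a ParamAttr), which then invokes append_regularization_ops internally; the coefficient is illustrative:</p>
<pre>
import paddle.fluid as fluid

l2 = fluid.regularizer.L2Decay(1e-4)
# Assumes the optimizer accepts a `regularization` argument, as in fluid.
sgd = fluid.optimizer.SGD(learning_rate=0.01, regularization=l2)
</pre>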
......