Unverified commit fe2cf39f authored by: W Wilber, committed by: GitHub

[2.0] Update py_func English doc. (#28646)

Parent: 16a80814
...@@ -13503,9 +13503,9 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
    the output of ``func``, whose type can be either Tensor or numpy array.
    The input of the backward function ``backward_func`` is ``x``, ``out`` and
    the gradient of ``out``. If ``out`` has no gradient, the relevant input of
    ``backward_func`` is None. If ``x`` does not have a gradient, the user should
    return None in ``backward_func``.
    The data type and shape of ``out`` should also be set correctly before this
    API is called, and the data type and shape of the gradient of ``out`` and
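The forward/backward contract described above can be sketched in plain numpy, outside Paddle (function and variable names here are illustrative, not part of the API): the backward function receives the forward inputs, the forward outputs, and the gradient of the outputs, and returns the gradient of each input, using None for any input that needs no gradient.

```python
import numpy as np

def forward(x, scale):
    # out = tanh(scale * x); 'scale' is an input that needs no gradient
    return np.tanh(scale * x)

def backward(x, scale, out, dout):
    # d(loss)/dx = dout * scale * (1 - out**2), since tanh'(z) = 1 - tanh(z)**2
    dx = dout * scale * (1.0 - np.square(out))
    # 'scale' has no gradient, so return None in its slot
    return dx, None

x = np.array([0.0, 1.0])
out = forward(x, 2.0)
dx, dscale = backward(x, 2.0, out, np.ones_like(out))
```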
...@@ -13520,27 +13520,26 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
    function and the forward input ``x``. In ``func``, it is suggested that
    Tensors be actively converted into numpy arrays, so that Python and numpy
    APIs can be used freely. Otherwise, some numpy operations may not be compatible.
    x (Tensor|tuple(Tensor)|list[Tensor]): The input of the forward function ``func``.
        It can be Tensor|tuple(Tensor)|list[Tensor]. In addition, multiple Tensors
        should be passed in the form of tuple(Tensor) or list[Tensor].
    out (T|tuple(T)|list[T]): The output of the forward function ``func``, where
        T can be either Tensor or numpy array. Since Paddle cannot automatically
        infer the shape and type of ``out``, you must create ``out`` in advance.
    backward_func (callable, optional): The backward function of the registered OP.
        Its default value is None, which means there is no reverse calculation. If
        it is not None, ``backward_func`` is called to calculate the gradient of
        ``x`` when the network is at backward runtime.
    skip_vars_in_backward_input (Tensor, optional): It is used to limit the input
        list of ``backward_func``, and it can be Tensor|tuple(Tensor)|list[Tensor].
        It must belong to either ``x`` or ``out``. The default value is None, which
        means that no tensors need to be removed from ``x`` and ``out``. If it is
        not None, these tensors will not be the input of ``backward_func``. This
        parameter is only useful when ``backward_func`` is not None.
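The filtering that ``skip_vars_in_backward_input`` performs can be sketched in plain Python (a hypothetical helper, not Paddle's actual implementation): the candidate inputs of ``backward_func`` are the forward inputs, the forward outputs, and the output gradients, minus any entries listed as skipped.

```python
def backward_inputs(x_vars, out_vars, out_grads, skip_vars=None):
    # Assemble backward_func's input list: forward inputs, forward outputs
    # and output gradients, with any skipped entries removed.
    # (Hypothetical helper; illustrative names only.)
    skip = set(skip_vars or [])
    candidates = list(x_vars) + list(out_vars) + list(out_grads)
    return [v for v in candidates if v not in skip]

# Mirrors skip_vars_in_backward_input=hidden in example 1:
inputs = backward_inputs(['hidden'], ['new_hidden'], ['new_hidden@GRAD'],
                         skip_vars=['hidden'])
```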
    Returns:
        Tensor|tuple(Tensor)|list[Tensor]: The output ``out`` of the forward function ``func``.

    Examples:
        .. code-block:: python
...@@ -13548,6 +13547,7 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
            # example 1:
            import paddle
            import six
            import numpy as np

            paddle.enable_static()
...@@ -13578,16 +13578,31 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
                dtype=hidden.dtype, shape=hidden.shape)

            # User-defined forward and backward
            hidden = paddle.static.py_func(func=tanh, x=hidden,
                out=new_hidden, backward_func=tanh_grad,
                skip_vars_in_backward_input=hidden)

            # User-defined debug functions that print out the input Tensor
            paddle.static.py_func(func=debug_func, x=hidden, out=None)

            prediction = paddle.static.nn.fc(hidden, size=10, activation='softmax')
            ce_loss = paddle.nn.loss.CrossEntropyLoss()
            return ce_loss(prediction, label)
x = paddle.static.data(name='x', shape=[1,4], dtype='float32')
y = paddle.static.data(name='y', shape=[1,10], dtype='int64')
res = simple_net(x, y)
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
input1 = np.random.random(size=[1,4]).astype('float32')
input2 = np.random.randint(1, 10, size=[1,10], dtype='int64')
out = exe.run(paddle.static.default_main_program(),
feed={'x':input1, 'y':input2},
fetch_list=[res.name])
print(out)
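The analytic gradient used by ``tanh_grad`` in example 1 can be sanity-checked numerically in plain numpy, outside Paddle: ``dy * (1 - y**2)`` should agree with a central finite-difference estimate of ``d(tanh)/dx``.

```python
import numpy as np

def tanh_grad(y, dy):
    # analytic gradient: tanh'(x) = 1 - tanh(x)**2, expressed via the output y
    return dy * (1.0 - np.square(y))

x = np.linspace(-2.0, 2.0, 5)
y = np.tanh(x)
analytic = tanh_grad(y, np.ones_like(y))

# central finite-difference estimate of d(tanh)/dx
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2.0 * eps)
```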
.. code-block:: python
            # example 2:
            # This example shows how to turn Tensor into numpy array and
...@@ -13629,7 +13644,7 @@ def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
            output = create_tmp_var('output', 'int32', [3,1])

            # Multiple Tensors should be passed in the form of tuple(Tensor) or list[Tensor]
            paddle.static.py_func(func=element_wise_add, x=[x,y], out=output)

            exe = paddle.static.Executor(paddle.CPUPlace())
            exe.run(start_program)
...