Unverified commit 0a051297, authored by 超级码牛, committed by GitHub

Fix the first round of evaluation (#47256)

* fix paddle.get_default_dtype 

The return values shown in the Chinese and English docs were inconsistent

* fix paddle.matmul doc evaluation #4407

Corrected the example outputs of the function

* fix paddle.std doc evaluation #4370

Added a code example with unbiased=False; did not add numpy, to avoid confusion.

* fix paddle.load doc evaluation #4455

Only split the code example into 5 blocks

* try

* try

* try

* Update io.py

* Update io.py

* Update creation.py

* Update creation.py

* [Docs]add name description

* [Docs]fix broadcasting issue
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Parent ae14bad1
...@@ -669,6 +669,7 @@ def save(obj, path, protocol=4, **configs):
Examples:
.. code-block:: python
:name: code-example-1
# example 1: dynamic graph
import paddle
...@@ -690,7 +691,11 @@ def save(obj, path, protocol=4, **configs):
# save weight of emb
paddle.save(emb.weight, "emb.weight.pdtensor")
.. code-block:: python
:name: code-example-2
# example 2: Save multiple state_dict at the same time
import paddle
from paddle import nn
from paddle.optimizer import Adam
...@@ -700,6 +705,8 @@ def save(obj, path, protocol=4, **configs):
path = 'example/model.pdparams'
paddle.save(obj, path)
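For reference, here is a hedged sketch of the setup elided by the hunk gap above, assuming a toy nn.Linear layer; it only illustrates how several state_dicts can be bundled into one dict before the paddle.save call shown in example 2.
.. code-block:: python
import paddle
from paddle import nn
from paddle.optimizer import Adam
# assumed toy model and optimizer; the real example in the file may differ
layer = nn.Linear(3, 4)
adam = Adam(learning_rate=0.001, parameters=layer.parameters())
# bundle several state_dicts (and plain Python objects) into one dict
obj = {'model': layer.state_dict(), 'opt': adam.state_dict(), 'epoch': 100}
paddle.save(obj, 'example/model.pdparams')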
.. code-block:: python
:name: code-example-3
# example 3: static graph
import paddle
...@@ -728,6 +735,9 @@ def save(obj, path, protocol=4, **configs):
path_state_dict = 'temp/model.pdparams'
paddle.save(prog.state_dict("param"), path_tensor)
.. code-block:: python
:name: code-example-4
# example 4: save program
import paddle
...@@ -740,6 +750,8 @@ def save(obj, path, protocol=4, **configs):
path = "example/main_program.pdmodel"
paddle.save(main_program, path)
.. code-block:: python
:name: code-example-5
# example 5: save object to memory
from io import BytesIO
...@@ -918,6 +930,7 @@ def load(path, **configs):
Examples:
.. code-block:: python
:name: code-example-1
# example 1: dynamic graph
import paddle
...@@ -946,8 +959,11 @@ def load(path, **configs):
# load weight of emb
load_weight = paddle.load("emb.weight.pdtensor")
.. code-block:: python
:name: code-example-2
# example 2: Load multiple state_dict at the same time
import paddle
from paddle import nn
from paddle.optimizer import Adam
...@@ -958,6 +974,8 @@ def load(path, **configs):
paddle.save(obj, path)
obj_load = paddle.load(path)
.. code-block:: python
:name: code-example-3
# example 3: static graph
import paddle
...@@ -988,6 +1006,8 @@ def load(path, **configs):
paddle.save(prog.state_dict("param"), path_tensor)
load_state_dict = paddle.load(path_tensor)
.. code-block:: python
:name: code-example-4
# example 4: load program
import paddle
...@@ -1003,6 +1023,8 @@ def load(path, **configs):
load_main = paddle.load(path)
print(load_main)
.. code-block:: python
:name: code-example-5
# example 5: save object to memory
from io import BytesIO
......
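Since example 5 is truncated by the diff, here is a minimal hedged sketch of the in-memory round trip it refers to, assuming paddle.save and paddle.load accept a BytesIO object in place of a path.
.. code-block:: python
import paddle
from io import BytesIO
tensor = paddle.randn([2, 3], dtype='float32')
byio = BytesIO()
paddle.save(tensor, byio)   # serialize into the in-memory buffer instead of a file
byio.seek(0)                # rewind the buffer before reading it back
loaded = paddle.load(byio)  # deserialize from the same buffer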
...@@ -1174,7 +1174,7 @@ def triu(x, diagonal=0, name=None):
def meshgrid(*args, **kwargs):
"""
Takes a list of N tensors as input :attr:`*args`, each of which is a 1-dimensional vector, and creates N-dimensional grids.
Args:
*args(Tensor|list of Tensor) : tensors (tuple(list) of tensor): the shapes of input k tensors are (N1,),
......
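To make the grid construction above concrete, here is a small hedged sketch (shapes chosen arbitrarily) of passing two 1-D tensors to paddle.meshgrid and checking the shapes of the returned grids.
.. code-block:: python
import paddle
x = paddle.arange(3, dtype='int64')     # shape (3,)
y = paddle.arange(4, dtype='int64')     # shape (4,)
grid_x, grid_y = paddle.meshgrid(x, y)  # two tensors, each covering the full 3 x 4 grid
print(grid_x.shape)  # [3, 4]
print(grid_y.shape)  # [3, 4]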
...@@ -183,9 +183,9 @@ def matmul(x, y, transpose_x=False, transpose_y=False, name=None):
Args:
x (Tensor): The input tensor which is a Tensor.
y (Tensor): The input tensor which is a Tensor.
transpose_x (bool, optional): Whether to transpose :math:`x` before multiplication.
transpose_y (bool, optional): Whether to transpose :math:`y` before multiplication.
name (str, optional): A name for this layer (optional). If set None, the layer
will be named automatically.
Returns:
...@@ -202,35 +202,35 @@ def matmul(x, y, transpose_x=False, transpose_y=False, name=None):
y = paddle.rand([10])
z = paddle.matmul(x, y)
print(z.shape)
# (1,)
# matrix * vector
x = paddle.rand([10, 5])
y = paddle.rand([5])
z = paddle.matmul(x, y)
print(z.shape)
# (10,)
# batched matrix * broadcasted vector
x = paddle.rand([10, 5, 2])
y = paddle.rand([2])
z = paddle.matmul(x, y)
print(z.shape)
# (10, 5)
# batched matrix * batched matrix
x = paddle.rand([10, 5, 2])
y = paddle.rand([10, 2, 5])
z = paddle.matmul(x, y)
print(z.shape)
# (10, 5, 5)
# batched matrix * broadcasted matrix
x = paddle.rand([10, 1, 5, 2])
y = paddle.rand([1, 3, 2, 5])
z = paddle.matmul(x, y)
print(z.shape)
# (10, 3, 5, 5)
"""
if in_dygraph_mode():
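The Args above document transpose_x and transpose_y, but the examples do not exercise them, so here is a short hedged sketch (shapes assumed) of transposing the right operand before a batched product.
.. code-block:: python
import paddle
x = paddle.rand([10, 5, 2])
y = paddle.rand([10, 5, 2])
# transpose_y=True swaps the last two dims of y ([10, 5, 2] -> [10, 2, 5]) before multiplying
z = paddle.matmul(x, y, transpose_y=True)
print(z.shape)  # expected: [10, 5, 5]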
...@@ -639,9 +639,9 @@ def norm(x, p='fro', axis=None, keepdim=False, name=None):
def dist(x, y, p=2, name=None):
r"""
Returns the p-norm of (x - y). It is not a norm in a strict sense, only as a measure
of distance. The shapes of x and y must be broadcastable. The definition is as follows, for
details, please refer to the `Introduction to Tensor <../../guides/beginner/tensor_en.html#chapter5-broadcasting-of-tensor>`_:
- Each input has at least one dimension.
- Match the two input dimensions from back to front, the dimension sizes must either be equal, one of them is 1, or one of them does not exist.
...@@ -695,6 +695,8 @@ def dist(x, y, p=2, name=None):
x (Tensor): 1-D to 6-D Tensor, its data type is float32 or float64.
y (Tensor): 1-D to 6-D Tensor, its data type is float32 or float64.
p (float, optional): The norm to be computed, its data type is float32 or float64. Default: 2.
name (str, optional): The default value is `None`. Normally there is no need for
user to set this property. For more information, please refer to :ref:`api_guide_Name`.
Returns:
Tensor: Tensor that is the p-norm of (x - y).
......
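As a concrete illustration of the broadcasting rule described above, here is a brief hedged sketch (values chosen for illustration): x of shape [3, 1] and y of shape [3] broadcast to [3, 3] before the element-wise difference, and dist returns the 2-norm of that difference.
.. code-block:: python
import paddle
x = paddle.to_tensor([[1.0], [2.0], [3.0]])  # shape [3, 1]
y = paddle.to_tensor([0.0, 1.0, 2.0])        # shape [3]
out = paddle.dist(x, y, p=2)                 # Euclidean norm of the broadcasted (x - y)
print(out)  # a scalar Tensor (square root of the sum of squared differences)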
...@@ -384,7 +384,7 @@ def nonzero(x, as_tuple=False):
Args:
x (Tensor): The input tensor variable.
as_tuple (bool, optional): Return type, Tensor or tuple of Tensor.
Returns:
Tensor. The data type is int64.
......
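A small hedged sketch (input assumed) contrasting the two return types documented above: the default call returns a single int64 Tensor of coordinates, while as_tuple=True returns one index Tensor per input dimension.
.. code-block:: python
import paddle
x = paddle.to_tensor([[1.0, 0.0], [0.0, 2.0]])
coords = paddle.nonzero(x)                  # int64 Tensor with one row of coordinates per non-zero element
per_dim = paddle.nonzero(x, as_tuple=True)  # one index Tensor per dimension of x
print(coords.shape)   # [2, 2] -- two non-zero elements, two coordinates each
print(len(per_dim))   # 2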
...@@ -206,8 +206,11 @@ def std(x, axis=None, unbiased=True, keepdim=False, name=None):
x = paddle.to_tensor([[1.0, 2.0, 3.0], [1.0, 4.0, 5.0]])
out1 = paddle.std(x)
# [1.63299316]
out2 = paddle.std(x, unbiased=False)
# [1.49071205]
out3 = paddle.std(x, axis=1)
# [1. 2.081666]
"""
if not paddle.in_dynamic_mode():
check_variable_and_dtype(x, 'x', ['float32', 'float64'], 'std')
......
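The keepdim argument in the signature is not exercised by the examples, so here is a hedged sketch (same input tensor assumed) showing how it preserves the reduced axis in the result's shape.
.. code-block:: python
import paddle
x = paddle.to_tensor([[1.0, 2.0, 3.0], [1.0, 4.0, 5.0]])
out = paddle.std(x, axis=1, keepdim=True)
print(out.shape)  # [2, 1] -- the reduced axis is kept with size 1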