Unverified commit 6439e91d authored by Zhibao Li, committed by GitHub

Fix English docs of paddle.static.load_from_file and other APIs (#49042)

* fix api docs format problems 121-131

* fix English docs #49042

* resolve conflict; test=document_fix
Co-authored-by: Ligoml <limengliu@tiaozhan.com>
Parent 4b803a4a
@@ -24,7 +24,6 @@ __all__ = []
@static_only
def data(name, shape, dtype=None, lod_level=0):
    """
    This function creates a variable on the global block. The global variable
    can be accessed by all the following operators in the graph. The variable
@@ -36,15 +35,14 @@ def data(name, shape, dtype=None, lod_level=0):
        name (str): The name/alias of the variable, see :ref:`api_guide_Name`
            for more details.
        shape (list|tuple): List|Tuple of integers declaring the shape. You can
            set None or -1 at a dimension to indicate that the dimension can be of any
            size. For example, it is useful to set a changeable batch size as None or -1.
        dtype (np.dtype|str, optional): The type of the data. Supported
            dtype: bool, float16, float32, float64, int8, int16, int32, int64,
            uint8. Default: None. When `dtype` is not set, the dtype is taken
            from the global dtype given by `paddle.get_default_dtype()`.
        lod_level (int, optional): The LoD level of the LoDTensor. Usually users
            don't have to set this value. Default: 0.

    Returns:
        Variable: The global variable that gives access to the data.
...
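The None/-1 convention in ``shape`` above can be illustrated with a small, hypothetical helper (for illustration only; not a Paddle API) that checks whether a concrete shape fits a declared one:

```python
def shape_matches(declared, actual):
    """Return True if `actual` fits `declared`, where None or -1 in the
    declared shape means that dimension may be of any size.
    (Hypothetical helper for illustration; not part of Paddle.)"""
    if len(declared) != len(actual):
        return False
    return all(d in (None, -1) or d == a for d, a in zip(declared, actual))

# A changeable batch size declared as None matches any concrete batch size.
print(shape_matches([None, 784], [32, 784]))   # True
print(shape_matches([-1, 784], [64, 784]))     # True
print(shape_matches([None, 784], [32, 100]))   # False
```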
@@ -128,14 +128,13 @@ def _clone_var_in_block(block, var):
def normalize_program(program, feed_vars, fetch_vars):
    """
    Normalize/Optimize a program according to feed_vars and fetch_vars.

    Args:
        program (Program): Specify a program you want to optimize.
        feed_vars (Tensor | list[Tensor]): Variables needed by inference.
        fetch_vars (Tensor | list[Tensor]): Variables returned by inference.

    Returns:
        Program: Normalized/Optimized program.
@@ -233,6 +232,7 @@ def normalize_program(program, feed_vars, fetch_vars):
def is_persistable(var):
    """
    Check whether the given variable is persistable.

    Args:
@@ -264,15 +264,15 @@ def is_persistable(var):
@static_only
def serialize_program(feed_vars, fetch_vars, **kwargs):
    """
    Serialize the default main program according to feed_vars and fetch_vars.

    Args:
        feed_vars (Tensor | list[Tensor]): Tensors needed by inference.
        fetch_vars (Tensor | list[Tensor]): Tensors returned by inference.
        kwargs: Supported keys include ``program``. Note that kwargs is mainly used for backward compatibility.

            - program (Program): specify a program if you don't want to use the default main program.

    Returns:
        bytes: serialized program.
@@ -323,15 +323,15 @@ def _serialize_program(program):
@static_only
def serialize_persistables(feed_vars, fetch_vars, executor, **kwargs):
    """
    Serialize parameters using the given executor and the default main program according to feed_vars and fetch_vars.

    Args:
        feed_vars (Tensor | list[Tensor]): Tensors needed by inference.
        fetch_vars (Tensor | list[Tensor]): Tensors returned by inference.
        kwargs: Supported keys include ``program``. Note that kwargs is mainly used for backward compatibility.

            - program (Program): specify a program if you don't want to use the default main program.

    Returns:
        bytes: serialized parameters.
@@ -423,9 +423,11 @@ def _serialize_persistables(program, executor):
def save_to_file(path, content):
    """
    Save content to the given path.

    Args:
        path (str): Path to write content to.
        content (bytes): Content to write.

    Returns:
        None
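The documented behavior amounts to a plain binary write; a minimal standard-library sketch of it (not the Paddle implementation, and the file name is only an example):

```python
import os
import tempfile

def save_to_file_sketch(path, content):
    # Write raw bytes to `path` in binary mode, as the docstring describes.
    with open(path, "wb") as f:
        f.write(content)

path = os.path.join(tempfile.mkdtemp(), "model.pdmodel")
save_to_file_sketch(path, b"serialized-program-bytes")
print(os.path.getsize(path))  # 24
```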
@@ -461,15 +463,15 @@ def save_inference_model(
):
    """
    Save the current model and its parameters to the given path. For example,
    given ``path_prefix = "PATH/modelname"``, after invoking
    ``save_inference_model(path_prefix, feed_vars, fetch_vars, executor)``,
    you will find two files named ``modelname.pdmodel`` and ``modelname.pdiparams``
    under ``PATH``, which represent your model and parameters respectively.

    Args:
        path_prefix (str): Directory path to save the model + model name without suffix.
        feed_vars (Tensor | list[Tensor]): Variables needed by inference.
        fetch_vars (Tensor | list[Tensor]): Variables returned by inference.
        executor (Executor): The executor that saves the inference model. You can refer
            to :ref:`api_guide_executor_en` for more details.
        kwargs: Supported keys include ``program`` and ``clip_extra``. Note that kwargs is mainly used for backward compatibility.
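The naming scheme described above is plain string concatenation on ``path_prefix``; a small sketch using the docstring's own ``PATH/modelname`` example (not a real path):

```python
import os

path_prefix = "PATH/modelname"
model_file = path_prefix + ".pdmodel"      # model structure
params_file = path_prefix + ".pdiparams"   # parameters

print(os.path.dirname(model_file))    # PATH
print(os.path.basename(model_file))   # modelname.pdmodel
print(os.path.basename(params_file))  # modelname.pdiparams
```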
@@ -551,7 +553,6 @@ def save_inference_model(
@static_only
def deserialize_program(data):
    """
    Deserialize the given data to a program.
@@ -598,7 +599,6 @@ def deserialize_program(data):
@static_only
def deserialize_persistables(program, data, executor):
    """
    Deserialize the given data to parameters according to the given program and executor.
@@ -704,8 +704,10 @@ def deserialize_persistables(program, data, executor):
def load_from_file(path):
    """
    Load a file in binary mode.

    Args:
        path (str): Path of an existing file.

    Returns:
        bytes: Content of the file.
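``load_from_file`` is the read-side counterpart of ``save_to_file``; the round trip can be sketched with standard-library file IO (a sketch of the documented behavior, not the Paddle implementation):

```python
import os
import tempfile

def load_from_file_sketch(path):
    # Read the whole file back as raw bytes, as the docstring describes.
    with open(path, "rb") as f:
        return f.read()

path = os.path.join(tempfile.mkdtemp(), "params.pdiparams")
with open(path, "wb") as f:
    f.write(b"\x00\x01\x02")

content = load_from_file_sketch(path)
print(content == b"\x00\x01\x02")  # True
```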
@@ -739,7 +741,6 @@ def load_from_file(path):
@static_only
def load_inference_model(path_prefix, executor, **kwargs):
    """
    Load an inference model from the given path. With this API, you can get the model
    structure (Inference Program) and the model parameters.
@@ -750,9 +751,11 @@ def load_inference_model(path_prefix, executor, **kwargs):
            - Set to None when reading the model from memory.
        executor (Executor): The executor to run for loading the inference model.
            See :ref:`api_guide_executor_en` for more details about it.
        kwargs: Supported keys include ``model_filename`` and ``params_filename``. Note that kwargs is mainly used for backward compatibility.

            - model_filename (str): specify model_filename if you don't want to use the default name.
            - params_filename (str): specify params_filename if you don't want to use the default name.

    Returns:
        list: The return of this API is a list with three elements:
...
@@ -2343,7 +2343,7 @@ def deform_conv2d(
            float32, float64.
        offset (Tensor): The input coordinate offset of the deformable convolution layer.
            A Tensor with type float32, float64.
        mask (Tensor): The input mask of the deformable convolution layer.
            A Tensor with type float32, float64. It should be None when you use
            deformable convolution v1.
        num_filters (int): The number of filters. It is the same as the output
@@ -2377,7 +2377,7 @@ def deform_conv2d(
            deformable conv will create ParamAttr as weight_attr.
            If the Initializer of the weight_attr is not set, the parameter is
            initialized with :math:`Normal(0.0, std)`, and the
            :math:`std` is :math:`(\frac{2.0 }{filter\_elem\_num})^{0.5}`. Default: None.
        bias_attr (ParamAttr|bool, optional): The parameter attribute for the bias of
            the deformable conv layer. If it is set to False, no bias will be added
            to the output units. If it is set to None or one attribute of ParamAttr, conv2d
@@ -2385,9 +2385,9 @@ def deform_conv2d(
            is not set, the bias is initialized to zero. Default: None.
        name (str, optional): For details, please refer to :ref:`api_guide_Name`.
            Generally, no setting is required. Default: None.

    Returns:
        Tensor: The tensor storing the deformable convolution result. A Tensor with type float32, float64.

    Examples:
        .. code-block:: python
...
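The default weight initialization above draws from :math:`Normal(0.0, std)` with :math:`std = (2.0/filter\_elem\_num)^{0.5}`. For concrete numbers (the 3x3 kernel with 8 channels below is an assumed example, not from the docstring), the computation is:

```python
# filter_elem_num is the number of elements in one filter; the example
# treats it as kernel_h * kernel_w * num_channels (illustrative values).
kernel_h, kernel_w, num_channels = 3, 3, 8
filter_elem_num = kernel_h * kernel_w * num_channels  # 72
std = (2.0 / filter_elem_num) ** 0.5  # (2/72)**0.5 == 1/6
print(round(std, 6))  # 0.166667
```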
@@ -20,6 +20,7 @@ __all__ = ['get_include', 'get_lib']
def get_include():
    """
    Get the directory containing the PaddlePaddle C++ header files.

    Returns:
        The directory as a string.
@@ -38,6 +39,7 @@ def get_include():
def get_lib():
    """
    Get the directory containing libpaddle_framework.

    Returns:
        The directory as a string.
...