Unverified commit 6439e91d authored by Zhibao Li, committed by GitHub

Fix the English docs of paddle.static.load_from_file and other APIs (#49042)

* fix api docs format problems 121-131

* fix English docs #49042

* resolve conflict; test=document_fix
Co-authored-by: Ligoml <limengliu@tiaozhan.com>
Parent 4b803a4a
......@@ -24,7 +24,6 @@ __all__ = []
@static_only
def data(name, shape, dtype=None, lod_level=0):
"""
**Data Layer**
This function creates a variable on the global block. The global variable
can be accessed by all the following operators in the graph. The variable
......@@ -36,15 +35,14 @@ def data(name, shape, dtype=None, lod_level=0):
name (str): The name/alias of the variable, see :ref:`api_guide_Name`
for more details.
shape (list|tuple): List|Tuple of integers declaring the shape. You can
set "None" or -1 at a dimension to indicate the dimension can be of any
size. For example, it is useful to set changeable batch size as "None" or -1.
set None or -1 at a dimension to indicate the dimension can be of any
size. For example, it is useful to set changeable batch size as None or -1.
dtype (np.dtype|str, optional): The type of the data. Supported
dtype: bool, float16, float32, float64, int8, int16, int32, int64,
uint8. Default: None. When `dtype` is not set, the dtype will get
from the global dtype by `paddle.get_default_dtype()`.
lod_level (int, optional): The LoD level of the LoDTensor. Usually users
don't have to set this value. For more details about when and how to
use LoD level, see :ref:`user_guide_lod_tensor` . Default: 0.
don't have to set this value. Default: 0.
Returns:
Variable: The global variable that gives access to the data.
......
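For reference, a minimal usage sketch of ``paddle.static.data`` as described in the docstring above; the variable names and shapes are illustrative and not part of this diff:

.. code-block:: python

    import paddle

    paddle.enable_static()

    # Declare placeholders whose first (batch) dimension is left open,
    # matching the "None or -1" convention described above.
    x = paddle.static.data(name='x', shape=[None, 784], dtype='float32')
    y = paddle.static.data(name='y', shape=[None, 1], dtype='int64')

    # The returned variables live on the global block; None is stored as -1
    # in the static graph and is resolved when data is fed at run time.
    print(x.shape)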
......@@ -128,14 +128,13 @@ def _clone_var_in_block(block, var):
def normalize_program(program, feed_vars, fetch_vars):
"""
:api_attr: Static Graph
Normalize/Optimize a program according to feed_vars and fetch_vars.
Args:
program(Program): Specify a program you want to optimize.
feed_vars(Variable | list[Variable]): Variables needed by inference.
fetch_vars(Variable | list[Variable]): Variables returned by inference.
feed_vars(Tensor | list[Tensor]): Variables needed by inference.
fetch_vars(Tensor | list[Tensor]): Variables returned by inference.
Returns:
Program: Normalized/Optimized program.
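A hedged sketch of how ``normalize_program`` is typically called; the fc network and variable names are placeholders used only for illustration:

.. code-block:: python

    import paddle

    paddle.enable_static()

    image = paddle.static.data(name='img', shape=[None, 784], dtype='float32')
    predict = paddle.static.nn.fc(image, 10, activation='softmax')

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())

    # Prune and normalize the default main program so that it keeps only
    # the ops needed to compute `predict` from `image`.
    program = paddle.static.normalize_program(
        paddle.static.default_main_program(),
        feed_vars=[image],
        fetch_vars=[predict],
    )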
......@@ -233,6 +232,7 @@ def normalize_program(program, feed_vars, fetch_vars):
def is_persistable(var):
"""
Check whether the given variable is persistable.
Args:
......@@ -264,14 +264,14 @@ def is_persistable(var):
@static_only
def serialize_program(feed_vars, fetch_vars, **kwargs):
"""
:api_attr: Static Graph
Serialize default main program according to feed_vars and fetch_vars.
Args:
feed_vars(Variable | list[Variable]): Variables needed by inference.
fetch_vars(Variable | list[Variable]): Variables returned by inference.
kwargs: Supported keys including 'program'.Attention please, kwargs is used for backward compatibility mainly.
feed_vars(Tensor | list[Tensor]): Tensor needed by inference.
fetch_vars(Tensor | list[Tensor]): Tensor returned by inference.
kwargs: Supported keys including ``program``. Attention please, kwargs is used for backward compatibility mainly.
- program(Program): specify a program if you don't want to use default main program.
Returns:
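A minimal sketch of ``serialize_program``, assuming a toy fc network built on the default main program (names are illustrative):

.. code-block:: python

    import paddle

    paddle.enable_static()

    image = paddle.static.data(name='img', shape=[None, 784], dtype='float32')
    predict = paddle.static.nn.fc(image, 10, activation='softmax')

    # Serialize the (pruned) default main program to bytes; the result can
    # later be written out with save_to_file or restored with
    # deserialize_program.
    serialized_program = paddle.static.serialize_program([image], [predict])
    print(type(serialized_program))  # bytes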
......@@ -323,14 +323,14 @@ def _serialize_program(program):
@static_only
def serialize_persistables(feed_vars, fetch_vars, executor, **kwargs):
"""
:api_attr: Static Graph
Serialize parameters using given executor and default main program according to feed_vars and fetch_vars.
Args:
feed_vars(Variable | list[Variable]): Variables needed by inference.
fetch_vars(Variable | list[Variable]): Variables returned by inference.
kwargs: Supported keys including 'program'.Attention please, kwargs is used for backward compatibility mainly.
feed_vars(Tensor | list[Tensor]): Tensor needed by inference.
fetch_vars(Tensor | list[Tensor]): Tensor returned by inference.
kwargs: Supported keys including ``program``. Attention please, kwargs is used for backward compatibility mainly.
- program(Program): specify a program if you don't want to use default main program.
Returns:
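A minimal sketch of ``serialize_persistables``, again assuming a toy fc network whose parameters have been initialized by the startup program:

.. code-block:: python

    import paddle

    paddle.enable_static()

    image = paddle.static.data(name='img', shape=[None, 784], dtype='float32')
    predict = paddle.static.nn.fc(image, 10, activation='softmax')

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())

    # Serialize all persistable parameters reachable from the feed/fetch
    # variables into a single bytes object.
    serialized_params = paddle.static.serialize_persistables([image], [predict], exe)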
......@@ -423,9 +423,11 @@ def _serialize_persistables(program, executor):
def save_to_file(path, content):
"""
Save content to given path.
Args:
path(str): Path to write content to.
content(bytes): Content to write.
Returns:
None
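A short sketch of ``save_to_file``; in practice ``content`` would come from ``serialize_program`` or ``serialize_persistables``, but a literal bytes value and an illustrative path keep the example self-contained:

.. code-block:: python

    import paddle

    # Both the content and the path are placeholders for illustration.
    content = b'example-bytes'
    paddle.static.save_to_file('./example.pdmodel', content)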
......@@ -461,15 +463,15 @@ def save_inference_model(
):
"""
Save current model and its parameters to given path. i.e.
Given path_prefix = "/path/to/modelname", after invoking
save_inference_model(path_prefix, feed_vars, fetch_vars, executor),
you will find two files named modelname.pdmodel and modelname.pdiparams
under "/path/to", which represent your model and parameters respectively.
Given ``path_prefix = "PATH/modelname"``, after invoking
``save_inference_model(path_prefix, feed_vars, fetch_vars, executor)``,
you will find two files named ``modelname.pdmodel`` and ``modelname.pdiparams``
under ``PATH``, which represent your model and parameters respectively.
Args:
path_prefix(str): Directory path to save model + model name without suffix.
feed_vars(Variable | list[Variable]): Variables needed by inference.
fetch_vars(Variable | list[Variable]): Variables returned by inference.
feed_vars(Tensor | list[Tensor]): Variables needed by inference.
fetch_vars(Tensor | list[Tensor]): Variables returned by inference.
executor(Executor): The executor that saves the inference model. You can refer
to :ref:`api_guide_executor_en` for more details.
kwargs: Supported keys including 'program' and "clip_extra". Attention please, kwargs is used for backward compatibility mainly.
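A minimal end-to-end sketch of ``save_inference_model``; the path prefix and the fc network are illustrative:

.. code-block:: python

    import paddle

    paddle.enable_static()

    path_prefix = './infer_model/modelname'  # illustrative path

    image = paddle.static.data(name='img', shape=[None, 784], dtype='float32')
    predict = paddle.static.nn.fc(image, 10, activation='softmax')

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())

    # Produces modelname.pdmodel and modelname.pdiparams under ./infer_model.
    paddle.static.save_inference_model(path_prefix, [image], [predict], exe)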
......@@ -551,7 +553,6 @@ def save_inference_model(
@static_only
def deserialize_program(data):
"""
:api_attr: Static Graph
Deserialize given data to a program.
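A round-trip sketch of ``deserialize_program``: serialize the default main program to bytes, then rebuild a Program object from them (the network is a placeholder):

.. code-block:: python

    import paddle

    paddle.enable_static()

    image = paddle.static.data(name='img', shape=[None, 784], dtype='float32')
    predict = paddle.static.nn.fc(image, 10, activation='softmax')

    # Serialize, then reconstruct the Program from the bytes.
    data = paddle.static.serialize_program([image], [predict])
    program = paddle.static.deserialize_program(data)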
......@@ -598,7 +599,6 @@ def deserialize_program(data):
@static_only
def deserialize_persistables(program, data, executor):
"""
:api_attr: Static Graph
Deserialize given data to parameters according to given program and executor.
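A round-trip sketch of ``deserialize_persistables``: serialize the initialized parameters, then load the bytes back into the program through the executor (names are illustrative):

.. code-block:: python

    import paddle

    paddle.enable_static()

    image = paddle.static.data(name='img', shape=[None, 784], dtype='float32')
    predict = paddle.static.nn.fc(image, 10, activation='softmax')

    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())

    program = paddle.static.default_main_program()

    # Serialize the parameters to bytes, then restore them into the
    # variables of `program` via the executor.
    data = paddle.static.serialize_persistables([image], [predict], exe)
    paddle.static.deserialize_persistables(program, data, exe)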
......@@ -704,8 +704,10 @@ def deserialize_persistables(program, data, executor):
def load_from_file(path):
"""
Load file in binary mode.
Args:
path(str): Path of an existed file.
Returns:
bytes: Content of file.
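A short sketch of ``load_from_file``; the file is first created with ``save_to_file`` so the example is self-contained, and both the path and content are illustrative:

.. code-block:: python

    import paddle

    # Write some bytes first so the file exists.
    paddle.static.save_to_file('./example.pdiparams', b'example-bytes')

    content = paddle.static.load_from_file('./example.pdiparams')
    print(type(content))  # bytes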
......@@ -739,7 +741,6 @@ def load_from_file(path):
@static_only
def load_inference_model(path_prefix, executor, **kwargs):
"""
:api_attr: Static Graph
Load inference model from a given path. By this API, you can get the model
structure(Inference Program) and model parameters.
......@@ -750,8 +751,10 @@ def load_inference_model(path_prefix, executor, **kwargs):
- Set to None when reading the model from memory.
executor(Executor): The executor to run for loading inference model.
See :ref:`api_guide_executor_en` for more details about it.
kwargs: Supported keys including 'model_filename', 'params_filename'.Attention please, kwargs is used for backward compatibility mainly.
kwargs: Supported keys including 'model_filename', 'params_filename'. Attention please, kwargs is used for backward compatibility mainly.
- model_filename(str): specify model_filename if you don't want to use default name.
- params_filename(str): specify params_filename if you don't want to use default name.
Returns:
......
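A minimal sketch of ``load_inference_model``, assuming a model was previously exported with ``save_inference_model`` to the (illustrative) prefix below:

.. code-block:: python

    import numpy as np
    import paddle

    paddle.enable_static()

    exe = paddle.static.Executor(paddle.CPUPlace())

    path_prefix = './infer_model/modelname'  # illustrative path
    [inference_program, feed_target_names, fetch_targets] = (
        paddle.static.load_inference_model(path_prefix, exe)
    )

    # Run the loaded inference program on a random input.
    tensor_img = np.random.rand(1, 784).astype('float32')
    results = exe.run(
        inference_program,
        feed={feed_target_names[0]: tensor_img},
        fetch_list=fetch_targets,
    )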
......@@ -2343,7 +2343,7 @@ def deform_conv2d(
float32, float64.
offset (Tensor): The input coordinate offset of deformable convolution layer.
A Tensor with type float32, float64.
mask (Tensor, Optional): The input mask of deformable convolution layer.
mask (Tensor): The input mask of deformable convolution layer.
A Tensor with type float32, float64. It should be None when you use
deformable convolution v1.
num_filters(int): The number of filter. It is as same as the output
......@@ -2377,7 +2377,7 @@ def deform_conv2d(
deformable conv will create ParamAttr as weight_attr.
If the Initializer of the weight_attr is not set, the parameter is
initialized with :math:`Normal(0.0, std)`, and the
:math:`std` is :math:`(\\frac{2.0 }{filter\_elem\_num})^{0.5}`. Default: None.
:math:`std` is :math:`(\frac{2.0 }{filter\_elem\_num})^{0.5}`. Default: None.
bias_attr (ParamAttr|bool, Optional): The parameter attribute for the bias of
deformable conv layer. If it is set to False, no bias will be added
to the output units. If it is set to None or one attribute of ParamAttr, conv2d
......@@ -2385,9 +2385,9 @@ def deform_conv2d(
is not set, the bias is initialized zero. Default: None.
name(str, Optional): For details, please refer to :ref:`api_guide_Name`.
Generally, no setting is required. Default: None.
Returns:
Tensor: The tensor storing the deformable convolution \
result. A Tensor with type float32, float64.
Tensor: The tensor storing the deformable convolution result. A Tensor with type float32, float64.
Examples:
.. code-block:: python
......
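Since the example block in this hunk is elided, here is a hedged sketch of deformable convolution v2 using the ``paddle.static.nn.deform_conv2d`` entry point this docstring belongs to; the shapes and names are illustrative:

.. code-block:: python

    import paddle

    paddle.enable_static()

    C_in, H_in, W_in = 3, 32, 32
    filter_size, deformable_groups = 3, 1

    data = paddle.static.data(
        name='data', shape=[None, C_in, H_in, W_in], dtype='float32'
    )
    offset = paddle.static.data(
        name='offset',
        shape=[None, 2 * deformable_groups * filter_size**2, H_in, W_in],
        dtype='float32',
    )
    mask = paddle.static.data(
        name='mask',
        shape=[None, deformable_groups * filter_size**2, H_in, W_in],
        dtype='float32',
    )

    # Deformable convolution v2 (with mask); pass mask=None for v1.
    out = paddle.static.nn.deform_conv2d(
        x=data,
        offset=offset,
        mask=mask,
        num_filters=2,
        filter_size=filter_size,
    )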
......@@ -20,6 +20,7 @@ __all__ = ['get_include', 'get_lib']
def get_include():
"""
Get the directory containing the PaddlePaddle C++ header files.
Returns:
The directory as string.
......@@ -38,6 +39,7 @@ def get_include():
def get_lib():
"""
Get the directory containing the libpaddle_framework.
Returns:
The directory as string.
......
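A usage sketch for the two helpers above, assuming they are the ``paddle.sysconfig`` entry points these docstrings appear to belong to (the module path is an assumption, not stated in this diff):

.. code-block:: python

    import paddle

    # Header directory, e.g. for compiling a custom C++ operator.
    include_dir = paddle.sysconfig.get_include()

    # Directory containing libpaddle_framework, for the linker search path.
    lib_dir = paddle.sysconfig.get_lib()

    print(include_dir, lib_dir)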