Unverified commit 92121d17, authored by co63oc, committed by GitHub

Fix typos, test=document_fix (#53916)

Parent 65ce6886
@@ -315,7 +315,7 @@ def enable_operator_stats_collection():
     """
     Enable to collect the number of operators for different data types.
     The statistical data are categorized according to four data types, namely
-    float32, float16, bfloat16 and others. This funciton is used in pair with
+    float32, float16, bfloat16 and others. This function is used in pair with
     the corresponding disable function.

     Examples:
@@ -351,7 +351,7 @@ def enable_operator_stats_collection():
 def disable_operator_stats_collection():
     """
     Disable the collection the number of operators for different data types.
-    This funciton is used in pair with the corresponding enable function.
+    This function is used in pair with the corresponding enable function.
     The statistical data are categorized according to four data types, namely
     float32, float16, bfloat16 and others, and will be printed after the
     function call.
...
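For context, the pair of functions this hunk documents is used like so; a minimal sketch, assuming they are exposed as paddle.amp.debugging.enable_operator_stats_collection / disable_operator_stats_collection as in recent Paddle releases:

import paddle

# Count operators by dtype for everything executed between the two calls.
paddle.amp.debugging.enable_operator_stats_collection()
x = paddle.rand([4, 4], dtype='float32')
y = paddle.matmul(x, x)
# Prints the collected counts, bucketed into float32/float16/bfloat16/others.
paddle.amp.debugging.disable_operator_stats_collection()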
@@ -135,7 +135,7 @@ def XPUPlace(dev_id):
 def get_cudnn_version():
     """
-    This funciton return the version of cudnn. the retuen value is int which represents the
+    This function return the version of cudnn. the retuen value is int which represents the
     cudnn version. For example, if it return 7600, it represents the version of cudnn is 7.6.

     Returns:
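A short sketch of decoding the return value described here (public path assumed to be paddle.device.get_cudnn_version):

import paddle

version = paddle.device.get_cudnn_version()  # e.g. 7600 for cuDNN 7.6
if version is not None:  # None is assumed here for builds without cuDNN
    print(f"cuDNN {version // 1000}.{(version % 1000) // 100}")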
@@ -270,7 +270,7 @@ def set_device(device):
 def get_device():
     """
-    This funciton can get the current global device of the program is running.
+    This function can get the current global device of the program is running.
     It's a string which is like 'cpu', 'gpu:x', 'xpu:x' and 'npu:x'. if the global device is not
     set, it will return a string which is 'gpu:x' when cuda is avaliable or it
     will return a string which is 'cpu' when cuda is not avaliable.
...
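A sketch of the behavior this docstring describes (paths assumed per the paddle.device module):

import paddle

paddle.device.set_device('cpu')    # explicit choice for this example
print(paddle.device.get_device())  # -> 'cpu'; a default CUDA build would report 'gpu:0'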
@@ -2388,7 +2388,7 @@ class DistributedStrategy:
         """
         The workspace limit size in MB unit for choosing cuDNN convolution algorithms.
-        The inner funciton of cuDNN obtain the fastest suited algorithm that fits within this memory limit.
+        The inner function of cuDNN obtain the fastest suited algorithm that fits within this memory limit.
         Usually, large workspace size may lead to choose faster algorithms,
         but significant increasing memory workspace. Users need to trade-off between memory and speed.
         Default Value: 4000
...
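This hunk appears to document fleet's conv_workspace_size_limit knob; a hedged configuration sketch (the property name is inferred from the surrounding docstring, not shown in the hunk):

import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
# Let cuDNN search conv algorithms within a 1024 MB workspace; a larger limit
# may admit faster algorithms at the cost of extra device memory (default 4000).
strategy.conv_workspace_size_limit = 1024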
@@ -120,7 +120,7 @@ class Beta(exponential_family.ExponentialFamily):
         return paddle.exp(self.log_prob(value))

     def log_prob(self, value):
-        """Log probability density funciton evaluated at value
+        """Log probability density function evaluated at value

         Args:
             value (Tensor): Value to be evaluated
...
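Usage sketch for the method whose docstring is fixed here (constructor arguments assumed from paddle.distribution.Beta):

import paddle
from paddle.distribution import Beta

dist = Beta(alpha=paddle.to_tensor(2.0), beta=paddle.to_tensor(3.0))
value = paddle.to_tensor(0.5)
log_p = dist.log_prob(value)  # log-density at 0.5
# prob == exp(log_prob), exactly as the context line above shows.
assert paddle.allclose(paddle.exp(log_p), dist.prob(value))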
@@ -73,7 +73,7 @@ def register_kl(cls_p, cls_q):
     functions registered by ``register_kl``, according to multi-dispatch pattern.
     If an implemention function is found, it will return the result, otherwise,
     it will raise ``NotImplementError`` exception. Users can register
-    implemention funciton by the decorator.
+    implemention function by the decorator.

     Args:
         cls_p (Distribution): The Distribution type of Instance p. Subclass derived from ``Distribution``.
...
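A hypothetical registration showing the multi-dispatch pattern the docstring describes (Paddle already ships a (Normal, Normal) implementation; this merely illustrates the decorator):

import paddle
from paddle.distribution import Normal, register_kl, kl_divergence

@register_kl(Normal, Normal)  # kl_divergence(p, q) will dispatch here
def _kl_normal_normal(p, q):
    var_ratio = (p.scale / q.scale).pow(2)
    t1 = ((p.loc - q.loc) / q.scale).pow(2)
    return 0.5 * (var_ratio + t1 - 1 - var_ratio.log())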
@@ -66,7 +66,7 @@ class Transform:
     Suppose :math:`X` is a K-dimensional random variable with probability
     density function :math:`p_X(x)`. A new random variable :math:`Y = f(X)` may
-    be defined by transforming :math:`X` with a suitably well-behaved funciton
+    be defined by transforming :math:`X` with a suitably well-behaved function
     :math:`f`. It suffices for what follows to note that if `f` is one-to-one and
     its inverse :math:`f^{-1}` have a well-defined Jacobian, then the density of
     :math:`Y` is
...
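The hunk truncates just before the displayed equation; for reference, the standard change-of-variables formula this passage leads up to is

    p_Y(y) = p_X(f^{-1}(y)) \left| \det J_{f^{-1}}(y) \right|

i.e. the density of X evaluated at the preimage of y, scaled by the absolute Jacobian determinant of the inverse map.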
@@ -1234,7 +1234,7 @@ class LambdaDecay(_LearningRateEpochDecay):
     :api_attr: imperative

     Sets the learning rate of ``optimizer`` to the initial lr times a multiplicative factor, and this multiplicative
-    factor is computed by function ``lr_lambda`` . ``lr_lambda`` is funciton which receives ``epoch`` .
+    factor is computed by function ``lr_lambda`` . ``lr_lambda`` is function which receives ``epoch`` .

     The algorithm can be described as the code below.
...
@@ -44,7 +44,7 @@ def _clear_multiprocess_queue_set():
 def _cleanup():
     # NOTE: inter-process Queue shared memory objects clear function
     _clear_multiprocess_queue_set()
-    # NOTE: main process memory map files clear funciton
+    # NOTE: main process memory map files clear function
     core._cleanup_mmap_fds()
...
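A self-contained sketch of the exit-time cleanup pattern these comments describe (a generic illustration, not Paddle's actual internals):

import atexit

_shared_maps = []  # stand-in registry of mmap objects to release

def _demo_cleanup():
    # Release OS-backed resources explicitly; the interpreter does not
    # reliably unmap shared memory on its own at exit.
    for m in _shared_maps:
        m.close()

atexit.register(_demo_cleanup)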
@@ -25,7 +25,7 @@ TEST(CC, IMPORT_PY) {
   ASSERT_FALSE(PyRun_SimpleString("import paddle"));
   ASSERT_FALSE(PyRun_SimpleString("print(paddle.to_tensor(1))"));

-  // 2. C/C++ Run Python funciton
+  // 2. C/C++ Run Python function
   PyRun_SimpleString("import sys");
   PyRun_SimpleString("import os");
   PyRun_SimpleString("sys.path.append(os.getcwd())");
...
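The elided tail of this test presumably imports a module from the working directory and calls into it; a hypothetical companion module of the kind such an embedded-interpreter test would load (module and function names invented for illustration):

# test_module.py (hypothetical)
import paddle

def run():
    # Reachable from C++ via PyRun_SimpleString("import test_module; test_module.run()")
    return paddle.to_tensor([1.0, 2.0]).sum()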
@@ -45,7 +45,7 @@ def _clear_multiprocess_queue_set():
 def _cleanup():
     # NOTE: inter-process Queue shared memory objects clear function
     _clear_multiprocess_queue_set()
-    # NOTE: main process memory map files clear funciton
+    # NOTE: main process memory map files clear function
     core._cleanup_mmap_fds()
...
@@ -1138,7 +1138,7 @@ class StepDecay(LRScheduler):
 class LambdaDecay(LRScheduler):
     """
-    Sets the learning rate of ``optimizer`` by function ``lr_lambda`` . ``lr_lambda`` is funciton which receives ``epoch`` .
+    Sets the learning rate of ``optimizer`` by function ``lr_lambda`` . ``lr_lambda`` is function which receives ``epoch`` .

     The algorithm can be described as the code below.
...
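Usage sketch for the scheduler documented in this hunk (signature per paddle.optimizer.lr.LambdaDecay):

import paddle

# lr at epoch n is 0.5 * 0.95**n; lr_lambda receives the epoch index.
scheduler = paddle.optimizer.lr.LambdaDecay(learning_rate=0.5,
                                            lr_lambda=lambda epoch: 0.95 ** epoch)
linear = paddle.nn.Linear(10, 10)
sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())
# ...train one epoch, then:
scheduler.step()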