PaddlePaddle / Paddle

Commit 92121d17

Authored May 18, 2023 by co63oc; committed via GitHub on May 18, 2023.
Fix typos, test=document_fix (#53916)
Parent commit: 65ce6886

Showing 11 changed files with 13 additions and 13 deletions (+13 −13)
python/paddle/amp/debugging.py (+2 −2)
python/paddle/device/__init__.py (+2 −2)
python/paddle/distributed/fleet/base/distributed_strategy.py (+1 −1)
python/paddle/distribution/beta.py (+1 −1)
python/paddle/distribution/kl.py (+1 −1)
python/paddle/distribution/transform.py (+1 −1)
python/paddle/fluid/dygraph/learning_rate_scheduler.py (+1 −1)
python/paddle/fluid/multiprocess_utils.py (+1 −1)
python/paddle/fluid/tests/unittests/cc_imp_py_test.cc (+1 −1)
python/paddle/io/multiprocess_utils.py (+1 −1)
python/paddle/optimizer/lr.py (+1 −1)
python/paddle/amp/debugging.py

@@ -315,7 +315,7 @@ def enable_operator_stats_collection():
     """
     Enable to collect the number of operators for different data types.
     The statistical data are categorized according to four data types, namely
-    float32, float16, bfloat16 and others. This funciton is used in pair with
+    float32, float16, bfloat16 and others. This function is used in pair with
     the corresponding disable function.

     Examples:

@@ -351,7 +351,7 @@ def enable_operator_stats_collection():
 def disable_operator_stats_collection():
     """
     Disable the collection the number of operators for different data types.
-    This funciton is used in pair with the corresponding enable function.
+    This function is used in pair with the corresponding enable function.
     The statistical data are categorized according to four data types, namely
     float32, float16, bfloat16 and others, and will be printed after the
     function call.
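The docstring above describes bucketing operator calls into four dtype categories. As a rough illustration of the bookkeeping involved (this is not Paddle's implementation; all names below are made up), a collector might look like:

```python
from collections import Counter

# Hypothetical stand-in for per-dtype operator statistics: bucket each
# operator call into float32/float16/bfloat16/others, as the docstring names.
KNOWN_DTYPES = {"float32", "float16", "bfloat16"}

def categorize(dtype):
    return dtype if dtype in KNOWN_DTYPES else "others"

def collect_operator_stats(op_calls):
    """op_calls: iterable of (op_name, dtype) pairs."""
    stats = Counter()
    for _op_name, dtype in op_calls:
        stats[categorize(dtype)] += 1
    return stats

calls = [("matmul", "float16"), ("relu", "float16"),
         ("softmax", "float32"), ("cast", "int64")]
stats = collect_operator_stats(calls)
print(dict(stats))  # {'float16': 2, 'float32': 1, 'others': 1}
```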
python/paddle/device/__init__.py

@@ -135,7 +135,7 @@ def XPUPlace(dev_id):
 def get_cudnn_version():
     """
-    This funciton return the version of cudnn. the retuen value is int which represents the
+    This function return the version of cudnn. the retuen value is int which represents the
     cudnn version. For example, if it return 7600, it represents the version of cudnn is 7.6.

     Returns:

@@ -270,7 +270,7 @@ def set_device(device):
 def get_device():
     """
-    This funciton can get the current global device of the program is running.
+    This function can get the current global device of the program is running.
     It's a string which is like 'cpu', 'gpu:x', 'xpu:x' and 'npu:x'. if the global device is not
     set, it will return a string which is 'gpu:x' when cuda is avaliable or it
     will return a string which is 'cpu' when cuda is not avaliable.
```
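The `get_cudnn_version` docstring encodes the version as a single integer, e.g. 7600 for cuDNN 7.6. A small sketch of decoding that integer, following only the docstring's own example (the helper name is ours, not a Paddle API):

```python
def decode_cudnn_version(version):
    """Turn the integer from get_cudnn_version() into (major, minor),
    per the docstring's example: 7600 -> cuDNN 7.6."""
    major = version // 1000
    minor = (version % 1000) // 100
    return major, minor

print(decode_cudnn_version(7600))  # (7, 6)
```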
python/paddle/distributed/fleet/base/distributed_strategy.py

@@ -2388,7 +2388,7 @@ class DistributedStrategy:
     """
     The workspace limit size in MB unit for choosing cuDNN convolution algorithms.
-    The inner funciton of cuDNN obtain the fastest suited algorithm that fits within this memory limit.
+    The inner function of cuDNN obtain the fastest suited algorithm that fits within this memory limit.
     Usually, large workspace size may lead to choose faster algorithms,
     but significant increasing memory workspace. Users need to trade-off between memory and speed.
     Default Value: 4000
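The docstring describes picking the fastest algorithm that fits within the workspace limit. A toy sketch of that selection rule (illustrative only; the algorithm names and timings below are invented, and cuDNN's real heuristics are more involved):

```python
# Pick the fastest convolution algorithm whose workspace fits within the
# configured limit, mirroring the trade-off conv_workspace_size_limit controls.
def pick_algorithm(algos, workspace_limit_mb):
    """algos: list of (name, time_ms, workspace_mb) tuples."""
    feasible = [a for a in algos if a[2] <= workspace_limit_mb]
    if not feasible:
        raise RuntimeError("no algorithm fits in the workspace limit")
    return min(feasible, key=lambda a: a[1])

algos = [("implicit_gemm", 5.0, 0),      # slow, no workspace
         ("winograd", 2.0, 3500),        # faster, moderate workspace
         ("fft_tiling", 1.5, 6000)]      # fastest, large workspace

print(pick_algorithm(algos, 4000)[0])  # winograd: fastest that fits in 4000 MB
print(pick_algorithm(algos, 8000)[0])  # fft_tiling becomes eligible
```

A larger limit can only widen the feasible set, which is why raising it may select faster (but more memory-hungry) algorithms.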
python/paddle/distribution/beta.py

@@ -120,7 +120,7 @@ class Beta(exponential_family.ExponentialFamily):
         return paddle.exp(self.log_prob(value))

     def log_prob(self, value):
-        """Log probability density funciton evaluated at value
+        """Log probability density function evaluated at value

         Args:
             value (Tensor): Value to be evaluated
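The line of code in the diff shows the usual relationship `prob(value) == exp(log_prob(value))`. A standalone sketch of that relationship for the Beta density (pure Python, not Paddle's API; the helper names are ours):

```python
import math

def beta_log_prob(value, alpha, beta):
    """Log of the Beta(alpha, beta) density at value in (0, 1)."""
    # log B(alpha, beta) via log-gamma, for numerical stability
    log_norm = math.lgamma(alpha) + math.lgamma(beta) - math.lgamma(alpha + beta)
    return (alpha - 1) * math.log(value) + (beta - 1) * math.log(1 - value) - log_norm

def beta_prob(value, alpha, beta):
    # prob is exp(log_prob), exactly as in the diff's return statement
    return math.exp(beta_log_prob(value, alpha, beta))

# Beta(2, 2) density at 0.5 is 6 * 0.5 * 0.5 = 1.5
print(round(beta_prob(0.5, 2.0, 2.0), 6))  # 1.5
```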
python/paddle/distribution/kl.py

@@ -73,7 +73,7 @@ def register_kl(cls_p, cls_q):
     functions registered by ``register_kl``, according to multi-dispatch pattern.
     If an implemention function is found, it will return the result, otherwise,
     it will raise ``NotImplementError`` exception. Users can register
-    implemention funciton by the decorator.
+    implemention function by the decorator.

     Args:
         cls_p (Distribution): The Distribution type of Instance p. Subclass derived from ``Distribution``.
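A minimal sketch of the multi-dispatch pattern the docstring describes: a decorator keys implementations by the `(type(p), type(q))` pair, and the lookup raises `NotImplementedError` when no match is registered. (Simplified: Paddle's real dispatcher also resolves subclasses; the toy `Normal` class and registry here are ours.)

```python
import math

_REGISTRY = {}  # (cls_p, cls_q) -> implementation function

def register_kl(cls_p, cls_q):
    def decorator(fn):
        _REGISTRY[(cls_p, cls_q)] = fn
        return fn
    return decorator

def kl_divergence(p, q):
    fn = _REGISTRY.get((type(p), type(q)))
    if fn is None:
        raise NotImplementedError
    return fn(p, q)

class Normal:  # toy distribution type for illustration
    def __init__(self, loc, scale):
        self.loc, self.scale = loc, scale

@register_kl(Normal, Normal)
def _kl_normal_normal(p, q):
    # Closed-form KL between two univariate normals
    var_ratio = (p.scale / q.scale) ** 2
    t1 = ((p.loc - q.loc) / q.scale) ** 2
    return 0.5 * (var_ratio + t1 - 1 - math.log(var_ratio))

print(kl_divergence(Normal(0.0, 1.0), Normal(0.0, 1.0)))  # 0.0
```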
python/paddle/distribution/transform.py

@@ -66,7 +66,7 @@ class Transform:
     Suppose :math:`X` is a K-dimensional random variable with probability
     density function :math:`p_X(x)`. A new random variable :math:`Y = f(X)` may
-    be defined by transforming :math:`X` with a suitably well-behaved funciton
+    be defined by transforming :math:`X` with a suitably well-behaved function
     :math:`f`. It suffices for what follows to note that if `f` is one-to-one and
     its inverse :math:`f^{-1}` have a well-defined Jacobian, then the density of
     :math:`Y` is
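The density of `Y` referred to above is the change-of-variables rule: `p_Y(y) = p_X(f^{-1}(y)) * |det J_{f^{-1}}(y)|`. A one-dimensional numeric check (the example is ours, not from the source): for `X ~ Uniform(0, 1)` and `f(x) = 2x`, `Y` should be `Uniform(0, 2)` with density 0.5.

```python
def p_X(x):
    # density of Uniform(0, 1)
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_inv(y):
    # inverse of f(x) = 2x
    return y / 2.0

def f_inv_deriv(y):
    # Jacobian of the inverse in 1-D is just the derivative
    return 0.5

def p_Y(y):
    # change-of-variables: p_Y(y) = p_X(f^{-1}(y)) * |d f^{-1}(y) / dy|
    return p_X(f_inv(y)) * abs(f_inv_deriv(y))

print(p_Y(1.0))  # 0.5 -- Y is Uniform(0, 2)
```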
python/paddle/fluid/dygraph/learning_rate_scheduler.py

@@ -1234,7 +1234,7 @@ class LambdaDecay(_LearningRateEpochDecay):
     :api_attr: imperative

     Sets the learning rate of ``optimizer`` to the initial lr times a multiplicative factor, and this multiplicative
-    factor is computed by function ``lr_lambda`` . ``lr_lambda`` is funciton which receives ``epoch`` .
+    factor is computed by function ``lr_lambda`` . ``lr_lambda`` is function which receives ``epoch`` .

     The algorithm can be described as the code below.
python/paddle/fluid/multiprocess_utils.py

@@ -44,7 +44,7 @@ def _clear_multiprocess_queue_set():
 def _cleanup():
     # NOTE: inter-process Queue shared memory objects clear function
     _clear_multiprocess_queue_set()
-    # NOTE: main process memory map files clear funciton
+    # NOTE: main process memory map files clear function
     core._cleanup_mmap_fds()
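The `_cleanup` hook above drains shared queues and unlinks memory-mapped files at shutdown. A minimal sketch of that pattern (the queue-set name mirrors the module's, but this toy uses an in-process `queue.Queue` and omits the mmap step, so it is illustrative only):

```python
import atexit
import queue

# Hypothetical module-level set of queues to drain at exit,
# mirroring the pattern in multiprocess_utils.
multiprocess_queue_set = set()

def _clear_multiprocess_queue_set():
    """Drain and forget every registered queue so its buffers are released."""
    for q in list(multiprocess_queue_set):
        while not q.empty():
            q.get_nowait()
    multiprocess_queue_set.clear()

def _cleanup():
    # inter-process Queue shared memory objects clear function
    _clear_multiprocess_queue_set()
    # (the real module also closes memory-mapped file descriptors here)

# Register so cleanup runs automatically at interpreter shutdown.
atexit.register(_cleanup)

q = queue.Queue()
q.put(1)
multiprocess_queue_set.add(q)
_cleanup()  # can also be invoked explicitly, as the real module allows
print(len(multiprocess_queue_set))  # 0
```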
python/paddle/fluid/tests/unittests/cc_imp_py_test.cc

@@ -25,7 +25,7 @@ TEST(CC, IMPORT_PY) {
   ASSERT_FALSE(PyRun_SimpleString("import paddle"));
   ASSERT_FALSE(PyRun_SimpleString("print(paddle.to_tensor(1))"));

-  // 2. C/C++ Run Python funciton
+  // 2. C/C++ Run Python function
   PyRun_SimpleString("import sys");
   PyRun_SimpleString("import os");
   PyRun_SimpleString("sys.path.append(os.getcwd())");
python/paddle/io/multiprocess_utils.py

@@ -45,7 +45,7 @@ def _clear_multiprocess_queue_set():
 def _cleanup():
     # NOTE: inter-process Queue shared memory objects clear function
     _clear_multiprocess_queue_set()
-    # NOTE: main process memory map files clear funciton
+    # NOTE: main process memory map files clear function
     core._cleanup_mmap_fds()
python/paddle/optimizer/lr.py

@@ -1138,7 +1138,7 @@ class StepDecay(LRScheduler):
 class LambdaDecay(LRScheduler):
     """
-    Sets the learning rate of ``optimizer`` by function ``lr_lambda`` . ``lr_lambda`` is funciton which receives ``epoch`` .
+    Sets the learning rate of ``optimizer`` by function ``lr_lambda`` . ``lr_lambda`` is function which receives ``epoch`` .

     The algorithm can be described as the code below.
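The rule the `LambdaDecay` docstring states is simply `lr = initial_lr * lr_lambda(epoch)`. A standalone toy version of that schedule (the class name echoes `paddle.optimizer.lr.LambdaDecay` but this sketch is ours and not Paddle's implementation):

```python
class LambdaDecaySketch:
    """Learning rate = base lr times a factor computed by lr_lambda(epoch)."""

    def __init__(self, learning_rate, lr_lambda):
        self.base_lr = learning_rate
        self.lr_lambda = lr_lambda
        self.epoch = 0

    def get_lr(self):
        return self.base_lr * self.lr_lambda(self.epoch)

    def step(self):
        self.epoch += 1

# Exponential-style decay: factor 0.95 ** epoch
sched = LambdaDecaySketch(learning_rate=0.5, lr_lambda=lambda e: 0.95 ** e)
lrs = []
for _ in range(3):
    lrs.append(round(sched.get_lr(), 6))
    sched.step()
print(lrs)  # [0.5, 0.475, 0.45125]
```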