BaiXuePrincess / Paddle (forked from PaddlePaddle / Paddle)
Commit ed2bb051 (unverified)

[CodeStyle][W191][E101] remove tabs in python files (#46288)

Authored by Nyakku Shigure on Sep 23, 2022; committed via GitHub on Sep 23, 2022.
Parent commit: 330b1a0a
Showing 13 changed files with 228 additions and 241 deletions (+228 −241).
python/paddle/distributed/fleet/dataset/dataset.py      +1   -1
python/paddle/incubate/nn/layer/fused_transformer.py    +1   -1
python/paddle/incubate/sparse/unary.py                  +1   -1
python/paddle/nn/functional/common.py                   +94  -94
python/paddle/nn/functional/loss.py                     +25  -26
python/paddle/nn/layer/loss.py                          +6   -6
python/paddle/nn/layer/norm.py                          +48  -48
python/paddle/optimizer/adagrad.py                      +13  -13
python/paddle/optimizer/adam.py                         +13  -13
python/paddle/optimizer/adamax.py                       +13  -13
python/paddle/optimizer/adamw.py                        +6   -6
python/paddle/tensor/attribute.py                       +0   -6
python/paddle/tensor/manipulation.py                    +7   -13
python/paddle/nn/functional/loss.py
@@ -473,7 +473,6 @@ def edit_distance(input,
     Returns:
         Tuple:
             distance(Tensor): edit distance result, its data type is float32, and its shape is (batch_size, 1).
             sequence_num(Tensor): sequence number, its data type is float32, and its shape is (1,).
@@ -3266,8 +3265,8 @@ def triplet_margin_with_distance_loss(input,
     distance_function (callable, optional): Quantifies the distance between two tensors. if not specified, 2 norm functions will be used.
-    margin (float, optional):Default: :math:`1`.
-        A nonnegative margin representing the minimum difference
-        between the positive and negative distances required for the loss to be 0.
+    margin (float, optional):
+        A nonnegative margin representing the minimum difference
+        between the positive and negative distances required for the loss to be 0.
+        Default value is :math:`1`.
     swap (bool, optional):The distance swap changes the negative distance to the swap distance (distance between positive samples
         and negative samples) if swap distance smaller than negative distance. Default: ``False``.
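For illustration only (not part of this commit): a minimal sketch of the `margin` and `swap` arguments documented above, assuming the `paddle.nn.functional.triplet_margin_with_distance_loss` API available around the Paddle 2.4 release.

import paddle
import paddle.nn.functional as F

anchor = paddle.rand([4, 16])
positive = paddle.rand([4, 16])
negative = paddle.rand([4, 16])
# With the default 2-norm distance, the loss reaches zero once
# d(anchor, positive) + margin <= d(anchor, negative).
loss = F.triplet_margin_with_distance_loss(anchor, positive, negative,
                                           margin=1.0, swap=False)
print(loss)  # scalar Tensor, since reduction defaults to 'mean'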
python/paddle/nn/layer/norm.py
@@ -210,7 +210,7 @@ Where `H` means height of feature map, `W` means width of feature map.
     If it is set to None or one attribute of ParamAttr, instance_norm
     will create ParamAttr as bias_attr, the name of bias can be set in ParamAttr.
     If the Initializer of the bias_attr is not set, the bias is initialized zero.
-    If it is set to False, will not create bias_attr. Default: None. `
+    If it is set to False, will not create bias_attr. Default: None.
     data_format(str, optional): Specify the input data format, could be "NCHW". Default: NCHW.
     name(str, optional): Name for the InstanceNorm, default is None. For more information, please refer to :ref:`api_guide_Name`..
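For illustration only (not part of this diff): a short sketch of the `bias_attr` and `data_format` options described in the docstring above, assuming the `paddle.nn.InstanceNorm2D` layer.

import paddle

x = paddle.rand([2, 3, 8, 8])  # NCHW layout, matching data_format="NCHW"
# bias_attr=False: no bias parameter is created, as described above
norm = paddle.nn.InstanceNorm2D(num_features=3, bias_attr=False, data_format="NCHW")
y = norm(x)
print(y.shape)  # [2, 3, 8, 8]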
python/paddle/optimizer/adagrad.py
@@ -45,18 +45,18 @@ class Adagrad(Optimizer):
         It can be a float value or a ``Variable`` with a float type.
     epsilon (float, optional): A small float value for numerical stability.
         The default value is 1e-06.
-    parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``. \
-        This parameter is required in dygraph mode. And you can specify different options for \
-        different parameter groups such as the learning rate, weight decay, etc, \
-        then the parameters are list of dict. Note that the learning_rate in paramter groups \
-        represents the scale of base learning_rate. \
+    parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
+        This parameter is required in dygraph mode. And you can specify different options for
+        different parameter groups such as the learning rate, weight decay, etc,
+        then the parameters are list of dict. Note that the learning_rate in paramter groups
+        represents the scale of base learning_rate.
         The default value is None in static mode, at this time all parameters will be updated.
-    weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-        It canbe a float value as coeff of L2 regularization or \
+    weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
+        It canbe a float value as coeff of L2 regularization or
         :ref:`api_paddle_regularizer_L1Decay`, :ref:`api_paddle_regularizer_L2Decay`.
-        If a parameter has set regularizer using :ref:`api_paddle_fluid_param_attr_aramAttr` already, \
-        the regularization setting here in optimizer will be ignored for this parameter. \
-        Otherwise, the regularization setting here in optimizer will take effect. \
+        If a parameter has set regularizer using :ref:`api_paddle_fluid_param_attr_aramAttr` already,
+        the regularization setting here in optimizer will be ignored for this parameter.
+        Otherwise, the regularization setting here in optimizer will take effect.
         Default None, meaning there is no regularization.
     grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
         some derived class of ``GradientClipBase`` . There are three cliping strategies,
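As an aside (not part of the commit): a sketch of the `parameters`-as-list-of-dict form described in the docstring above, where a per-group `learning_rate` scales the base rate; this assumes the `paddle.optimizer.Adagrad` constructor of Paddle 2.x.

import paddle

linear_1 = paddle.nn.Linear(10, 10)
linear_2 = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.Adagrad(
    learning_rate=0.1,            # base learning rate
    parameters=[
        {'params': linear_1.parameters()},
        {'params': linear_2.parameters(),
         'learning_rate': 0.5,    # this group trains at 0.5 * 0.1
         'weight_decay': 0.001},
    ],
    weight_decay=0.01)            # default L2 coeff for groups that do not override it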
python/paddle/optimizer/adam.py
@@ -67,18 +67,18 @@ class Adam(Optimizer):
     epsilon (float|Tensor, optional): A small float value for numerical stability.
         It should be a float number or a Tensor with shape [1] and data type as float32.
         The default value is 1e-08.
-    parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``. \
-        This parameter is required in dygraph mode. And you can specify different options for \
-        different parameter groups such as the learning rate, weight decay, etc, \
-        then the parameters are list of dict. Note that the learning_rate in paramter groups \
-        represents the scale of base learning_rate. \
+    parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
+        This parameter is required in dygraph mode. And you can specify different options for
+        different parameter groups such as the learning rate, weight decay, etc,
+        then the parameters are list of dict. Note that the learning_rate in paramter groups
+        represents the scale of base learning_rate.
         The default value is None in static mode, at this time all parameters will be updated.
-    weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-        It canbe a float value as coeff of L2 regularization or \
+    weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
+        It canbe a float value as coeff of L2 regularization or
         :ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
-        If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already, \
-        the regularization setting here in optimizer will be ignored for this parameter. \
-        Otherwise, the regularization setting here in optimizer will take effect. \
+        If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already,
+        the regularization setting here in optimizer will be ignored for this parameter.
+        Otherwise, the regularization setting here in optimizer will take effect.
         Default None, meaning there is no regularization.
     grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
         some derived class of ``GradientClipBase`` . There are three cliping strategies
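For illustration only: a sketch of the `grad_clip` argument mentioned above, assuming `paddle.nn.ClipGradByGlobalNorm` (one of the clipping strategies in Paddle 2.x) together with `paddle.optimizer.Adam`.

import paddle

model = paddle.nn.Linear(10, 1)
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)  # an instance of a GradientClipBase subclass
opt = paddle.optimizer.Adam(
    learning_rate=0.001,
    parameters=model.parameters(),
    grad_clip=clip)

x = paddle.rand([4, 10])
loss = model(x).mean()
loss.backward()
opt.step()        # gradients are clipped before the parameter update
opt.clear_grad()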
python/paddle/optimizer/adamax.py
@@ -57,18 +57,18 @@ class Adamax(Optimizer):
         The default value is 0.999.
     epsilon (float, optional): A small float value for numerical stability.
         The default value is 1e-08.
-    parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``. \
-        This parameter is required in dygraph mode. And you can specify different options for \
-        different parameter groups such as the learning rate, weight decay, etc, \
-        then the parameters are list of dict. Note that the learning_rate in paramter groups \
-        represents the scale of base learning_rate. \
+    parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
+        This parameter is required in dygraph mode. And you can specify different options for
+        different parameter groups such as the learning rate, weight decay, etc,
+        then the parameters are list of dict. Note that the learning_rate in paramter groups
+        represents the scale of base learning_rate.
         The default value is None in static mode, at this time all parameters will be updated.
-    weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
-        It canbe a float value as coeff of L2 regularization or \
+    weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
+        It canbe a float value as coeff of L2 regularization or
         :ref:`api_fluid_regularizer_L1Decay`, :ref:`api_fluid_regularizer_L2Decay`.
-        If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already, \
-        the regularization setting here in optimizer will be ignored for this parameter. \
-        Otherwise, the regularization setting here in optimizer will take effect. \
+        If a parameter has set regularizer using :ref:`api_fluid_ParamAttr` already,
+        the regularization setting here in optimizer will be ignored for this parameter.
+        Otherwise, the regularization setting here in optimizer will take effect.
         Default None, meaning there is no regularization.
     grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
         some derived class of ``GradientClipBase`` . There are three cliping strategies
python/paddle/optimizer/adamw.py
@@ -54,11 +54,11 @@ class AdamW(Optimizer):
 Args:
     learning_rate (float|LRScheduler, optional): The learning rate used to update ``Parameter``.
         It can be a float value or a LRScheduler. The default value is 0.001.
-    parameters (list|tuple, optional): List/Tuple of ``Tensor`` names to update to minimize ``loss``. \
-        This parameter is required in dygraph mode. And you can specify different options for \
-        different parameter groups such as the learning rate, weight decay, etc, \
-        then the parameters are list of dict. Note that the learning_rate in paramter groups \
-        represents the scale of base learning_rate. \
+    parameters (list|tuple, optional): List/Tuple of ``Tensor`` names to update to minimize ``loss``.
+        This parameter is required in dygraph mode. And you can specify different options for
+        different parameter groups such as the learning rate, weight decay, etc,
+        then the parameters are list of dict. Note that the learning_rate in paramter groups
+        represents the scale of base learning_rate.
         The default value is None in static mode, at this time all parameters will be updated.
     beta1 (float|Tensor, optional): The exponential decay rate for the 1st moment estimates.
         It should be a float number or a Tensor with shape [1] and data type as float32.
python/paddle/tensor/attribute.py
@@ -63,12 +63,6 @@ def rank(input):
 def shape(input):
     """
-    :alias_main: paddle.shape
-    :alias: paddle.shape,paddle.tensor.shape,paddle.tensor.attribute.shape
-    :old_api: paddle.fluid.layers.shape
     **Shape Layer**
     Get the shape of the input.
     .. code-block:: text
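For illustration only: what the `shape` function documented above returns, assuming it is exposed as `paddle.shape` (as the removed `:alias_main:` line suggests).

import paddle

x = paddle.rand([2, 3, 4])
s = paddle.shape(x)      # a 1-D integer Tensor holding [2, 3, 4]
print(s.numpy())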
python/paddle/tensor/manipulation.py
@@ -416,12 +416,6 @@ def transpose(x, perm, name=None):
 def unstack(x, axis=0, num=None):
     """
-    :alias_main: paddle.unstack
-    :alias: paddle.unstack,paddle.tensor.unstack,paddle.tensor.manipulation.unstack
-    :old_api: paddle.fluid.layers.unstack
     **UnStack Layer**
     This layer unstacks input Tensor :code:`x` into several Tensors along :code:`axis`.
     If :code:`axis` < 0, it would be replaced with :code:`axis+rank(x)`.
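As an aside (not part of the commit): unstacking along `axis`, assuming the function is exposed as `paddle.unstack` (as the removed `:alias_main:` line suggests).

import paddle

x = paddle.rand([3, 4, 5])
ys = paddle.unstack(x, axis=0)      # list of 3 Tensors, each of shape [4, 5]
print(len(ys), ys[0].shape)
# num can pin the expected number of outputs when the axis size is known
ys2 = paddle.unstack(x, axis=1, num=4)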