Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)
Commit 4b7fe610 (unverified)
Authored by caozhou on Jul 27, 2022; committed via GitHub on Jul 27, 2022
add adagrad and rmsprop yaml (#44631)
Parent: 16506d8e
Changes: 3 files changed, 39 additions and 3 deletions (+39 −3)
paddle/phi/api/yaml/legacy_api.yaml (+22 −0)
python/paddle/fluid/optimizer.py (+17 −2)
python/paddle/fluid/tests/unittests/test_adagrad_op.py (+0 −1)
paddle/phi/api/yaml/legacy_api.yaml @ 4b7fe610
@@ -48,6 +48,17 @@
   kernel :
     func : adadelta

+- api : adagrad_
+  args : (Tensor param, Tensor grad, Tensor moment, Tensor learning_rate, float epsilon)
+  output : Tensor(param_out), Tensor(moment_out)
+  infer_meta :
+    func : AdagradInferMeta
+  kernel :
+    func : adagrad {dense, dense, dense, dense -> dense, dense}
+           adagrad_dense_param_sparse_grad {dense, selected_rows, dense, dense -> dense, dense}
+  data_type : param
+  inplace : (param -> param_out), (moment -> moment_out)
+
 - api : adam_
   args : (Tensor param, Tensor grad, Tensor learning_rate, Tensor moment1, Tensor moment2, Tensor beta1_pow, Tensor beta2_pow, Tensor master_param, Tensor skip_update, Scalar beta1, Scalar beta2, Scalar epsilon, bool lazy_mode, int64_t min_row_size_to_use_multithread, bool multi_precision, bool use_global_beta_pow)
   output : Tensor(param_out), Tensor(moment1_out), Tensor(moment2_out), Tensor(beta1_pow_out), Tensor(beta2_pow_out), Tensor(master_param_outs)
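For context, the `adagrad_` entry above maps the kernel signature directly onto the standard Adagrad update: accumulate the squared gradient, then take a per-element scaled step. A rough NumPy sketch (names mirror the YAML args; the exact kernel semantics are an assumption here and should be checked against the C++ kernel):

```python
import numpy as np

def adagrad_update(param, grad, moment, learning_rate, epsilon):
    # Accumulate squared gradients (moment_out), then scale the step
    # per element by the accumulated magnitude (param_out).
    moment_out = moment + grad * grad
    param_out = param - learning_rate * grad / (np.sqrt(moment_out) + epsilon)
    return param_out, moment_out

# Tiny usage example with made-up values.
param = np.array([1.0, 2.0])
grad = np.array([0.5, -0.5])
moment = np.zeros(2)
p, m = adagrad_update(param, grad, moment, learning_rate=0.1, epsilon=1e-6)
```

Note how the `inplace` clause `(param -> param_out), (moment -> moment_out)` matches this shape: both the parameter and the accumulator are updated in place by the real op.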
@@ -1851,6 +1862,17 @@
     func : reverse_array
   backward : reverse_array_grad

+- api : rmsprop_
+  args : (Tensor param, Tensor mean_square, Tensor grad, Tensor moment, Tensor learning_rate, Tensor mean_grad, float epsilon, float decay, float momentum, bool centered)
+  output : Tensor(param_out), Tensor(moment_out), Tensor(mean_square_out), Tensor(mean_grad_out)
+  infer_meta :
+    func : RmspropInferMeta
+  kernel :
+    func : rmsprop {dense, dense, dense, dense, dense, dense -> dense, dense, dense, dense}
+           rmsprop_dense_param_sparse_grad {dense, dense, selected_rows, dense, dense, dense -> dense, dense, dense, dense}
+  optional : mean_grad
+  inplace : (param -> param_out), (moment -> moment_out), (mean_square -> mean_square_out), (mean_grad -> mean_grad_out)
+
 - api : roi_align
   args : (Tensor x, Tensor boxes, Tensor boxes_num, int pooled_height, int pooled_width, float spatial_scale, int sampling_ratio, bool aligned)
   output : Tensor
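The `optional : mean_grad` clause in the `rmsprop_` entry reflects the algorithm: only the centered variant tracks a running mean of the gradient. A rough NumPy sketch of the standard RMSProp step with both variants (names mirror the YAML args; this is an illustrative reconstruction, not the actual kernel):

```python
import numpy as np

def rmsprop_update(param, mean_square, grad, moment, learning_rate,
                   mean_grad=None, epsilon=1e-10, decay=0.9,
                   momentum=0.0, centered=False):
    # Exponential moving average of squared gradients.
    mean_square_out = decay * mean_square + (1 - decay) * grad * grad
    if centered:
        # Centered RMSProp also tracks the EMA of the gradient itself,
        # which is why mean_grad is optional in the YAML entry above.
        mean_grad_out = decay * mean_grad + (1 - decay) * grad
        denom = np.sqrt(mean_square_out - mean_grad_out ** 2 + epsilon)
    else:
        mean_grad_out = mean_grad
        denom = np.sqrt(mean_square_out + epsilon)
    moment_out = momentum * moment + learning_rate * grad / denom
    param_out = param - moment_out
    return param_out, moment_out, mean_square_out, mean_grad_out

# Usage with made-up values, non-centered then centered.
p, mom, ms, mg = rmsprop_update(np.array([1.0]), np.zeros(1), np.array([1.0]),
                                np.zeros(1), learning_rate=0.1)
p2, mom2, ms2, mg2 = rmsprop_update(np.array([1.0]), np.zeros(1), np.array([1.0]),
                                    np.zeros(1), 0.1, mean_grad=np.zeros(1),
                                    centered=True)
```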
python/paddle/fluid/optimizer.py @ 4b7fe610
@@ -2279,11 +2279,18 @@ class AdagradOptimizer(Optimizer):
        moment_acc = self._get_accumulator(self._moment_acc_str,
                                           param_and_grad[0])
        if framework._non_static_mode():
            if in_dygraph_mode():
                _C_ops.final_state_adagrad_(param_and_grad[0],
                                            param_and_grad[1], moment_acc,
                                            self._create_param_lr(param_and_grad),
                                            self._epsilon)
                return None
            elif _in_legacy_dygraph():
                _C_ops.adagrad(param_and_grad[0], param_and_grad[1],
                               moment_acc,
                               self._create_param_lr(param_and_grad),
                               param_and_grad[0], moment_acc, "epsilon",
                               self._epsilon)
                return None
        else:
            # Create the adagrad optimizer op
            adagrad_op = block.append_op(
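The branching added above follows the dispatch pattern used throughout `paddle.fluid` optimizers in this era: prefer the new final-state dygraph op, fall back to the legacy dygraph op, otherwise append a static-graph operator. A standalone sketch of that three-way dispatch (the flags and callables here are stand-ins for `framework._non_static_mode()`, `in_dygraph_mode()`, `_in_legacy_dygraph()`, and the `_C_ops` bindings, not Paddle APIs):

```python
def apply_op(non_static_mode, new_dygraph_mode, legacy_dygraph_mode,
             final_state_op, legacy_op, append_static_op):
    # Dynamic graph: call a C++ op binding directly and return nothing.
    if non_static_mode:
        if new_dygraph_mode:
            final_state_op()      # e.g. _C_ops.final_state_adagrad_(...)
            return None
        elif legacy_dygraph_mode:
            legacy_op()           # e.g. _C_ops.adagrad(...)
            return None
    else:
        # Static graph: build an operator into the program instead.
        return append_static_op()  # e.g. block.append_op(...)
```

The key difference between the two dygraph paths is the calling convention: the final-state op takes typed positional arguments matching the YAML signature, while the legacy op interleaves attribute names and values (e.g. `"epsilon", self._epsilon`).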
@@ -3374,7 +3381,14 @@ class RMSPropOptimizer(Optimizer):
                                                  param_and_grad[0])
            mean_grad_acc = self._get_accumulator(self._mean_grad_acc_str,
                                                  param_and_grad[0])
            if framework._non_static_mode():
                if in_dygraph_mode():
                    _C_ops.final_state_rmsprop_(param_and_grad[0],
                                                mean_square_acc,
                                                param_and_grad[1],
                                                momentum_acc,
                                                self._create_param_lr(param_and_grad),
                                                mean_grad_acc,
                                                self._epsilon, self._rho,
                                                self._momentum, self._centered)
                    return None
                elif _in_legacy_dygraph():
                    _C_ops.rmsprop(param_and_grad[0], mean_square_acc,
                                   self._create_param_lr(param_and_grad),
                                   param_and_grad[1], momentum_acc,
                                   param_and_grad[0],

@@ -3382,6 +3396,7 @@ class RMSPropOptimizer(Optimizer):
                                   "epsilon", self._epsilon,
                                   "decay", self._rho,
                                   "momentum", self._momentum,
                                   "centered", self._centered)
                    return None
            else:
                rmsprop_op = block.append_op(
                    type=self.type,
python/paddle/fluid/tests/unittests/test_adagrad_op.py @ 4b7fe610
@@ -29,7 +29,6 @@ class TestAdagradOp1(OpTest):
    def setUp(self):
        self.op_type = "adagrad"
        param = np.random.random((123, 321)).astype("float32")
        grad = np.random.random((123, 321)).astype("float32")
        moment = np.zeros((123, 321)).astype("float32")