Commit 33516338 (unverified)
Authored Jul 25, 2019 by Bo Zhou; committed via GitHub, Jul 25, 2019

fix the compatibility issue in the A2C example. (#98)

* fix the compatibility issue
* fix the comment issue

Parent: d18f19a9
Showing 3 changed files with 7 additions and 25 deletions (+7 −25)
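All three files make the same refactor: AtariAgent's constructor now receives the whole config dict instead of individually unpacked hyper-parameters, so call sites no longer change whenever a hyper-parameter is added. A minimal, hypothetical sketch of the pattern (the stub class and the config values are illustrative, not PARL's real implementation):

```python
# Illustrative stub only: the real AtariAgent subclasses parl.Agent and
# needs an algorithm and a Paddle model to be constructed.
class AtariAgent:
    def __init__(self, algorithm, config):
        self.algorithm = algorithm
        # The agent pulls what it needs out of the shared config dict.
        self.obs_shape = config['obs_shape']

# Keys taken from the diff; the values here are made up for illustration.
config = {
    'obs_shape': (4, 84, 84),
    'vf_loss_coeff': 0.5,
    'start_lr': 0.001,
    'max_sample_steps': 10_000_000,
    'entropy_coeff_scheduler': [(0, -0.01)],
}

# New-style call site: one dict instead of several keyword arguments.
agent = AtariAgent(algorithm=None, config=config)
print(agent.obs_shape)  # (4, 84, 84)
```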
examples/A2C/actor.py       +1 −6
examples/A2C/atari_agent.py +5 −13
examples/A2C/learner.py     +1 −6
examples/A2C/actor.py (view file @ 33516338)

@@ -48,12 +48,7 @@ class Actor(object):
         model = AtariModel(act_dim)
         algorithm = parl.algorithms.A3C(
             model, vf_loss_coeff=config['vf_loss_coeff'])
-        self.agent = AtariAgent(
-            algorithm,
-            obs_shape=self.config['obs_shape'],
-            lr_scheduler=self.config['lr_scheduler'],
-            entropy_coeff_scheduler=self.config['entropy_coeff_scheduler'],
-        )
+        self.agent = AtariAgent(algorithm, config)

     def sample(self):
         sample_data = defaultdict(list)
examples/A2C/atari_agent.py (view file @ 33516338)

@@ -21,30 +21,22 @@ from parl.utils.scheduler import PiecewiseScheduler, LinearDecayScheduler

 class AtariAgent(parl.Agent):
-    def __init__(self, algorithm, obs_shape, lr_scheduler,
-                 entropy_coeff_scheduler):
+    def __init__(self, algorithm, config):
         """
         Args:
-            algorithm (`parl.Algorithm`): a2c algorithm
-            obs_shape (list/tuple): observation shape of atari environment
-            lr_scheduler (list/tuple): learning rate adjustment schedule: (train_step, learning_rate)
-            entropy_coeff_scheduler (list/tuple): coefficient of policy entropy adjustment schedule: (train_step, coefficient)
+            algorithm (`parl.Algorithm`): algorithm to be used in this agent.
+            config (dict): config file describing the training hyper-parameters(see a2c_config.py)
         """
-        assert isinstance(obs_shape, (list, tuple))
-        assert isinstance(lr_scheduler, (list, tuple))
-        assert isinstance(entropy_coeff_scheduler, (list, tuple))
-        self.obs_shape = obs_shape
-        self.lr_scheduler = lr_scheduler
-        self.entropy_coeff_scheduler = entropy_coeff_scheduler
+        self.obs_shape = config['obs_shape']
         super(AtariAgent, self).__init__(algorithm)
         self.lr_scheduler = LinearDecayScheduler(config['start_lr'],
                                                  config['max_sample_steps'])
         self.entropy_coeff_scheduler = PiecewiseScheduler(
-            self.entropy_coeff_scheduler)
+            config['entropy_coeff_scheduler'])

         exec_strategy = fluid.ExecutionStrategy()
         exec_strategy.use_experimental_executor = True
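The new `__init__` builds its two schedulers from `config` instead of storing raw lists. As a rough guide to what those schedulers do, here is a minimal, hypothetical re-implementation of their semantics (PARL's real classes live in `parl.utils.scheduler` and may differ in detail):

```python
# Hypothetical minimal re-implementations, for illustration only.

class LinearDecayScheduler:
    """Decay a value linearly from start_value toward 0 over max_steps."""

    def __init__(self, start_value, max_steps):
        self.start_value = start_value
        self.max_steps = max_steps
        self.cur_step = 0

    def step(self, step_num=1):
        self.cur_step = min(self.cur_step + step_num, self.max_steps)
        frac = 1.0 - self.cur_step / float(self.max_steps)
        return self.start_value * frac


class PiecewiseScheduler:
    """Return the value of the latest (boundary_step, value) pair reached."""

    def __init__(self, scheduler_list):
        self.scheduler_list = sorted(scheduler_list)
        self.cur_step = 0

    def step(self, step_num=1):
        self.cur_step += step_num
        value = self.scheduler_list[0][1]
        for boundary, v in self.scheduler_list:
            if self.cur_step >= boundary:
                value = v
        return value


# Keys taken from the diff above; values are illustrative.
lr_scheduler = LinearDecayScheduler(0.001, 1000)
entropy_scheduler = PiecewiseScheduler([(0, -0.01), (1000, -0.005)])
print(lr_scheduler.step(500))   # halfway through the linear decay
print(entropy_scheduler.step()) # still in the first piece
```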
examples/A2C/learner.py (view file @ 33516338)

@@ -47,12 +47,7 @@ class Learner(object):
         model = AtariModel(act_dim)
         algorithm = parl.algorithms.A3C(
             model, vf_loss_coeff=config['vf_loss_coeff'])
-        self.agent = AtariAgent(
-            algorithm,
-            obs_shape=self.config['obs_shape'],
-            lr_scheduler=self.config['lr_scheduler'],
-            entropy_coeff_scheduler=self.config['entropy_coeff_scheduler'],
-        )
+        self.agent = AtariAgent(algorithm, config)

         if machine_info.is_gpu_available():
             assert get_gpu_count() == 1, 'Only support training in single GPU,\