Greenplum / Annotated Deep Learning Paper Implementations
Commit bfe089fd
Authored Mar 04, 2021 by KeshSam; committed by Varuna Jayasiri on Mar 05, 2021

made a few changes

Parent: 0f029892
Showing 1 changed file with 4 additions and 4 deletions (+4 −4)
labml_nn/rl/ppo/__init__.py  +4 −4
...
@@ -11,9 +11,9 @@ This is a [PyTorch](https://pytorch.org) implementation of
 [Proximal Policy Optimization - PPO](https://arxiv.org/abs/1707.06347).
 PPO is a policy gradient method for reinforcement learning.
-Simple policy gradient methods one do a single gradient update per sample (or a set of samples).
-Doing multiple gradient steps for a singe sample causes problems
-because the policy deviates too much producing a bad policy.
+Simple policy gradient methods do a single gradient update per sample (or a set of samples).
+Doing multiple gradient steps for a single sample causes problems
+because the policy deviates too much, producing a bad policy.
 PPO lets us do multiple gradient updates per sample by trying to keep the
 policy close to the policy that was used to sample data.
 It does so by clipping gradient flow if the updated policy
...
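The docstring text edited in this hunk describes PPO's clipping mechanism. As a rough illustration of that idea, here is a minimal PyTorch sketch of a clipped policy-gradient loss; the function name and tensor names (`clipped_ppo_loss`, `log_pi`, `sampled_log_pi`, `advantage`, `clip`) are assumptions made for this example and are not taken from the file under diff.

```python
import torch

def clipped_ppo_loss(log_pi: torch.Tensor,
                     sampled_log_pi: torch.Tensor,
                     advantage: torch.Tensor,
                     clip: float = 0.2) -> torch.Tensor:
    """Clipped PPO policy loss (a sketch; names are illustrative).

    log_pi:         log pi_theta(a|s) under the current policy
    sampled_log_pi: log pi_theta_old(a|s) recorded when the data was sampled
    advantage:      advantage estimates for the sampled actions
    """
    # Probability ratio r = pi_theta(a|s) / pi_theta_old(a|s)
    ratio = torch.exp(log_pi - sampled_log_pi)
    # Clip the ratio to [1 - clip, 1 + clip]
    clipped_ratio = ratio.clamp(min=1.0 - clip, max=1.0 + clip)
    # Take the pessimistic minimum of the clipped and unclipped objectives,
    # so gradients stop flowing once the updated policy drifts too far
    # from the policy that sampled the data
    policy_reward = torch.min(ratio * advantage, clipped_ratio * advantage)
    # Negate: we maximize the surrogate reward by minimizing the loss
    return -policy_reward.mean()
```

With `clip = 0.2` (the default in the PPO paper), ratios pushed outside [0.8, 1.2] in the direction the surrogate favors no longer contribute gradient, which is what "clipping gradient flow" refers to above.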
@@ -107,7 +107,7 @@ class ClippedPPOLoss(Module):
 Then we assume $d^\pi_\theta(s)$ and $d^\pi_{\theta_{OLD}}(s)$ are similar.
 The error we introduce to $J(\pi_\theta) - J(\pi_{\theta_{OLD}})$
-by this assumtion is bound by the KL divergence between
+by this assumption is bound by the KL divergence between
 $\pi_\theta$ and $\pi_{\theta_{OLD}}$.
 [Constrained Policy Optimization](https://arxiv.org/abs/1705.10528)
 shows the proof of this. I haven't read it.
...
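For readers following the math in the hunk above: assuming $d^\pi_\theta(s) \approx d^\pi_{\theta_{OLD}}(s)$ is what permits the standard trust-region surrogate that PPO's clipping approximates. A sketch of that formulation (standard TRPO/PPO background, not part of this commit; $\delta$ denotes the trust-region size and $\hat{A}$ the advantage estimate):

$$
\max_{\theta} \;
\mathbb{E}_{s,\,a \sim \pi_{\theta_{OLD}}}\!\left[
  \frac{\pi_\theta(a \mid s)}{\pi_{\theta_{OLD}}(a \mid s)}\,
  \hat{A}^{\pi_{\theta_{OLD}}}(s, a)
\right]
\quad \text{subject to} \quad
\mathbb{E}_{s \sim \pi_{\theta_{OLD}}}\!\left[
  D_{KL}\!\big(\pi_{\theta_{OLD}}(\cdot \mid s)\,\|\,\pi_\theta(\cdot \mid s)\big)
\right] \le \delta
$$

The KL term here is exactly the quantity that bounds the error introduced by the assumption discussed in the hunk.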