CSDN 技术社区 / ai / chatCSDN

Commit 30ccef27, authored on March 17, 2023 by u010280923.
update ppo model
Parent: ba2760dc

1 changed file with 4 additions and 20 deletions (+4, −20): src/rlhf/ppo.py
--- a/src/rlhf/ppo.py
+++ b/src/rlhf/ppo.py
@@ -29,8 +29,6 @@ from src.rlhf.reward import RewardModel
 from src.rlhf.optimizer import get_optimizer
 from src.rlhf.utils import masked_mean, eval_decorator
-from accelerate import Accelerator
-
 
 # actor critic - rwkv with lora
 PPOActionCriticReturn = namedtuple('PPOActionCriticReturn', [
@@ -254,15 +252,12 @@ def clipped_value_loss(values, rewards, old_values, clip):
 class RLHF(nn.Module):
     def __init__(
         self,
-        args,
-        accelerate_kwargs: dict = {}
+        args
     ):
         super().__init__()
 
         self.args = args
-        self.accelerate = Accelerator(**accelerate_kwargs)
-
 
         # 加载 RWKV 模型
         rwkv = RWKV(args)
@@ -299,19 +294,12 @@ class RLHF(nn.Module):
         reward_model.load(args.load_rm_model)
         self.reward_model = reward_model.eval()
 
-    def print(self, msg):
-        return self.accelerate.print(msg)
-
     def save(self, filepath = './checkpoint.pt'):
         torch.save(self.actor_critic.state_dict(), filepath)
 
     def load(self, filepath = './checkpoint.pt'):
         state_dict = torch.load(filepath)
         self.actor_critic.load_state_dict(state_dict)
 
-    @property
-    def device(self):
-        return self.accelerate.device
-
     def configure_optimizers(self):
         args = self.args
@@ -383,11 +371,7 @@ class RLHF(nn.Module):
         assert prompt.ndim == 1, 'only one prompt allowed at a time for now'
         prompt = repeat(prompt, 'n -> b n', b = num_samples)
 
-        actor_critic = self.accelerate.unwrap_model(self.actor_critic)
-        reward_model = self.accelerate.unwrap_model(self.reward_model)
-
-        actor_critic.eval()
-
+        self.actor_critic.eval()
         (
             actions,
             sequences,
@@ -395,7 +379,7 @@ class RLHF(nn.Module):
             prompt_mask,
             action_logits,
             _
-        ) = actor_critic.generate(
+        ) = self.actor_critic.generate(
             prompt,
             *args,
             max_seq_len = max_seq_len,
@@ -403,7 +387,7 @@ class RLHF(nn.Module):
             **kwargs
         )
 
-        rewards = reward_model(
+        rewards = self.reward_model(
             sequences,
             prompt_mask = prompt_mask,
             mask = mask,
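The second hunk header references `clipped_value_loss(values, rewards, old_values, clip)`, whose body is not shown in this diff. For orientation, a standard PPO clipped value loss clamps the new value estimate to stay within `clip` of the old estimate and takes the worse (larger) squared error, making the critic update pessimistic. A minimal pure-Python sketch of that idea — an assumption about the semantics, not the repository's implementation:

```python
def clipped_value_loss(values, rewards, old_values, clip):
    """PPO-style clipped value loss over plain Python lists (scalar sketch)."""
    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    losses = []
    for v, r, ov in zip(values, rewards, old_values):
        # clip the new value estimate to within `clip` of the old one
        v_clipped = ov + clamp(v - ov, -clip, clip)
        # take the worse of the clipped and unclipped squared errors
        losses.append(max((v - r) ** 2, (v_clipped - r) ** 2))
    return sum(losses) / len(losses)
```

For example, with `values=[1.0]`, `rewards=[0.0]`, `old_values=[0.0]`, `clip=0.2`, the clipped estimate is 0.2, and the unclipped error (1.0) dominates the clipped one (0.04). The real code operates on torch tensors, but the arithmetic is the same.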