PPG¶
PPGPolicy¶
- class ding.policy.ppg.PPGPolicy(cfg: dict, model: Optional[Union[type, torch.nn.modules.module.Module]] = None, enable_field: Optional[List[str]] = None)[source]¶
- Overview:
Policy class of PPG algorithm.
- Interface:
_init_learn, _data_preprocess_learn, _forward_learn, _state_dict_learn, _load_state_dict_learn, _init_collect, _forward_collect, _process_transition, _get_train_sample, _get_batch_size, _init_eval, _forward_eval, default_model, _monitor_vars_learn, learn_aux
- Config:
| ID | Symbol | Type | Default Value | Description | Other (Shape) |
|----|--------|------|---------------|-------------|---------------|
| 1 | type | str | ppg | RL policy register name, refer to registry POLICY_REGISTRY | this arg is optional, a placeholder |
| 2 | cuda | bool | False | Whether to use cuda for network | this arg can be different from modes |
| 3 | on_policy | bool | True | Whether the RL algorithm is on-policy or off-policy | |
| 4 | priority | bool | False | Whether to use priority (PER) | priority sample, update priority |
| 5 | priority_IS_weight | bool | False | Whether to use Importance Sampling Weight to correct biased update. | IS weight |
| 6 | learn.update_per_collect | int | 5 | How many updates (iterations) to train after the collector's one collection. Only valid in serial training | this arg can vary from env to env. Bigger val means more off-policy |
| 7 | learn.value_weight | float | 1.0 | The loss weight of value network | policy network weight is set to 1 |
| 8 | learn.entropy_weight | float | 0.01 | The loss weight of entropy regularization | policy network weight is set to 1 |
| 9 | learn.clip_ratio | float | 0.2 | PPO clip ratio | |
| 10 | learn.adv_norm | bool | False | Whether to use advantage norm in a whole training batch | |
| 11 | learn.aux_freq | int | 5 | The frequency (in normal update times) of auxiliary phase training | |
| 12 | learn.aux_train_epoch | int | 6 | The training epochs of the auxiliary phase | |
| 13 | learn.aux_bc_weight | int | 1 | The loss weight of behavioral_cloning in the auxiliary phase | |
| 14 | collect.discount_factor | float | 0.99 | Reward's future discount factor, aka. gamma | may be 1 when sparse reward env |
| 15 | collect.gae_lambda | float | 0.95 | GAE lambda factor for the balance of bias and variance (1-step td and mc) | |
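For orientation, the table above maps onto a nested config dict in the usual DI-engine style. The sketch below is illustrative only: key names and values follow the table's defaults, and how this dict is merged into a full experiment config is an assumption not shown here.

```python
# Minimal PPG config sketch assembled from the table above; values are the
# listed defaults. Merging into a full experiment config is not shown.
ppg_config = dict(
    type='ppg',
    cuda=False,
    on_policy=True,
    priority=False,
    priority_IS_weight=False,
    learn=dict(
        update_per_collect=5,
        value_weight=1.0,
        entropy_weight=0.01,
        clip_ratio=0.2,
        adv_norm=False,
        aux_freq=5,
        aux_train_epoch=6,
        aux_bc_weight=1,
    ),
    collect=dict(
        discount_factor=0.99,
        gae_lambda=0.95,
    ),
)
```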
- _data_preprocess_learn(data: List[Any]) → dict [source]¶
- Overview:
Preprocess the data to fit the required data format for learning, including collate (stack data into a batch), ignoring done flags (for some fake-terminate envs), preparing the loss weight per training sample, and moving CPU tensors to CUDA.
- Arguments:
- data (List[Dict[str, Any]]): The data collected from the collect function
- Returns:
- data (Dict[str, Any]): The processed data, including at least ['done', 'weight']
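To make those steps concrete, here is a generic sketch of collate, done handling, default loss weight, and device transfer. The helper name preprocess is hypothetical; this is not the library's actual implementation.

```python
import torch

def preprocess(data, use_cuda=False):
    # Collate: stack a list of per-sample dicts into one batch dict of tensors.
    batch = {k: torch.stack([torch.as_tensor(d[k]) for d in data]) for k in data[0]}
    batch['done'] = batch['done'].float()  # done flags as float for loss masking
    if 'weight' not in batch:
        batch['weight'] = None             # per-sample loss weight defaults to None
    if use_cuda:                           # cpu tensor to cuda
        batch = {k: v.cuda() if isinstance(v, torch.Tensor) else v for k, v in batch.items()}
    return batch
```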
- _forward_collect(data: dict) → dict [source]¶
- Overview:
Forward function of collect mode.
- Arguments:
- data (Dict[str, Any]): Dict type data, stacked env data for predicting policy_output (action); values are torch.Tensor, np.ndarray, or dict/list combinations, and keys are env ids indicated by integers
- Returns:
- output (Dict[int, Any]): Dict type data, including at least the inferred action according to the input obs
- ReturnsKeys:
necessary: action
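A hypothetical call showing the expected input/output format. It assumes policy is an instantiated PPGPolicy; the 4-dim observation and any fields beyond the documented 'action' are assumptions.

```python
import torch

# Keys are env ids, values are per-env observations (shapes assumed).
data = {0: torch.randn(4), 1: torch.randn(4)}
output = policy._forward_collect(data)
# output is e.g. {0: {'action': ...}, 1: {'action': ...}};
# only 'action' is documented as necessary.
```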
- _forward_eval(data: dict) → dict [source]¶
- Overview:
Forward function of eval mode, similar to self._forward_collect.
- Arguments:
- data (Dict[str, Any]): Dict type data, stacked env data for predicting policy_output (action); values are torch.Tensor, np.ndarray, or dict/list combinations, and keys are env ids indicated by integers
- Returns:
- output (Dict[int, Any]): The dict of predicted actions for interaction with the env
- ReturnsKeys:
necessary: action
- _forward_learn(data: dict) → Dict[str, Any] [source]¶
- Overview:
Forward and backward function of learn mode.
- Arguments:
- data (Dict[str, Any]): Dict type data, a batch of data for training; values are torch.Tensor, np.ndarray, or dict/list combinations
- Returns:
- info_dict (Dict[str, Any]): Dict type data, an info dict indicating training results, which will be recorded in the text log and tensorboard; values are python scalars or lists of scalars
- ArgumentsKeys:
necessary: ‘obs’, ‘logit’, ‘action’, ‘value’, ‘reward’, ‘done’
- ReturnsKeys:
necessary: current_lr, total_loss, policy_loss, value_loss, entropy_loss, adv_abs_max, approx_kl, clipfrac, aux_value_loss, auxiliary_loss, behavioral_cloning_loss
- current_lr (float): Current learning rate
- total_loss (float): The calculated loss
- policy_loss (float): The policy (actor) loss of PPG
- value_loss (float): The value (critic) loss of PPG
- entropy_loss (float): The entropy loss
- auxiliary_loss (float): The auxiliary loss; the value function loss serves as the auxiliary objective, thereby sharing features between the policy and value function while minimizing distortions to the policy
- aux_value_loss (float): The auxiliary value loss; the value network is additionally trained during the auxiliary phase, and this is the value loss used for that training
- behavioral_cloning_loss (float): The behavioral cloning loss, used to optimize the auxiliary objective while otherwise preserving the original policy
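A brief sketch of consuming the returned info dict. The training-loop context and the policy/train_data variables are assumed; the key names come from the ReturnsKeys list above.

```python
info = policy._forward_learn(train_data)
for k in ('total_loss', 'policy_loss', 'value_loss', 'entropy_loss'):
    print(f"{k}: {info[k]:.4f}")  # recorded in text log / tensorboard in practice
```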
- _get_batch_size() → Dict[str, int] [source]¶
- Overview:
Get the learn batch size. In the PPG algorithm, different networks require different data: data['policy'] is used to train the policy net and data['value'] to train the value net. This function returns the batch size of each.
- Returns:
- output (Dict[str, int]): Dict type data, mapping network name (str) to batch size (int)
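An illustrative return value; the numbers are assumptions, while the 'policy'/'value' keys follow the overview above.

```python
sizes = policy._get_batch_size()
# e.g. {'policy': 64, 'value': 64}: batch sizes for data['policy'] and data['value']
```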
- _get_train_sample(data: list) → Union[None, List[Any]] [source]¶
- Overview:
Get the trajectory and calculate GAE, then return one piece of data to cache for the next calculation (see the sketch after this entry)
- Arguments:
- data (list): The trajectory's cache
- Returns:
- samples (dict): The generated training samples
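For reference, a generic GAE(lambda) computation of the kind this method performs. This is the standard textbook formulation, not the library's exact code.

```python
def gae(rewards, values, next_values, dones, gamma=0.99, lam=0.95):
    # delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    # A_t     = delta_t + gamma * lam * (1 - done_t) * A_{t+1}
    advantages, last = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_values[t] * (1 - dones[t]) - values[t]
        last = delta + gamma * lam * (1 - dones[t]) * last
        advantages[t] = last
    return advantages
```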
- _init_collect() → None [source]¶
- Overview:
Collect mode init method. Called by self.__init__. Init unroll length and the collect model.
- _init_eval() → None [source]¶
- Overview:
Evaluate mode init method. Called by self.__init__. Init the eval model with argmax strategy.
- _init_learn() → None [source]¶
- Overview:
Learn mode init method. Called by self.__init__. Init the optimizer, algorithm config and the main model.
- Arguments:
Note
The _init_learn method takes its arguments from self._cfg.learn in the config file
- learning_rate (float): The learning rate of the optimizer
- _load_state_dict_learn(state_dict: Dict[str, Any]) → None [source]¶
- Overview:
Load the state_dict variable into policy learn mode.
- Arguments:
- state_dict (Dict[str, Any]): The dict of policy learn state saved before. Since the value is distilled into the policy network, the policy network must not change its action predictions; therefore two optimizers are needed: _optimizer_ac for the policy net and _optimizer_aux_critic for the value net.
Tip
If you want to load only some parts of the model, you can simply set the strict argument in load_state_dict to False, or refer to ding.torch_utils.checkpoint_helper for more complicated operations.
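For example, with the standard torch.nn.Module API (the checkpoint variable and its 'model' key are assumptions):

```python
# strict=False silently skips keys that are missing from or unexpected in the checkpoint.
model.load_state_dict(checkpoint['model'], strict=False)
```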
- _monitor_vars_learn() → List[str] [source]¶
- Overview:
Return the names of the variables that are to be used in the monitor.
- Returns:
- vars (List[str]): List of variable names
- _process_transition(obs: Any, model_output: dict, timestep: collections.namedtuple) → dict [source]¶
- Overview:
Generate dict type transition data from inputs.
- Arguments:
- obs (Any): Env observation
- model_output (dict): Output of the collect model, including at least ['action']
- timestep (namedtuple): Output after the env step, including at least ['obs', 'reward', 'done'] (here 'obs' indicates the obs after the env step)
- Returns:
- transition (dict): Dict type transition data
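A sketch of what the assembled transition might look like. Fields beyond the documented minimum ('action' from model_output; 'obs', 'reward', 'done' from timestep), such as 'logit', 'value', and 'next_obs', are assumptions.

```python
transition = {
    'obs': obs,                          # observation before the step
    'action': model_output['action'],    # documented as at least present
    'logit': model_output.get('logit'),  # assumed extra field
    'value': model_output.get('value'),  # assumed extra field
    'next_obs': timestep.obs,            # observation after the step
    'reward': timestep.reward,
    'done': timestep.done,
}
```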
- _state_dict_learn() → Dict[str, Any] [source]¶
- Overview:
Return the state_dict of learn mode, usually including model and optimizer.
- Returns:
- state_dict (Dict[str, Any]): The dict of the current policy learn state, for saving and restoring
- default_model() → Tuple[str, List[str]] [source]¶
- Overview:
Return this algorithm's default model setting for demonstration.
- Returns:
- model_info (Tuple[str, List[str]]): Model name and model import_names
Note
The user can define and use a customized network model, but must obey the same interface definition indicated by the import_names path.
- learn_aux() → Tuple[torch.Tensor, torch.Tensor, torch.Tensor] [source]¶
- Overview:
The auxiliary phase training, where the value is distilled into the policy network
- Returns:
- aux_loss (Tuple[torch.Tensor, torch.Tensor, torch.Tensor]): Including the average auxiliary loss, average behavioral cloning loss, and average auxiliary value loss
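Conceptually, the auxiliary phase from the PPG paper optimizes a joint objective: an auxiliary value loss on the policy trunk plus a KL-based behavioral cloning term that keeps the action distribution close to the pre-auxiliary policy, while the separate value network continues training. A minimal sketch for discrete actions follows; the function and argument names are hypothetical, and this is not the library's exact implementation.

```python
import torch.nn.functional as F
from torch.distributions import Categorical, kl_divergence

def aux_phase_losses(policy_aux_value, value_pred, return_target,
                     new_logit, old_logit, bc_weight=1.0):
    # Auxiliary loss: value regression on the policy trunk (feature sharing).
    auxiliary_loss = 0.5 * F.mse_loss(policy_aux_value, return_target)
    # Behavioral cloning loss: keep the new policy close to the frozen old one.
    behavioral_cloning_loss = kl_divergence(
        Categorical(logits=old_logit), Categorical(logits=new_logit)
    ).mean()
    # Extra value-network training during the auxiliary phase.
    aux_value_loss = 0.5 * F.mse_loss(value_pred, return_target)
    joint_loss = auxiliary_loss + bc_weight * behavioral_cloning_loss
    return joint_loss, behavioral_cloning_loss, aux_value_loss
```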