From 58441e84adfa260d95f0888547429c9bf421cf68 Mon Sep 17 00:00:00 2001
From: Steffy-zxf <48793257+Steffy-zxf@users.noreply.github.com>
Date: Wed, 11 Sep 2019 15:25:03 +0800
Subject: [PATCH] Update README.md

---
 demo/reading-comprehension/README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/demo/reading-comprehension/README.md b/demo/reading-comprehension/README.md
index 5170c707..04e608e0 100644
--- a/demo/reading-comprehension/README.md
+++ b/demo/reading-comprehension/README.md
@@ -78,9 +78,12 @@ config = hub.RunConfig(use_cuda=True, num_epoch=2, batch_size=12, strategy=strat
 For ERNIE- and BERT-style tasks, PaddleHub provides `AdamWeightDecayStrategy`, a transfer-learning optimization strategy suited to this kind of task.
 
 `learning_rate`: the maximum learning rate during finetuning;
+`weight_decay`: the model's regularization coefficient, 0.01 by default; if the model tends to overfit, consider raising this value;
+
 `warmup_proportion`: if warmup_proportion > 0, e.g. 0.1, the learning rate rises linearly to the peak value learning_rate over the first 10% of training steps;
-`lr_scheduler`: two strategies are available: (1) `linear_decay`, where the learning rate decays linearly after the peak; (2) `noam_decay`, where the learning rate decays polynomially after the peak;
+
+`lr_scheduler`: two strategies are available: (1) `linear_decay`, where the learning rate decays linearly after the peak; (2) `noam_decay`, where the learning rate decays polynomially after the peak;
 
 #### Run configuration
 
 `RunConfig` mainly controls the finetuning run and includes the following configurable parameters:
--
GitLab
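The patched README text describes a warmup phase followed by a `linear_decay` schedule for `AdamWeightDecayStrategy`. A minimal pure-Python sketch of that schedule, assuming the rate decays to zero at the final step; the function name `scheduled_lr` is hypothetical and this is an illustration of the described behavior, not PaddleHub's actual implementation:

```python
def scheduled_lr(step, max_steps, learning_rate, warmup_proportion):
    """Illustrative warmup + linear_decay schedule (not PaddleHub's code).

    The LR rises linearly to `learning_rate` over the first
    warmup_proportion * max_steps steps, then decays linearly
    (assumed here to reach 0 at max_steps).
    """
    warmup_steps = int(max_steps * warmup_proportion)
    if warmup_steps > 0 and step < warmup_steps:
        # linear warmup toward the peak value learning_rate
        return learning_rate * step / warmup_steps
    # linear_decay: fall linearly from the peak after warmup ends
    return learning_rate * (max_steps - step) / (max_steps - warmup_steps)
```

For example, with learning_rate=5e-5 and warmup_proportion=0.1 over 100 steps, the rate peaks at step 10 and returns to 0 by step 100; `noam_decay` would replace the linear tail with a polynomial one.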