PaddlePaddle / ERNIE
Commit 5191bf60
Authored Mar 05, 2019 by Yibing Liu
Parent: 880a14e9

Change default of some args & activate travis-ci

Showing 7 changed files with 66 additions and 17 deletions (+66 −17)
.travis.yml                 +30  −0
.travis/precommit.sh        +21  −0
BERT/convert_params.py      +1   −1
BERT/predict_classifier.py  +1   −2
BERT/run_classifier.py      +3   −3
BERT/run_squad.py           +9   −10
BERT/train.py               +1   −1
.travis.yml (new file, mode 100644)

language: cpp
cache: ccache
sudo: required
dist: trusty
services:
  - docker
os:
  - linux
env:
  - JOB=PRE_COMMIT
addons:
  apt:
    packages:
      - git
      - python
      - python-pip
      - python2.7-dev
  ssh_known_hosts: 13.229.163.131
before_install:
  - sudo pip install -U virtualenv pre-commit pip
script:
  - exit_code=0
  - .travis/precommit.sh || exit_code=$(( exit_code | $? ))
notifications:
  email:
    on_success: change
    on_failure: always
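The script step ORs each command's exit status into one accumulator, so any failing check fails the build while later checks still run. A minimal Python sketch of the same idiom (illustrative only, not part of this commit):

    # Illustrative only: OR-accumulate exit codes so one failing check
    # marks the whole run as failed without stopping the other checks.
    import subprocess

    checks = [[".travis/precommit.sh"]]     # commands to run, in order
    exit_code = 0
    for cmd in checks:
        exit_code |= subprocess.call(cmd)   # stays 0 only if all pass
    raise SystemExit(1 if exit_code else 0)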
.travis/precommit.sh (new file, mode 100755)

#!/bin/bash
function abort(){
    echo "Your commit does not fit PaddlePaddle code style" 1>&2
    echo "Please use pre-commit scripts to auto-format your code" 1>&2
    exit 1
}

trap 'abort' 0
set -e

cd `dirname $0`
cd ..
export PATH=/usr/bin:$PATH
pre-commit install

if ! pre-commit run -a ; then
    ls -lh
    git diff --exit-code
    exit 1
fi

trap : 0
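The script arms an EXIT trap up front and disarms it at the end: with set -e, any failing command exits early and the still-armed trap prints the style warning, while a clean run reaches trap : 0 and exits quietly. A rough Python analogue of the arm/disarm pattern (an illustration, not code from this repo):

    # Illustration of the arm/disarm pattern used by the shell script.
    import atexit
    import subprocess
    import sys

    def abort():
        sys.stderr.write("Your commit does not fit PaddlePaddle code style\n")

    atexit.register(abort)    # armed: fires on any exit path, incl. errors
    subprocess.check_call(["pre-commit", "run", "-a"])  # raises on failure,
                                                        # leaving abort() armed
    atexit.unregister(abort)  # disarmed: checks passed, exit quietly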
BERT/convert_params.py

@@ -20,7 +20,7 @@ from __future__ import print_function
 import numpy as np
 import argparse
 import collections
-from args import print_arguments
+from utils.args import print_arguments
 import tensorflow as tf
 import paddle.fluid as fluid
 from tensorflow.python import pywrap_tensorflow
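Every Python diff below goes through an ArgumentGroup(...).add_arg(name, type, default, help) helper imported from utils.args. That module is not part of this commit; the following is a hypothetical sketch of what such a thin argparse wrapper typically looks like, so the diffs are easier to read:

    # Hypothetical sketch of the utils.args helpers used in these diffs;
    # the real utils/args.py is not shown in this commit.
    import argparse
    from distutils.util import strtobool


    class ArgumentGroup(object):
        def __init__(self, parser, title, description):
            # Group related flags under a titled argparse argument group.
            self._group = parser.add_argument_group(title, description)

        def add_arg(self, name, dtype, default, help_text, **kwargs):
            # bool("False") is True, so booleans need a string parser.
            if dtype is bool:
                dtype = lambda s: bool(strtobool(s))
            self._group.add_argument(
                "--" + name, type=dtype, default=default,
                help="%s Default: %s." % (help_text, default), **kwargs)


    def print_arguments(args):
        # Log every parsed option so each run records its configuration.
        for name, value in sorted(vars(args).items()):
            print("%s: %s" % (name, value))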
BERT/predict_classifier.py

@@ -41,7 +41,7 @@ model_g.add_arg("use_fp16", bool, False, "Whether to resume
 data_g = ArgumentGroup(parser, "data", "Data paths, vocab paths and data processing options.")
 data_g.add_arg("data_dir",    str,  None,  "Directory to test data.")
 data_g.add_arg("vocab_path",  str,  None,  "Vocabulary path.")
-data_g.add_arg("max_seq_len", int,  512,   "Number of words of the longest seqence.")
+data_g.add_arg("max_seq_len", int,  128,   "Number of words of the longest seqence.")
 data_g.add_arg("batch_size",  int,  32,    "Total examples' number in batch for training. see also --in_tokens.")
 data_g.add_arg("in_tokens",   bool, False,
     "If set, the batch size will be the maximum number of tokens in one batch. "
@@ -51,7 +51,6 @@ data_g.add_arg("do_lower_case", bool, True,
 run_type_g = ArgumentGroup(parser, "run_type", "running type options.")
 run_type_g.add_arg("use_cuda",          bool, True,  "If set, use GPU for training.")
-run_type_g.add_arg("use_fast_executor", bool, False, "If set, use fast parallel executor (in experiment).")
 run_type_g.add_arg("task_name",         str,  None,  "The name of task to perform fine-tuning, should be in {'xnli', 'mnli', 'cola', 'mrpc'}.")
 run_type_g.add_arg("do_prediction",     bool, True,  "Whether to do prediction on test set.")
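The substantive change here drops the default max_seq_len for prediction from 512 to 128 (plus removal of the experimental use_fast_executor flag). Shorter sequences pay off quadratically in self-attention; a back-of-the-envelope check with illustrative numbers for a BERT-base-sized model, not measurements from this repo:

    # Rough estimate of per-example attention-map memory for a
    # BERT-base-like model (12 layers x 12 heads, fp32); illustrative.
    for seq_len in (512, 128):
        attn_floats = 12 * 12 * seq_len * seq_len
        print("max_seq_len=%4d -> ~%.0f MB of attention maps"
              % (seq_len, attn_floats * 4 / 2**20))
    # max_seq_len= 512 -> ~144 MB; max_seq_len= 128 -> ~9 MB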
BERT/run_classifier.py

(Removed/added pairs below that read identically evidently differ only in whitespace alignment, which this flattened view cannot show; the same applies in BERT/run_squad.py.)

@@ -44,7 +44,7 @@ model_g.add_arg("init_pretraining_params", str, None,
 model_g.add_arg("checkpoints", str, "checkpoints", "Path to save checkpoints.")

 train_g = ArgumentGroup(parser, "training", "training options.")
-train_g.add_arg("epoch",         int,   100,                   "Number of epoches for training.")
+train_g.add_arg("epoch",         int,   3,                     "Number of epoches for fine-tuning.")
 train_g.add_arg("learning_rate", float, 5e-5,                  "Learning rate used to train with warmup.")
 train_g.add_arg("lr_scheduler",  str,   "linear_warmup_decay", "scheduler of learning rate.",
                 choices=['linear_warmup_decay', 'noam_decay'])
@@ -65,13 +65,13 @@ data_g = ArgumentGroup(parser, "data", "Data paths, vocab paths and data process
 data_g.add_arg("data_dir",      str,  None,  "Path to training data.")
 data_g.add_arg("vocab_path",    str,  None,  "Vocabulary path.")
 data_g.add_arg("max_seq_len",   int,  512,   "Number of words of the longest seqence.")
-data_g.add_arg("batch_size",    int,  32,    "Total examples' number in batch for training. see also --in_tokens.")
+data_g.add_arg("batch_size",    int,  32,    "Total examples' number in batch for training. see also --in_tokens.")
 data_g.add_arg("in_tokens",     bool, False,
     "If set, the batch size will be the maximum number of tokens in one batch. "
     "Otherwise, it will be the maximum number of examples in one batch.")
 data_g.add_arg("do_lower_case", bool, True,
     "Whether to lower case the input text. Should be True for uncased models and False for cased models.")
-data_g.add_arg("random_seed",   int,  0,     "Random seed.")
+data_g.add_arg("random_seed",   int,  0,     "Random seed.")

 run_type_g = ArgumentGroup(parser, "run_type", "running type options.")
 run_type_g.add_arg("use_cuda", bool, True, "If set, use GPU for training.")
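Several of these files describe batch_size relative to the in_tokens flag: when it is set, batch_size caps the number of padded tokens per batch rather than the number of examples. A minimal sketch of the two semantics, assuming examples are lists of token ids (an illustration, not this repo's reader code):

    # Illustrative batching under the two --in_tokens semantics.
    def batch_examples(examples, batch_size, in_tokens):
        """Yield batches of token-id lists under either semantics."""
        batch, max_len = [], 0
        for ex in examples:
            new_max = max(max_len, len(ex))
            # Cost of adding `ex`: padded token count, or example count.
            cost = (len(batch) + 1) * new_max if in_tokens else len(batch) + 1
            if batch and cost > batch_size:
                yield batch
                batch, max_len = [ex], len(ex)
            else:
                batch.append(ex)
                max_len = new_max
        if batch:
            yield batch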
BERT/run_squad.py

@@ -43,16 +43,15 @@ model_g.add_arg("init_pretraining_params", str, None,
 model_g.add_arg("checkpoints", str, "checkpoints", "Path to save checkpoints.")

 train_g = ArgumentGroup(parser, "training", "training options.")
-train_g.add_arg("epoch",             int,   100,    "Number of epoches for training.")
-train_g.add_arg("learning_rate",     float, 5e-5,   "Learning rate used to train with warmup.")
+train_g.add_arg("epoch",             int,   3,      "Number of epoches for fine-tuning.")
+train_g.add_arg("learning_rate",     float, 5e-5,   "Learning rate used to train with warmup.")
 train_g.add_arg("lr_scheduler",      str,   "linear_warmup_decay", "scheduler of learning rate.",
                 choices=['linear_warmup_decay', 'noam_decay'])
-train_g.add_arg("weight_decay",      float, 0.01,   "Weight decay rate for L2 regularizer.")
+train_g.add_arg("weight_decay",      float, 0.01,   "Weight decay rate for L2 regularizer.")
 train_g.add_arg("warmup_proportion", float, 0.1,
     "Proportion of training steps to perform linear learning rate warmup for.")
-train_g.add_arg("save_steps",        int,   10000,  "The steps interval to save checkpoints.")
-train_g.add_arg("validation_steps",  int,   1000,   "The steps interval to evaluate model performance.")
-train_g.add_arg("use_fp16",          bool,  False,  "Whether to use fp16 mixed precision training.")
+train_g.add_arg("save_steps",        int,   1000,   "The steps interval to save checkpoints.")
+train_g.add_arg("use_fp16",          bool,  False,  "Whether to use fp16 mixed precision training.")
 train_g.add_arg("loss_scaling",      float, 1.0,
     "Loss scaling factor for mixed precision training, only valid when use_fp16 is enabled.")
@@ -67,9 +66,9 @@ data_g.add_arg("vocab_path", str, None, "Vocabulary path.")
 data_g.add_arg("version_2_with_negative", bool, False,
     "If true, the SQuAD examples contain some that do not have an answer. If using squad v2.0, it should be set true.")
 data_g.add_arg("max_seq_len",       int,  512, "Number of words of the longest seqence.")
-data_g.add_arg("max_query_length",  int,  64,  "Max query length.")
-data_g.add_arg("max_answer_length", int,  64,  "Max answer length.")
-data_g.add_arg("batch_size",        int,  12,  "Total samples' number in batch for training. see also --in_tokens.")
+data_g.add_arg("max_query_length",  int,  64,  "Max query length.")
+data_g.add_arg("max_answer_length", int,  30,  "Max answer length.")
+data_g.add_arg("batch_size",        int,  12,  "Total examples' number in batch for training. see also --in_tokens.")
 data_g.add_arg("in_tokens",         bool, False,
     "If set, the batch size will be the maximum number of tokens in one batch. "
     "Otherwise, it will be the maximum number of examples in one batch.")
@@ -81,7 +80,7 @@ data_g.add_arg("n_best_size", int, 20,
     "The total number of n-best predictions to generate in the nbest_predictions.json output file.")
 data_g.add_arg("null_score_diff_threshold", float, 0.0,
     "If null_score - best_non_null is greater than the threshold predict null.")
-data_g.add_arg("random_seed", int, 0, "Random seed.")
+data_g.add_arg("random_seed", int, 0, "Random seed.")

 run_type_g = ArgumentGroup(parser, "run_type", "running type options.")
 run_type_g.add_arg("use_cuda", bool, True, "If set, use GPU for training.")
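run_squad.py keeps a loss_scaling flag alongside use_fp16. Loss scaling exists because fp16 gradients below roughly 6e-8 flush to zero; scaling the loss up before the backward pass and unscaling the gradients afterwards keeps them representable. A numpy illustration of the underflow (not this repo's training code):

    # Why loss scaling matters in fp16: tiny gradient values underflow.
    import numpy as np

    grad = np.float16(1e-6)                    # a small gradient
    print(grad * np.float16(1e-2))             # 0.0 -- underflowed
    scaled = grad * np.float16(1024)           # scale up first ...
    print(scaled / np.float16(1024))           # ... unscale later: survives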
BERT/train.py

@@ -65,7 +65,7 @@ data_g.add_arg("validation_set_dir", str, "./data/validation/", "Path to trai
 data_g.add_arg("test_set_dir", str,  None,                 "Path to training data.")
 data_g.add_arg("vocab_path",   str,  "./config/vocab.txt", "Vocabulary path.")
 data_g.add_arg("max_seq_len",  int,  512,   "Number of words of the longest seqence.")
-data_g.add_arg("batch_size",   int,  8192,  "Total examples' number in batch for training. see also --in_tokens.")
+data_g.add_arg("batch_size",   int,  16,    "Total examples' number in batch for training. see also --in_tokens.")
 data_g.add_arg("in_tokens",    bool, False,
     "If set, the batch size will be the maximum number of tokens in one batch. "
     "Otherwise, it will be the maximum number of examples in one batch.")