# BERT Example
## Description
This example implements pre-training, fine-tuning and evaluation of [BERT-base](https://github.com/google-research/bert) (the base version of the BERT model) and [BERT-NEZHA](https://github.com/huawei-noah/Pretrained-Language-Model) (a Chinese pre-trained language model developed by Huawei, which improves on BERT by introducing Functional Relative Positional Encoding as an effective positional encoding scheme).

## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the zhwiki dataset for pre-training. Extract and clean text in the dataset with [WikiExtractor](https://github.com/attardi/wikiextractor). Convert the dataset to TFRecord format and move the files to a specified path.
- Download the CLUE dataset for fine-tuning and evaluation.

> Notes:
> If you are running a fine-tuning or evaluation task, prepare the corresponding checkpoint file.

## Running the Example
### Pre-Training
- Set options in `config.py`, including the loss scale, optimizer and network settings. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) for more information about the dataset and the JSON schema file; a sketch of how such data are loaded is shown after this list.

- Run `run_standalone_pretrain.sh` for non-distributed pre-training of the BERT-base and BERT-NEZHA models.

    ```bash
    sh scripts/run_standalone_pretrain.sh DEVICE_ID EPOCH_SIZE DATA_DIR SCHEMA_DIR
    ```
- Run `run_distribute_pretrain.sh` for distributed pre-training of the BERT-base and BERT-NEZHA models.

    ```bash
    sh scripts/run_distribute_pretrain.sh DEVICE_NUM EPOCH_SIZE DATA_DIR SCHEMA_DIR MINDSPORE_HCCL_CONFIG_PATH
    ```  
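
The pre-training data are consumed as TFRecord files described by a JSON schema. The sketch below shows roughly how such data can be loaded with MindSpore's dataset API; the paths and column names are placeholders that follow the standard BERT pre-training layout, not necessarily the exact code in this example.

```python
import mindspore.dataset as ds

# Placeholder paths: your converted zhwiki TFRecord file(s) and the JSON schema file
# describing each column (names, types, shapes).
DATA_FILE = "/your/path/zhwiki-part-0.tfrecord"
SCHEMA_FILE = "/your/path/schema.json"

# Column names follow the standard BERT pre-training TFRecord layout; they are
# assumptions here and may differ from the columns this example actually uses.
dataset = ds.TFRecordDataset([DATA_FILE], SCHEMA_FILE,
                             columns_list=["input_ids", "input_mask", "segment_ids",
                                           "next_sentence_labels", "masked_lm_positions",
                                           "masked_lm_ids", "masked_lm_weights"],
                             shuffle=True)
dataset = dataset.batch(16, drop_remainder=True)  # batch_size, see config.py
```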

### Fine-Tuning
- Set options in `finetune_config.py`. Make sure 'data_file' and 'schema_file' are set to your own paths, and set 'pre_training_ckpt' to a checkpoint file saved during pre-training (see the checkpoint-loading sketch after this list).

- Run `finetune.py` to fine-tune the BERT-base and BERT-NEZHA models.

    ```bash
    python finetune.py
    ```
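
The 'pre_training_ckpt' option points fine-tuning at the weights saved during pre-training. Below is a minimal sketch of how a MindSpore checkpoint is typically restored; a stand-in network is used so the snippet is self-contained, and it is not the repository's actual fine-tune code.

```python
import mindspore.nn as nn
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# Stand-in network; in the example this would be the fine-tune network built on the BERT backbone.
net = nn.Dense(768, 2)

# Read the checkpoint produced by pre-training and copy matching parameters into the
# network; parameters not found in the checkpoint keep their initial values.
param_dict = load_checkpoint("/your/path/pre_training.ckpt")
load_param_into_net(net, param_dict)
```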

### Evaluation
- Set options in `evaluation_config.py`. Make sure 'data_file', 'schema_file' and 'finetune_ckpt' are set to your own paths.

- Run `evaluation.py` to evaluate the BERT-base and BERT-NEZHA models; a sketch of the kind of accuracy metric reported for classification tasks is shown after this list.

    ```bash
    python evaluation.py
    ```
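
For classification-style tasks (for example LCQMC or SENTI), evaluation boils down to comparing the model's predicted labels against the ground truth. Below is a minimal, self-contained sketch of such an accuracy computation on toy data; it is not the repository's actual metric code.

```python
import numpy as np

def accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of examples whose argmax prediction matches the label."""
    predictions = np.argmax(logits, axis=-1)
    return float(np.mean(predictions == labels.reshape(-1)))

# Toy example: 4 examples, 3 classes (num_labels in evaluation_config.py).
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.3, 0.2, 0.9],
                   [1.2, 0.4, 0.1]])
labels = np.array([0, 1, 2, 1])
print(accuracy(logits, labels))  # 0.75
```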

## Usage
### Pre-Training
``` 
usage: run_pretrain.py  [--distribute DISTRIBUTE] [--epoch_size N] [--device_num N] [--device_id N]
                        [--enable_save_ckpt ENABLE_SAVE_CKPT]
                        [--enable_lossscale ENABLE_LOSSSCALE] [--do_shuffle DO_SHUFFLE]
                        [--enable_data_sink ENABLE_DATA_SINK] [--data_sink_steps N] [--checkpoint_path CHECKPOINT_PATH]
                        [--save_checkpoint_steps N] [--save_checkpoint_num N] 
                        [--data_dir DATA_DIR] [--schema_dir SCHEMA_DIR]

options:
    --distribute               run pre-training on several devices: "true" (training on more than 1 device) | "false", default is "false"
    --epoch_size               epoch size: N, default is 1
    --device_num               number of used devices: N, default is 1
    --device_id                device id: N, default is 0
    --enable_save_ckpt         enable save checkpoint: "true" | "false", default is "true"
    --enable_lossscale         enable lossscale: "true" | "false", default is "true"
    --do_shuffle               enable shuffle: "true" | "false", default is "true"
    --enable_data_sink         enable data sink: "true" | "false", default is "true"
    --data_sink_steps          set data sink steps: N, default is 1
    --checkpoint_path          path to save checkpoint files: PATH, default is ""
    --save_checkpoint_steps    steps for saving checkpoint files: N, default is 1000
    --save_checkpoint_num      number for saving checkpoint files: N, default is 1
    --data_dir                 path to dataset directory: PATH, default is ""
    --schema_dir               path to schema.json file: PATH, default is ""
```
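
The boolean options above are passed as the strings "true"/"false" rather than Python booleans. The snippet below is a minimal sketch of how such flags are usually parsed; it is illustrative only, and the actual parser in `run_pretrain.py` may differ.

```python
import argparse

parser = argparse.ArgumentParser(description="bert pre-training")
parser.add_argument("--distribute", type=str, default="false", choices=["true", "false"],
                    help="run pre-training on more than one device")
parser.add_argument("--epoch_size", type=int, default=1, help="epoch size")
parser.add_argument("--enable_lossscale", type=str, default="true", choices=["true", "false"],
                    help="enable loss scaling")
parser.add_argument("--data_dir", type=str, default="", help="path to dataset directory")
args = parser.parse_args()

run_distributed = args.distribute == "true"  # string flag converted to a Python bool
```
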
## Options and Parameters
Parameters of the BERT model and options for training are set in `config.py`, `finetune_config.py` and `evaluation_config.py` respectively.
### Options:
```
config.py:
    bert_network                    version of BERT model: base | nezha, default is base
    loss_scale_value                initial value of loss scale: N, default is 2^32
    scale_factor                    factor used to update loss scale: N, default is 2
    scale_window                    steps between two updates of the loss scale: N, default is 1000
    optimizer                       optimizer used in the network: AdamWeightDecayDynamicLR | Lamb | Momentum, default is "Lamb"

finetune_config.py:
    task                            task type: NER | XNLI | LCQMC | SENTI | OTHERS
    num_labels                      number of labels to do classification
    data_file                       dataset file to load: PATH, default is "/your/path/train.tfrecord"
    schema_file                     dataset schema file to load: PATH, default is "/your/path/schema.json"
    epoch_num                       repeat counts of training: N, default is 5
    ckpt_prefix                     prefix used to save checkpoint files: PREFIX, default is "bert"
    ckpt_dir                        path to save checkpoint files: PATH, default is None
    pre_training_ckpt               checkpoint file to load: PATH, default is "/your/path/pre_training.ckpt"
    use_crf                         whether to use crf for evaluation. use_crf takes effect only when task type is NER, default is False
    optimizer                       optimizer used in the fine-tune network: AdamWeightDecayDynamicLR | Lamb | Momentum, default is "Lamb"

evaluation_config.py:
    task                            task type: NER | XNLI | LCQMC | SENTI | OTHERS
    num_labels                      number of labels to do classification
    data_file                       dataset file to load: PATH, default is "/your/path/evaluation.tfrecord"
    schema_file                     dataset schema file to load: PATH, default is "/your/path/schema.json"
    finetune_ckpt                   checkpoint file to load: PATH, default is "/your/path/your.ckpt"
    use_crf                         whether to use crf for evaluation. use_crf takes effect only when task type is NER, default is False
    clue_benchmark                  whether to use clue benchmark. clue_benchmark takes effect only when task type is NER, default is False
```
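
The `loss_scale_value`, `scale_factor` and `scale_window` options correspond to MindSpore's dynamic loss-scale mechanism. Below is a minimal sketch using the defaults listed above; the example's actual wiring may differ.

```python
from mindspore.train.loss_scale_manager import DynamicLossScaleManager

# The scale starts at loss_scale_value; after scale_window consecutive steps without
# overflow it is multiplied by scale_factor, and it is reduced when overflow occurs.
loss_scale_manager = DynamicLossScaleManager(init_loss_scale=2 ** 32,
                                             scale_factor=2,
                                             scale_window=1000)
# The manager is then handed to mindspore.train.Model(..., loss_scale_manager=loss_scale_manager).
```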

### Parameters:
```
Parameters for dataset and network (Pre-Training/Fine-Tuning/Evaluation):
    batch_size                      batch size of input dataset: N, default is 16
    seq_length                      length of input sequence: N, default is 128
    vocab_size                      size of the vocabulary: N, default is 21136
    hidden_size                     size of bert encoder layers: N, default is 768
    num_hidden_layers               number of hidden layers: N, default is 12
    num_attention_heads             number of attention heads: N, default is 12
    intermediate_size               size of intermediate layer: N, default is 3072
    hidden_act                      activation function used: ACTIVATION, default is "gelu"
    hidden_dropout_prob             dropout probability for BertOutput: Q, default is 0.1
    attention_probs_dropout_prob    dropout probability for BertAttention: Q, default is 0.1
    max_position_embeddings         maximum length of sequences: N, default is 512
    type_vocab_size                 size of token type vocab: N, default is 16
    initializer_range               initialization value of TruncatedNormal: Q, default is 0.02
    use_relative_positions          use relative positions or not: True | False, default is False
    input_mask_from_dataset         use the input mask loaded from the dataset or not: True | False, default is True
    token_type_ids_from_dataset     use the token type ids loaded from dataset or not: True | False, default is True
    dtype                           data type of input: mstype.float16 | mstype.float32, default is mstype.float32
    compute_type                    compute type in BertTransformer: mstype.float16 | mstype.float32, default is mstype.float16

Parameters for optimizer:
    AdamWeightDecayDynamicLR:
    decay_steps                     steps of the learning rate decay: N
    learning_rate                   value of learning rate: Q
    end_learning_rate               value of end learning rate: Q, must be positive
    power                           power: Q
    warmup_steps                    steps of the learning rate warm up: N
    weight_decay                    weight decay: Q
    eps                             term added to the denominator to improve numerical stability: Q

    Lamb:
    decay_steps                     steps of the learning rate decay: N
    learning_rate                   value of learning rate: Q
    end_learning_rate               value of end learning rate: Q
    power                           power: Q
    warmup_steps                    steps of the learning rate warm up: N
    weight_decay                    weight decay: Q

    Momentum:
    learning_rate                   value of learning rate: Q
    momentum                        momentum for the moving average: Q
```
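
As a small illustration of how the Momentum entries above map onto MindSpore's optimizer (example values and a stand-in network, not the repository's actual training code):

```python
import mindspore.nn as nn

# Stand-in network so the snippet is self-contained; in the example this would be the
# BERT training network.
net = nn.Dense(768, 2)

# learning_rate and momentum correspond to the Momentum parameters listed above.
optimizer = nn.Momentum(net.trainable_params(), learning_rate=2e-5, momentum=0.9)
```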