English|[简体中文](./README.zh.md)

![ERNIE milestone](./.metas/ERNIE_milestone_20210519_en.png)


**Reminder: this repo has been refactored. For paper reproduction or backward compatibility, please check out the [repro branch](https://github.com/PaddlePaddle/ERNIE/tree/repro).**

ERNIE 2.0 is a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning.
ERNIE 2.0 provides a strong foundation for nearly every NLP task: text classification, ranking, NER, machine reading comprehension, text generation, and so on.

[\[more information\]](https://wenxin.baidu.com/)

# News

- Dec.03.2021:
    - [`ERNIE-M`](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-m) models are **available** now!

- May.20.2021:
    - [`ERNIE-Doc`](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-doc), [`ERNIE-Gram`](./ernie-gram/), and [`ERNIE-ViL`](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-vil) models are **available** now!
    - `ERNIE-UNIMO` has been released [here](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-unimo).

- Dec.29.2020:
    - Pretrain and finetune ERNIE with [PaddlePaddle v2.0](https://github.com/PaddlePaddle/Paddle/tree/release/2.0-rc).
    - New AMP (automatic mixed precision) feature for every demo in this repo.
    - Introducing gradient accumulation: run `ERNIE-large` with only 8GB of GPU memory.

- Sept.24.2020:
    - We have announced [`ERNIE-ViL`](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-vil)!
        - **Knowledge-enhanced** joint representations for vision-language tasks.
            - Constructs three **Scene Graph Prediction** tasks utilizing structured knowledge.
            - State-of-the-art performance on 5 downstream tasks, and 1st place on the [VCR leaderboard](https://visualcommonsense.com/leaderboard/).

- May.20.2020:

    - Try ERNIE in "`dygraph`", with:
        - Eager execution with `paddle.fluid.dygraph`.
        - Distributed training.
        - Easy deployment.
        - NLP tutorials on AIStudio.
        - Backward compatibility for old-styled checkpoints.

    - [`ERNIE-GEN`](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-gen) is **available** now!
        - The **state-of-the-art** pre-trained model for generation tasks, accepted by `IJCAI-2020`.
            - A novel **span-by-span generation pre-training task**.
            - An **infilling generation** mechanism and a **noise-aware generation** method.
            - Implemented by a carefully designed **Multi-Flow Attention** architecture.
        - All models, including `base/large/large-430G`, are available for `download`.

- Apr.30.2020: Released [ERNIESage](https://github.com/PaddlePaddle/PGL/tree/master/examples/erniesage), a novel Graph Neural Network model using ERNIE as its aggregator, implemented through [PGL](https://github.com/PaddlePaddle/PGL).
- Mar.27.2020: [Champion on 5 SemEval2020 subtasks](https://www.jiqizhixin.com/articles/2020-03-27-8)
- Dec.26.2019: [1st place on the GLUE leaderboard](https://www.technologyreview.com/2019/12/26/131372/ai-baidu-ernie-google-bert-natural-language-glue/)
- Nov.6.2019: [Introducing ERNIE-tiny](https://www.jiqizhixin.com/articles/2019-11-06-9)
- Jul.7.2019: [Introducing ERNIE 2.0](https://www.jiqizhixin.com/articles/2019-07-31-10)
- Mar.16.2019: [Introducing ERNIE 1.0](https://www.jiqizhixin.com/articles/2019-03-16-3)


# Table of contents
* [Tutorials](#tutorials)
* [Setup](#setup)
* [Fine-tuning](#fine-tuning)
* [Pre-training with ERNIE 1.0](#pre-training-with-ernie-10)
* [Online inference](#online-inference)
* [Distillation](#distillation)

# Quick Tour

```python
import numpy as np
import paddle as P
from ernie.tokenizing_ernie import ErnieTokenizer
from ernie.modeling_ernie import ErnieModel

model = ErnieModel.from_pretrained('ernie-1.0')    # fetches the pretrained model from the server; requires a network connection
model.eval()
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')

ids, _ = tokenizer.encode('hello world')
ids = P.to_tensor(np.expand_dims(ids, 0))  # insert extra `batch` dimension
pooled, encoded = model(ids)                 # eager execution
print(pooled.numpy())                        # convert results to numpy

```
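
`tokenizer.encode` also returns segment ids for sentence-pair inputs. A minimal sketch of feeding them to the model (the `sent_ids` keyword is an assumption based on `ernie.modeling_ernie`; check the signature in your version):

```python
# sentence-pair input: `encode` returns token ids and segment (sentence) ids
ids, sids = tokenizer.encode('hello world', 'good morning')
ids = P.to_tensor(np.expand_dims(ids, 0))
sids = P.to_tensor(np.expand_dims(sids, 0))
pooled, encoded = model(ids, sent_ids=sids)  # `sent_ids` keyword assumed
print(encoded.shape)                         # [batch, seq_len, hidden_size]
```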

# Tutorials

Don't have a GPU? Try ERNIE in [AIStudio](https://aistudio.baidu.com/aistudio/index)!
(Please choose the latest version and apply for a GPU environment.)

1. [ERNIE for beginners](https://aistudio.baidu.com/studio/edu/group/quick/join/314947)
2. [Sentiment analysis](https://aistudio.baidu.com/aistudio/projectdetail/427482)
3. [Cloze test](https://aistudio.baidu.com/aistudio/projectdetail/433491)
4. [Knowledge distillation](https://aistudio.baidu.com/aistudio/projectdetail/439460)
5. [Ask ERNIE](https://aistudio.baidu.com/aistudio/projectdetail/456443)
6. [Loading old-styled checkpoints](https://aistudio.baidu.com/aistudio/projectdetail/493415)

# Setup

##### 1. install PaddlePaddle

This repo requires PaddlePaddle 1.7.0+; see [here](https://www.paddlepaddle.org.cn/install/quick) for installation instructions.

##### 2. install ernie

```shell
pip install paddle-ernie
```

or

```shell
git clone https://github.com/PaddlePaddle/ERNIE.git --depth 1
cd ERNIE
pip install -r requirements.txt
pip install -e .
```
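
To verify the installation, a quick sanity check (imports only; no model download involved):

```python
import paddle
from ernie.modeling_ernie import ErnieModel
from ernie.tokenizing_ernie import ErnieTokenizer

print(paddle.__version__)   # expect 1.7.0 or later
```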

##### 3. download pretrained models (optional)

| Model                                              | Description                                                  | Abbreviation |
| :------------------------------------------------- | :----------------------------------------------------------- |:-----------|
| [ERNIE 1.0 Base for Chinese](https://ernie-github.cdn.bcebos.com/model-ernie1.0.1.tar.gz)           | L12H768A12  |ernie-1.0|
| [ERNIE Tiny](https://ernie-github.cdn.bcebos.com/model-ernie_tiny.1.tar.gz)                         | L3H1024A16  |ernie-tiny|
| [ERNIE 2.0 Base for English](https://ernie-github.cdn.bcebos.com/model-ernie2.0-en.1.tar.gz)        | L12H768A12  |ernie-2.0-en|
| [ERNIE 2.0 Large for English](https://ernie-github.cdn.bcebos.com/model-ernie2.0-large-en.1.tar.gz) | L24H1024A16 |ernie-2.0-large-en|
| [ERNIE Gen base for English](https://ernie-github.cdn.bcebos.com/model-ernie-gen-base-en.1.tar.gz)  | L12H768A12  |ernie-gen-base-en|
| [ERNIE Gen Large for English](https://ernie-github.cdn.bcebos.com/model-ernie-gen-large-en.1.tar.gz)| L24H1024A16 | ernie-gen-large-en |
| [ERNIE Gen Large 430G for English](https://ernie-github.cdn.bcebos.com/model-ernie-gen-large-430g-en.1.tar.gz)| L24H1024A16 (+ 430G pretraining corpus) | ernie-gen-large-430g-en |
| [ERNIE Doc Base for Chinese](https://ernie-github.cdn.bcebos.com/model-ernie-doc-base-zh.tar.gz)| L12H768A12 | ernie-doc-base-zh |
| [ERNIE Doc Base for English](https://ernie-github.cdn.bcebos.com/model-ernie-doc-base-en.tar.gz)| L12H768A12 | ernie-doc-base-en |
| [ERNIE Doc Large for English](https://ernie-github.cdn.bcebos.com/model-ernie-doc-large-en.tar.gz)| L24H1024A16 | ernie-doc-large-en |
| [ERNIE Gram Base for Chinese](https://ernie-github.cdn.bcebos.com/model-ernie-gram-zh.1.tar.gz) | L12H768A12 | ernie-gram-zh |
| [ERNIE Gram Base for English](https://ernie-github.cdn.bcebos.com/model-ernie-gram-en.1.tar.gz) | L12H768A12 | ernie-gram-en |

##### 4. download datasets

**English Datasets**

Download the [GLUE datasets](https://gluebenchmark.com/tasks) by running [this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e).

The `--data_dir` option in the following sections assumes a directory tree like this:

```shell
data/xnli
├── dev
│   └── 1
├── test
│   └── 1
└── train
    └── 1
```

See the [demo data](https://ernie-github.cdn.bcebos.com/data-mnli-m.tar.gz) for the MNLI task.

**Chinese Datasets**

| Datasets|Description|
|:--------|:----------|
| [XNLI](https://ernie-github.cdn.bcebos.com/data-xnli.tar.gz)                 |XNLI is a natural language inference dataset in 15 languages, jointly built by Facebook and New York University. We use the Chinese portion of XNLI to evaluate our model's language understanding ability. [url](https://github.com/facebookresearch/XNLI)|
| [ChnSentiCorp](https://ernie-github.cdn.bcebos.com/data-chnsenticorp.tar.gz) |ChnSentiCorp is a sentiment analysis dataset consisting of online shopping reviews of hotels, notebooks, and books.|
| [MSRA-NER](https://ernie-github.cdn.bcebos.com/data-msra_ner.tar.gz)         |The MSRA-NER (SIGHAN2006) dataset, released by MSRA, targets recognizing the names of people, locations, and organizations in text.|
| [NLPCC2016-DBQA](https://ernie-github.cdn.bcebos.com/data-dbqa.tar.gz)       |NLPCC2016-DBQA is a sub-task of the NLPCC-ICCPOL 2016 Shared Task, hosted by NLPCC (Natural Language Processing and Chinese Computing); the task is to select documents from candidates that answer given questions. [url](http://tcci.ccf.org.cn/conference/2016/dldoc/evagline2.pdf)|
|[CMRC2018](https://ernie-github.cdn.bcebos.com/data-cmrc2018.tar.gz)|CMRC2018 is an evaluation of Chinese extractive reading comprehension hosted by the Chinese Information Processing Society of China (CIPS-CL). [url](https://github.com/ymcui/cmrc2018)|


# Fine-tuning

- Try eager execution with the `dygraph` model:

```shell
python3 ./demo/finetune_classifier.py \
       --from_pretrained ernie-1.0 \
       --data_dir ./data/xnli
```

  - Specify `--use_amp` to activate AMP training.
  - `--bsz` denotes the global batch size for one optimization step; `--micro_bsz` denotes the maximum batch size for each GPU device.
    If `--micro_bsz < --bsz`, gradient accumulation will be activated, as sketched below.
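
Under the hood, gradient accumulation amounts to the following (a minimal sketch; the demo script handles this for you, and `train_loader` plus the flag-named variables are illustrative assumptions):

```python
import paddle as P
from ernie.modeling_ernie import ErnieModelForSequenceClassification

model = ErnieModelForSequenceClassification.from_pretrained('ernie-1.0', num_labels=3)
optimizer = P.optimizer.Adam(learning_rate=3e-5, parameters=model.parameters())

bsz, micro_bsz = 64, 16                     # mirror the --bsz / --micro_bsz flags
accumulate_steps = bsz // micro_bsz         # 4 micro-batches per optimization step
for step, (ids, labels) in enumerate(train_loader()):  # assumed loader of micro-batches
    loss, _ = model(ids, labels=labels)
    (loss / accumulate_steps).backward()    # scale so gradients average over the global batch
    if (step + 1) % accumulate_steps == 0:
        optimizer.step()                    # one update per `bsz` examples
        optimizer.clear_grad()
```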


- Distributed finetune

`paddle.distributed.launch` is a process manager; we use it to launch a Python process on each available GPU device:

In distributed training, `max_steps` is used as the stopping criterion rather than `epoch`, to prevent deadlock.
You can calculate `max_steps` as `EPOCH * NUM_TRAIN_EXAMPLES / TOTAL_BATCH`, as in the example below.
Also note that we shard the training data according to device ID to prevent overfitting.
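
For instance, with hypothetical numbers (3 epochs over MNLI's 392,702 training examples at a global batch size of 256):

```python
EPOCH = 3
NUM_TRAIN_EXAMPLES = 392702   # size of the MNLI training set
TOTAL_BATCH = 256             # global batch size across all devices
max_steps = EPOCH * NUM_TRAIN_EXAMPLES // TOTAL_BATCH
print(max_steps)              # 4601
```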

Demo (make sure you have at least 2 GPUs; online model download does not work under `paddle.distributed.launch`,
so run single-card finetuning first to fetch the pretrained model, or download and extract one manually from [here](#section-pretrained-models)):


```shell
python3 -m paddle.distributed.launch \
./demo/finetune_classifier_distributed.py  \
    --data_dir data/mnli \
    --max_steps 10000 \
    --from_pretrained ernie-2.0-en
```


Many other demo Python scripts:

1. [Sentiment Analysis](./demo/finetune_sentiment_analysis.py)
1. [Semantic Similarity](./demo/finetune_classifier.py)
1. [Named Entity Recognition (NER)](./demo/finetune_ner.py)
1. [Machine Reading Comprehension](./demo/finetune_mrc.py)
1. [Text generation](./demo/seq2seq/README.md)
1. [Text classification with `paddle.static` API](./demo/finetune_classifier_static.py)




**Recommended hyperparameters:**

|Task|Batch size|Learning rate|
|--|--|--|
| CoLA         | 32 / 64 (base)  | 3e-5                     |
| SST-2        | 64 / 256 (base) | 2e-5                     |
| STS-B        | 128             | 5e-5                     |
| QQP          | 256             | 3e-5(base)/5e-5(large)   |
| MNLI         | 256 / 512 (base)| 3e-5                     |
| QNLI         | 256             | 2e-5                     |
| RTE          | 16 / 4 (base)   | 2e-5(base)/3e-5(large)   |
| MRPC         | 16 / 32 (base)  | 3e-5                     |
| WNLI         | 8               | 2e-5                     |
| XNLI         | 512             | 1e-4(base)/4e-5(large)   |
| CMRC2018     | 64              | 3e-5                     |
| DRCD         | 64              | 5e-5(base)/3e-5(large)   |
| MSRA-NER(SIGHAN2006)  | 16     | 5e-5(base)/1e-5(large)   |
| ChnSentiCorp | 24              | 5e-5(base)/1e-5(large)   |
| LCQMC        | 32              | 2e-5(base)/5e-6(large)   |
| NLPCC2016-DBQA| 64             | 2e-5(base)/1e-5(large)   |
| VCR           | 64             | 2e-5(base)/2e-5(large)   |

# Pre-training with ERNIE 1.0

See [here](./demo/pretrain/README.md).


# Online inference

If `--inference_model_dir` is passed to `finetune_classifier.py`,
a deployable model will be generated at the end of finetuning, ready to serve.

For details about online inference, see the [C++ inference API](./inference/README.md),
or start a multi-GPU inference server with a few lines of code:

```shell
python -m propeller.tools.start_server -m /path/to/saved/inference_model  -p 8881
```

and call the server just like a local function (Python 3 only):

```python
import numpy as np

from propeller.service.client import InferenceClient
from ernie.tokenizing_ernie import ErnieTokenizer

client = InferenceClient('tcp://localhost:8881')
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
ids, sids = tokenizer.encode('hello world')
ids = np.expand_dims(ids, 0)
sids = np.expand_dims(sids, 0)
result = client(ids, sids)
```
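
The client returns numpy arrays. For a classification model, the first output would typically be logits (an assumption; inspect `result` for your exported model):

```python
logits = result[0] if isinstance(result, (list, tuple)) else result  # output layout assumed
logits = logits - logits.max(-1, keepdims=True)                      # numerically stable softmax
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
print(probs.argmax(-1))                                              # predicted class per example
```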

A pre-made inference model for ernie-1.0 can be downloaded [here](https://ernie.bj.bcebos.com/ernie1.0_zh_inference_model.tar.gz).
It can be used for feature-based finetuning or feature extraction.

# Distillation

Knowledge distillation is a good way to compress and accelerate ERNIE.

For details about distillation, see [here](./demo/distill/README.md).

# Citation

### ERNIE 1.0
```
@article{sun2019ernie,
  title={Ernie: Enhanced representation through knowledge integration},
  author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Chen, Xuyi and Zhang, Han and Tian, Xin and Zhu, Danxiang and Tian, Hao and Wu, Hua},
  journal={arXiv preprint arXiv:1904.09223},
  year={2019}
}
```

### ERNIE 2.0
```
@article{sun2019ernie20,
  title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
  author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:1907.12412},
  year={2019}
}
```

### ERNIE-GEN

```
@article{xiao2020ernie-gen,
  title={ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation},
  author={Xiao, Dongling and Zhang, Han and Li, Yukun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2001.11314},
  year={2020}
}
```

### ERNIE-ViL

```
@article{yu2020ernie,
  title={ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph},
  author={Yu, Fei and Tang, Jiji and Yin, Weichong and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2006.16934},
  year={2020}
}
```

### ERNIE-Gram

```
@article{xiao2020ernie,
  title={ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding},
  author={Xiao, Dongling and Li, Yu-Kun and Zhang, Han and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2010.12148},
  year={2020}
}
```

### ERNIE-Doc

```
@article{ding2020ernie,
  title={ERNIE-DOC: The Retrospective Long-Document Modeling Transformer},
  author={Ding, Siyu and Shang, Junyuan and Wang, Shuohuan and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2012.15688},
  year={2020}
}
```

### ERNIE-UNIMO

```
@article{li2020unimo,
  title={UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning},
  author={Li, Wei and Gao, Can and Niu, Guocheng and Xiao, Xinyan and Liu, Hao and Liu, Jiachen and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2012.15409},
  year={2020}
}
```

### ERNIE-M

```
@article{ouyang2020ernie,
  title={Ernie-m: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora},
  author={Ouyang, Xuan and Wang, Shuohuan and Pang, Chao and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2012.15674},
  year={2020}
}
```

For full reproduction of paper results, please check out the `repro` branch of this repo.

### Communication

- [ERNIE homepage](https://wenxin.baidu.com/)
- [GitHub Issues](https://github.com/PaddlePaddle/ERNIE/issues): bug reports, feature requests, install issues, usage issues, etc.
- QQ discussion group: 760439550 (ERNIE discussion group).
- QQ discussion group: 958422639 (ERNIE discussion group-v2).
- [Forums](http://ai.baidu.com/forum/topic/list/168?pageNo=1): discuss implementations, research, etc.