Commit 3fb0b491, authored by liyukun01, committed by Meiyim

Fixed citation format

Parent b26916cd
@@ -234,24 +234,36 @@ Knowledge distillation is a good way to compress and accelerate ERNIE.
For details about distillation, see [here](./distill/README.md)

# Citation

### ERNIE 1.0
```
@article{sun2019ernie,
title={Ernie: Enhanced representation through knowledge integration},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Chen, Xuyi and Zhang, Han and Tian, Xin and Zhu, Danxiang and Tian, Hao and Wu, Hua},
journal={arXiv preprint arXiv:1904.09223},
year={2019}
}
```

### ERNIE 2.0
```
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
```

### ERNIE-GEN
```
@article{xiao2020ernie-gen,
title={ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation},
author={Xiao, Dongling and Zhang, Han and Li, Yukun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:2001.11314},
year={2020}
}
```
@@ -235,23 +235,36 @@ ids = np.expand_dims(ids, -1) # ids.shape==[BATCH, SEQLEN, 1]
Knowledge distillation is an effective way to compress and accelerate the ERNIE model; for implementation details, see [here](./distill/README.md)

# Citation

### ERNIE 1.0
```
@article{sun2019ernie,
title={Ernie: Enhanced representation through knowledge integration},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Chen, Xuyi and Zhang, Han and Tian, Xin and Zhu, Danxiang and Tian, Hao and Wu, Hua},
journal={arXiv preprint arXiv:1904.09223},
year={2019}
}
```

### ERNIE 2.0
```
@article{sun2019ernie20,
title={ERNIE 2.0: A Continual Pre-training Framework for Language Understanding},
author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1907.12412},
year={2019}
}
```

### ERNIE-GEN
```
@article{xiao2020ernie-gen,
title={ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation},
author={Xiao, Dongling and Zhang, Han and Li, Yukun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:2001.11314},
year={2020}
}
```
# ERNIE-GEN

[ERNIE-GEN](https://arxiv.org/pdf/2001.11314.pdf) is a multi-flow language generation framework for both pre-training and fine-tuning.
Only the finetune strategy is illustrated in this section.

## Finetune

We use the abstractive summarization task CNN/DailyMail to illustrate the usage of ERNIE-GEN; you can download the preprocessed finetune data from [here](https://ernie-github.cdn.bcebos.com/data-cnndm.tar.gz).

To start finetuning ERNIE-GEN, run:
```script
python3 -m paddle.distributed.launch \
...
```
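The remainder of the launch command is collapsed in this diff and stays elided above. As orientation only, here is a minimal sketch of the shape such an invocation usually takes; apart from `python3 -m paddle.distributed.launch` and its `--log_dir` option, the script path and every flag below it are hypothetical placeholders rather than the repository's actual arguments, so consult the repo's demo directory for the real finetune script and its options.

```script
# Hedged sketch only: the training-script path and the flags after it are
# hypothetical placeholders; look up the real finetune script and its
# arguments in the repository's demo directory.
python3 -m paddle.distributed.launch \
    --log_dir ./log \
    ./demo/seq2seq/finetune_seq2seq.py \
    --from_pretrained ernie-gen-base-en \
    --data_dir ./data/cnndm \
    --save_dir ./output
```

`paddle.distributed.launch` starts one worker per visible GPU and forwards everything after the training-script path to the script itself, so the placeholder flags above stand in for whatever arguments the finetune script actually accepts.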