Unverified commit d6247223 authored by tangjiji, committed by GitHub

add ernie-vil develop (#572)

* add ernie-vil

* Update README.md
Parent 7860c6f0
...@@ -11,6 +11,11 @@ ERNIE 2.0 builds a strong basis for nearly every NLP task: Text Classification,
[\[more information\]](https://wenxin.baidu.com/)
# News
- Sept.24.2020:
    - [`ERNIE-ViL`](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-vil) is **available** now!
        - **Knowledge-enhanced** joint representations for vision-language tasks.
        - Constructs three **Scene Graph Prediction** tasks using structured knowledge (see the illustrative sketch after the news items).
        - State-of-the-art performance on 5 downstream tasks and 1st place on the [VCR leaderboard](https://visualcommonsense.com/leaderboard/).
- May.20.2020:
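As referenced in the Sept.24.2020 items above, the Scene Graph Prediction idea can be pictured with a tiny sketch: objects, attributes, and relationships parsed from a caption's scene graph become candidate targets for three masked-prediction tasks. The snippet below is only a minimal illustration under assumed data structures; the `scene_graph` dict and the `select_masking_targets` helper are hypothetical and are not part of this repo.

```python
import random

# Hypothetical scene graph parsed from the caption
# "a brown dog sits on a red couch" -- structure is illustrative only.
scene_graph = {
    "objects":    ["dog", "couch"],
    "attributes": [("dog", "brown"), ("couch", "red")],
    "relations":  [("dog", "sits on", "couch")],
}

def select_masking_targets(graph, ratio=0.3, seed=0):
    """Pick scene-graph nodes to mask, one candidate pool per prediction task."""
    rng = random.Random(seed)
    pick = lambda items: [x for x in items if rng.random() < ratio]
    return {
        "object_prediction":    pick(graph["objects"]),
        "attribute_prediction": pick(graph["attributes"]),
        "relation_prediction":  pick(graph["relations"]),
    }

# The model would then be trained to recover the masked object / attribute /
# relationship tokens from the remaining text plus the image features.
print(select_masking_targets(scene_graph))
```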
...@@ -206,6 +211,7 @@ many other demo python scripts:
| ChnSentiCorp | 24 | 5e-5(base)/1e-5(large) |
| LCQMC | 32 | 2e-5(base)/5e-6(large) |
| NLPCC2016-DBQA| 64 | 2e-5(base)/1e-5(large) |
| VCR | 64 | 2e-5(base)/2e-5(large) |
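To make the recommended settings above concrete, here is a rough sketch of how they might be looked up programmatically; the `FINETUNE_HPARAMS` mapping and the `get_hparams` helper are hypothetical illustrations and not part of the repo's demo scripts.

```python
# Recommended fine-tuning hyperparameters from the table above,
# keyed by dataset; values are (batch_size, {model_size: learning_rate}).
FINETUNE_HPARAMS = {
    "ChnSentiCorp":   (24, {"base": 5e-5, "large": 1e-5}),
    "LCQMC":          (32, {"base": 2e-5, "large": 5e-6}),
    "NLPCC2016-DBQA": (64, {"base": 2e-5, "large": 1e-5}),
    "VCR":            (64, {"base": 2e-5, "large": 2e-5}),
}

def get_hparams(dataset, model_size="base"):
    """Return (batch_size, learning_rate) for a dataset / model-size pair."""
    batch_size, lr_by_size = FINETUNE_HPARAMS[dataset]
    return batch_size, lr_by_size[model_size]

batch_size, lr = get_hparams("VCR", "large")
print(batch_size, lr)  # -> 64 2e-05
```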
# Pretraining with ERNIE 1.0
...@@ -280,6 +286,17 @@ For details about distillation, see [here](./distill/README.md)
}
```
### ERNIE-ViL
```
@article{yu2020ernie,
title={ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph},
author={Yu, Fei and Tang, Jiji and Yin, Weichong and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:2006.16934},
year={2020}
}
```
To fully reproduce the results in the paper, please check out the `repro` branch of this repo.
### Communication
...@@ -10,6 +10,13 @@ ERNIE is Baidu's pioneering continual learning framework for semantic understanding based on knowledge enhancement
# News
- 2020.9.24:
    - The `ERNIE-ViL` model is now open-sourced! ([click here](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-vil))
        - A knowledge-enhanced pre-training framework for vision-language tasks, and the first to introduce structured knowledge into vision-language pre-training.
        - Uses knowledge from scene graphs to construct object, attribute, and relationship prediction tasks, capturing fine-grained semantic alignment across modalities.
        - Achieves the best results on five vision-language downstream tasks and ranks 1st on the [Visual Commonsense Reasoning leaderboard](https://visualcommonsense.com/).
- 2020.5.20:
    - Welcome to try ERNIE implemented in `dygraph` (dynamic graph mode):
        - Pretrain and finetune ERNIE based on [PaddlePaddle v1.8](https://github.com/PaddlePaddle/Paddle/tree/release/1.8).
...@@ -206,6 +213,7 @@ python3 -m paddle.distributed.launch \
| ChnSentiCorp | 24 | 5e-5(base)/1e-5(large) |
| LCQMC | 32 | 2e-5(base)/5e-6(large) |
| NLPCC2016-DBQA| 64 | 2e-5(base)/1e-5(large) |
| VCR | 64 | 2e-5(base)/2e-5(large) |
# Pretraining (ERNIE 1.0)
...@@ -281,6 +289,18 @@ ids = np.expand_dims(ids, -1) # ids.shape==[BATCH, SEQLEN, 1]
}
```
### ERNIE-ViL
```
@article{yu2020ernie,
title={ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph},
author={Yu, Fei and Tang, Jiji and Yin, Weichong and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:2006.16934},
year={2020}
}
```
To reproduce all experiments in the paper, please switch to the `repro` branch of this repo.
### Discussion Group
![ernie_vil](.meta/ernie-vil.png)
`ERNIE-ViL` (including our pre-trained models and the VCR task-pretrained models) has been released [here](https://github.com/PaddlePaddle/ERNIE/tree/repro/ernie-vil).