# UNIMO

Code for the ACL 2021 main conference long paper [UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning](https://arxiv.org/pdf/2012.15409.pdf).

## Abstract

Existing pre-training methods either focus on single-modal tasks or multi-modal tasks, and cannot effectively adapt to each other. They can only utilize single-modal data (i.e., text or image) or limited multi-modal data (i.e., image-text pairs). In this work, we propose a UNIfied-MOdal pre-training architecture, namely `UNIMO`, which can effectively adapt to both single-modal and multi-modal understanding and generation tasks. Large-scale free text corpora and image collections are utilized to improve the capability of visual and textual understanding, and cross-modal contrastive learning (CMCL) is leveraged to align the textual and visual information into a unified semantic space over a corpus of image-text pairs augmented with related images and texts. With the help of rich non-paired single-modal data, our model is able to learn more generalizable representations by allowing textual knowledge and visual knowledge to enhance each other in the unified semantic space. The experimental results show that `UNIMO` greatly improves the performance of several single-modal and multi-modal downstream tasks.

![UNIMO](images/framework.png#pic_center)

## Performance

Results on multi-modal understanding and generation tasks:

![UNIMO](images/multiple.png#pic_center)

Results on single-modal understanding and generation tasks:

![UNIMO](images/single.png#pic_center)

---

## TODOs

- [ ] Add all downstream tasks
- [ ] Add the UNIMO large model

## Dependencies

python 3.7.4\
paddlepaddle-gpu==1.8.4.post107\
pyrouge==0.1.3

## Pre-trained Models

`UNIMO` adopts large-scale text corpora, image collections and image-text aligned datasets as the pre-training data. We currently provide the pre-trained `UNIMO` model in one scale setting:

[UNIMO base](https://unimo.bj.bcebos.com/model/unimo_base_en.tar.gz) (lowercased | 12 layers)

```
MODEL_SIZE=base
cd /path/to/model_files
wget --no-check-certificate -q https://unimo.bj.bcebos.com/model/unimo_${MODEL_SIZE}_en.tar.gz
tar -zxf unimo_${MODEL_SIZE}_en.tar.gz
```
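The model above was pre-trained with the cross-modal contrastive learning (CMCL) objective described in the abstract, which pulls paired image and text representations together in the unified semantic space and pushes unpaired ones apart. As a rough, self-contained illustration of that idea only (not the repository's PaddlePaddle implementation, which additionally uses image-text pairs augmented with related images and texts), an InfoNCE-style contrastive loss might look like the following sketch:

```python
# Illustrative InfoNCE-style contrastive loss over a batch of paired
# image/text embeddings: matched pairs (i, i) are treated as positives,
# all other in-batch combinations as negatives. Conceptual sketch only,
# not UNIMO's actual CMCL code.
import numpy as np

def cmcl_style_loss(text_emb, image_emb, temperature=0.1):
    # L2-normalize so the dot product is a cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature                # [batch, batch] similarities

    # softmax cross-entropy with the diagonal (true pair) as the target,
    # averaged over the text-to-image and image-to-text directions
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (xent(logits) + xent(logits.T))

# toy usage with random 8-example, 64-dimensional embeddings
rng = np.random.default_rng(0)
print(cmcl_style_loss(rng.normal(size=(8, 64)), rng.normal(size=(8, 64))))
```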
## Experiments

Our fine-tuning experiments are carried out on V100 GPUs. The table below summarizes the fine-tuning setup and running time for each task with the `UNIMO` base model:

| Task Type | Dataset | Pre-trained Model | Start Command | V100 GPU Cards | Running Time |
| --- | --- | --- | --- | --- | --- |
| Text Understanding | SST-2 | UNIMO base | `sh ./script/classification/SST-2/run.sh` | 8 | 9h |
| Text Generation | CoQA | UNIMO base | `sh ./script/seq2seq/coqa/run.sh` | 4 | 7h |
| Multi-Modal Understanding | Flickr30k | UNIMO base | `sh ./script/retrieval/Flickr30k/run.sh` | 16 | 3d |
---

## Text Understanding Tasks

### (1) Sentiment Classification

#### Download the SST-2 dataset:

```
cd /path/to/data
wget --no-check-certificate -q https://unimo.bj.bcebos.com/data/SST-2.tar.gz
tar -zxf SST-2.tar.gz
```

#### Run the following command to train and evaluate on the SST-2 dataset:

For base model:

```
bash ./script/classification/SST-2/run.sh
```
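If you want to sanity-check the downloaded data before fine-tuning, the sketch below assumes the archive follows the standard GLUE SST-2 layout (a header row followed by tab-separated `sentence` / `label` columns); the actual files and paths inside `SST-2.tar.gz` may differ, so adjust accordingly.

```python
# Peek at the first few examples of an SST-2 style TSV file.
# Assumed layout (GLUE convention): header row, then "sentence<TAB>label".
import csv

def peek_sst2(path, n=3):
    with open(path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        header = next(reader)              # e.g. ["sentence", "label"]
        print("columns:", header)
        for i, row in enumerate(reader):
            if i >= n:
                break
            sentence, label = row[0], row[-1]
            print(f"[{label}] {sentence}")

# hypothetical path; point this at the extracted archive
peek_sst2("/path/to/data/SST-2/train.tsv")
```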
#### Evaluation Results:

| Model | Acc |
| --- | --- |
| UNIMO-base | 95.1 |
## Text Generation Tasks

### (1) Conversational Question Answering

#### Download the CoQA dataset:

```
cd /path/to/data
wget --no-check-certificate -q https://unimo.bj.bcebos.com/data/coqa.tar.gz
tar -zxf coqa.tar.gz
```

#### Download the evaluation script:

```
cd src/eval/tasks
wget --no-check-certificate -q https://unimo.bj.bcebos.com/eval_script/coqa.tar.gz
tar -zxf coqa.tar.gz
```

#### Run the following command to train and evaluate on the CoQA dataset:

For base model:

```
bash ./script/seq2seq/coqa/run.sh
```

#### Evaluation Results:
| Model | Acc |
| --- | --- |
| UNIMO-base | 80.2 |
## Multi-Modal Understanding Tasks

### (1) Image-Text Retrieval

#### Download the Flickr30k dataset:

##### Note: Visual features are extracted with [bottom-up-attention](https://github.com/peteanderson80/bottom-up-attention).

```
cd /path/to/data
wget --no-check-certificate -q https://unimo.bj.bcebos.com/data/Flickr30k.tar.gz  # occupies about 37G of disk space
tar -zxf Flickr30k.tar.gz
```

#### Run the following command to train and evaluate on the Flickr30k dataset:

For base model:

```
bash ./script/retrieval/Flickr30k/run.sh
```
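The retrieval results below are reported as Recall@K. As a generic illustration of how such numbers are obtained from a caption-image similarity matrix (this is not the repository's evaluation code, and the names in the example are made up):

```python
# Generic Recall@K for cross-modal retrieval: scores[i, j] is the similarity
# between caption i and image j, and gold[i] is the index of the image paired
# with caption i. R@K counts how often the gold image is among the top-K
# ranked candidates. Illustrative only.
import numpy as np

def recall_at_k(scores, gold, k):
    # rank images for every caption, highest similarity first
    ranked = np.argsort(-scores, axis=1)[:, :k]
    hits = sum(gold[i] in ranked[i] for i in range(len(gold)))
    return 100.0 * hits / len(gold)

# toy example: 5 captions, 5 images, caption i paired with image i
rng = np.random.default_rng(0)
scores = rng.normal(size=(5, 5))
gold = np.arange(5)
for k in (1, 5):
    print(f"R@{k} = {recall_at_k(scores, gold, k):.2f}")
```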
#### Evaluation Results:

Results of Image Retrieval task on Flickr30k dataset:

| Model | R@1 | R@5 | R@10 |
| --- | --- | --- | --- |
| UNIMO-base | 74.66 | 93.40 | 96.08 |
Results of Text Retrieval task on Flickr30k dataset:

| Model | R@1 | R@5 | R@10 |
| --- | --- | --- | --- |
| UNIMO-base | 89.70 | 98.40 | 99.10 |
---

Citation
---

If you find our paper and code useful, please cite the following paper:

```
@article{li2020unimo,
  title={UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning},
  author={Li, Wei and Gao, Can and Niu, Guocheng and Xiao, Xinyan and Liu, Hao and Liu, Jiachen and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2012.15409},
  year={2020}
}
```

Contact information
---

For help or issues using `UNIMO`, please submit a GitHub issue. For personal communication related to `UNIMO`, please contact Wei Li (liwei85@baidu.com), Guocheng Niu (niuguocheng@baidu.com), or Can Gao (gaocan01@baidu.com).