[Transformer](./PaddleNLP/neural_machine_translation/transformer/README.md)|Machine translation model|Based on self-attention: low computational complexity, high parallelism, learns long-range dependencies easily, and delivers better translation quality (see the sketch below this table)|[Attention Is All You Need](https://arxiv.org/abs/1706.03762)
[BERT](https://github.com/PaddlePaddle/LARK/tree/develop/BERT)|Semantic representation model|Achieves SOTA results on multiple NLP tasks; supports multi-GPU and multi-node training, as well as mixed-precision training|[BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805)
[ELMo](https://github.com/PaddlePaddle/LARK/tree/develop/ELMo)|Semantic representation model|Supports multi-GPU training, trains twice as fast as mainstream implementations, and includes a transfer-learning example for the Chinese lexical analysis task|[Deep contextualized word representations](https://arxiv.org/abs/1802.05365)
[LAC](https://github.com/baidu/lac/blob/master/README.md)|Joint lexical analysis model|Performs Chinese word segmentation, part-of-speech tagging, and named entity recognition jointly in a single model|[Chinese Lexical Analysis with Deep Bi-GRU-CRF Network](https://arxiv.org/abs/1807.01882)
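
The Transformer row above refers to self-attention. As a minimal, framework-agnostic sketch of the scaled dot-product attention from the cited paper (plain NumPy, not the Paddle implementation; the function name here is illustrative only):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d_k)) V, per the Transformer paper."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)  # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ v

# Self-attention: queries, keys, and values all come from the same sequence,
# so every token attends to every other token in one parallel matrix product.
x = np.random.randn(4, 8)  # 4 tokens, hidden size 8
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Because the attention weights are computed as one matrix product rather than a step-by-step recurrence, every token pair is connected in a single step, which is what gives the model its high parallelism and its ability to capture long-range dependencies.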