We welcome you to contribute code to PaddleHub, and thank you for your feedback.
* Many thanks to [paopjian](https://github.com/paopjian) for correcting the wrong website address [#1424](https://github.com/PaddlePaddle/PaddleHub/issues/1424)
* Many thanks to [Wgm-Inspur](https://github.com/Wgm-Inspur) for correcting demo errors in the README, and updating the RNN illustration in the text classification and sequence labeling demos
* Many thanks to [zl1271](https://github.com/zl1271) for fixing a typo in the serving docs
* Many thanks to [AK391](https://github.com/AK391) for adding web demos of the UGATIT and DeOldify models to Hugging Face Spaces
* Many thanks to [itegel](https://github.com/itegel) for fixing a typo in the quick start docs
**DeOldify Hugging Face Web Demo**: integrated into [Hugging Face Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the demo: [DeOldify](https://huggingface.co/spaces/akhaliq/deoldify)
### Image Generation
- Includes portrait cartoonization, street scene cartoonization, and style transfer (see the usage sketch below).
- Many thanks to CopyRight@[PaddleGAN](https://github.com/PaddlePaddle/PaddleGAN) and CopyRight@[AnimeGAN](https://github.com/TachibanaYoshino/AnimeGANv2) for the pre-trained models.
**UGATIT Selfie2anime Hugging Face Web Demo**: integrated into [Hugging Face Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the demo: [U-GAT-IT selfie2anime](https://huggingface.co/spaces/akhaliq/U-GAT-IT-selfie2anime)
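As a quick way to try these image-generation models locally, the sketch below loads an AnimeGAN-style module through PaddleHub. The module name `animegan_v2_hayao_99` and the `style_transfer` call are assumptions based on the PaddleHub model zoo; run `hub search animegan` to confirm the exact names and parameters.

```python
# A minimal sketch, assuming the animegan_v2_hayao_99 module from the
# PaddleHub model zoo and its style_transfer API; verify with `hub search`.
import cv2
import paddlehub as hub

# Load the pre-trained style-transfer module (downloads on first use).
model = hub.Module(name="animegan_v2_hayao_99")

# Run style transfer on a local image and save the result to ./output.
result = model.style_transfer(
    images=[cv2.imread("test.jpg")],  # BGR ndarray, as returned by cv2.imread
    visualization=True,               # write the stylized image to disk
    output_dir="output",
)
```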
### Object Detection
- Provides industrial-grade, ultra-large-scale pretrained models for pedestrian detection, vehicle detection, and more (see the usage sketch below).
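A hedged sketch of invoking one of these detectors through PaddleHub follows; the module name `yolov3_darknet53_pedestrian` and the `object_detection` API are assumptions based on the PaddleHub model zoo, so verify them with `hub search` before use.

```python
# A minimal sketch, assuming the yolov3_darknet53_pedestrian module and its
# object_detection API from the PaddleHub model zoo.
import paddlehub as hub

# Load the pre-trained pedestrian detector (downloads on first use).
detector = hub.Module(name="yolov3_darknet53_pedestrian")

# Detect pedestrians in a local image; visualization=True saves an
# annotated copy to the default output directory.
results = detector.object_detection(paths=["street.jpg"], visualization=True)
print(results)  # list of dicts with labels, confidences, and bounding boxes
```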
- In 2017, in the paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762), the Google machine translation team proposed the Transformer, a brand-new network architecture for sequence-to-sequence (Seq2Seq) learning tasks such as machine translation. The Transformer models sequence-to-sequence relationships entirely with attention mechanisms and achieves strong results.
- For how to train a Transformer model for machine translation, and further details, see [Machine Translation using Transformer](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/machine_translation/transformer); a minimal code sketch follows below.
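To make the attention-only design concrete, the sketch below instantiates a small encoder-decoder with `paddle.nn.Transformer`; the shapes and hyperparameters are illustrative assumptions, not the configuration of the PaddleNLP example linked above.

```python
# A minimal sketch of an attention-only Seq2Seq model via paddle.nn.Transformer;
# sizes are illustrative, not the PaddleNLP training configuration.
import paddle
import paddle.nn as nn

model = nn.Transformer(
    d_model=128,          # embedding width shared by encoder and decoder
    nhead=8,              # number of attention heads
    num_encoder_layers=2,
    num_decoder_layers=2,
)

src = paddle.rand((4, 10, 128))  # [batch, source_length, d_model]
tgt = paddle.rand((4, 6, 128))   # [batch, target_length, d_model]

out = model(src, tgt)  # decoder output: one d_model vector per target position
print(out.shape)       # [4, 6, 128]
```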