description: This is a re-trained 3-layer RoBERTa-wwm-ext model.
description_en: This is a re-trained 3-layer RoBERTa-wwm-ext model.
from_repo: https://huggingface.co/hfl/rbt3
icon: https://paddlenlp.bj.bcebos.com/models/community/transformer-layer.png
name: hfl/rbt3
Paper:
  - title: Pre-Training with Whole Word Masking for Chinese BERT
    url: http://arxiv.org/abs/1906.08101v3
  - title: Revisiting Pre-Trained Models for Chinese Natural Language Processing
    url: http://arxiv.org/abs/2004.13922v2
Publisher: hfl
Task:
  - sub_tag: 槽位填充
    sub_tag_en: Fill-Mask
    tag: 自然语言处理
    tag_en: Natural Language Processing