Unverified commit e75f74fb, authored by 骑马小猫, committed by GitHub

add paddlenlp community models (#5596)

* add huggingface model files

* update modelcenter

* update paddlenlp community models

* update jupyter notebook

* update community models by comment

* format model-centers

* update community models

* update yaml format

* update official notebooks

* update introduction

* add modelcenter models
Parent 7728c7ea
# Model List
## CLTL/MedRoBERTa.nl
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|CLTL/MedRoBERTa.nl| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models CLTL/MedRoBERTa.nl
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## CLTL/MedRoBERTa.nl
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|CLTL/MedRoBERTa.nl| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/CLTL/MedRoBERTa.nl/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models CLTL/MedRoBERTa.nl
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
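All of the per-file links in the tables follow the same BOS path pattern, so they can be generated rather than copied by hand. A minimal sketch (the `community_file_url` helper is hypothetical, not part of paddlenlp; only the base URL comes from the tables above):

```python
# Base path for PaddleNLP community model files hosted on Baidu BOS,
# taken from the download links in the tables above.
BOS_COMMUNITY_BASE = "https://bj.bcebos.com/paddlenlp/models/community"


def community_file_url(model_id: str, filename: str) -> str:
    """Return the BOS download URL for one file of a community model."""
    return f"{BOS_COMMUNITY_BASE}/{model_id}/{filename}"


# Example: the merges.txt file of CLTL/MedRoBERTa.nl
print(community_file_url("CLTL/MedRoBERTa.nl", "merges.txt"))
```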
Model_Info:
name: "CLTL/MedRoBERTa.nl"
description: "MedRoBERTa.nl"
description_en: "MedRoBERTa.nl"
icon: ""
from_repo: "https://huggingface.co/CLTL/MedRoBERTa.nl"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "CLTL"
License: "mit"
Language: "Dutch"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## Jean-Baptiste/roberta-large-ner-english
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|Jean-Baptiste/roberta-large-ner-english| | 1.32G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models Jean-Baptiste/roberta-large-ner-english
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## Jean-Baptiste/roberta-large-ner-english
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|Jean-Baptiste/roberta-large-ner-english| | 1.32G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/Jean-Baptiste/roberta-large-ner-english/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models Jean-Baptiste/roberta-large-ner-english
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "Jean-Baptiste/roberta-large-ner-english"
description: "roberta-large-ner-english: model fine-tuned from roberta-large for NER task"
description_en: "roberta-large-ner-english: model fine-tuned from roberta-large for NER task"
icon: ""
from_repo: "https://huggingface.co/Jean-Baptiste/roberta-large-ner-english"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Token Classification"
sub_tag: "Token分类"
Example:
Datasets: "conll2003"
Pulisher: "Jean-Baptiste"
License: "mit"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## Langboat/mengzi-bert-base-fin
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|Langboat/mengzi-bert-base-fin| | 456.84MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models Langboat/mengzi-bert-base-fin
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## Langboat/mengzi-bert-base-fin
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|Langboat/mengzi-bert-base-fin| | 456.84MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/Langboat/mengzi-bert-base-fin/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models Langboat/mengzi-bert-base-fin
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "Langboat/mengzi-bert-base-fin"
description: "Mengzi-BERT base fin model (Chinese)"
description_en: "Mengzi-BERT base fin model (Chinese)"
icon: ""
from_repo: "https://huggingface.co/Langboat/mengzi-bert-base-fin"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "Langboat"
License: "apache-2.0"
Language: "Chinese"
Paper:
- title: 'Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese'
url: 'http://arxiv.org/abs/2110.06696v2'
IfTraining: 0
IfOnlineDemo: 0
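The `Model_Info` records above are plain YAML, so downstream tooling can load them directly. A minimal sketch of reading such a record with PyYAML (assumed installed; the field names and values are taken from the Langboat block above, slightly abridged):

```python
import yaml  # PyYAML, assumed available

# Abridged copy of the Model_Info record above, as it would appear in a .yml file.
record = """
Model_Info:
  name: "Langboat/mengzi-bert-base-fin"
  description_en: "Mengzi-BERT base fin model (Chinese)"
  from_repo: "https://huggingface.co/Langboat/mengzi-bert-base-fin"
Task:
  - tag_en: "Natural Language Processing"
    sub_tag_en: "Fill-Mask"
License: "apache-2.0"
"""

data = yaml.safe_load(record)
print(data["Model_Info"]["name"])     # Langboat/mengzi-bert-base-fin
print(data["Task"][0]["sub_tag_en"])  # Fill-Mask
```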
# Model List
## PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|PlanTL-GOB-ES/roberta-base-biomedical-clinical-es| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|PlanTL-GOB-ES/roberta-base-biomedical-clinical-es| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"
description: "Biomedical-clinical language model for Spanish"
description_en: "Biomedical-clinical language model for Spanish"
icon: ""
from_repo: "https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "PlanTL-GOB-ES"
License: "apache-2.0"
Language: "Spanish"
Paper:
- title: 'Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario'
url: 'http://arxiv.org/abs/2109.03570v2'
- title: 'Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models'
url: 'http://arxiv.org/abs/2109.07765v1'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## PlanTL-GOB-ES/roberta-base-biomedical-es
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|PlanTL-GOB-ES/roberta-base-biomedical-es| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models PlanTL-GOB-ES/roberta-base-biomedical-es
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## PlanTL-GOB-ES/roberta-base-biomedical-es
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|PlanTL-GOB-ES/roberta-base-biomedical-es| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-biomedical-es/vocab.json) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models PlanTL-GOB-ES/roberta-base-biomedical-es
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "PlanTL-GOB-ES/roberta-base-biomedical-es"
description: "Biomedical language model for Spanish"
description_en: "Biomedical language model for Spanish"
icon: ""
from_repo: "https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "PlanTL-GOB-ES"
License: "apache-2.0"
Language: "Spanish"
Paper:
- title: 'Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario'
url: 'http://arxiv.org/abs/2109.03570v2'
- title: 'Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models'
url: 'http://arxiv.org/abs/2109.07765v1'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## PlanTL-GOB-ES/roberta-base-ca
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|PlanTL-GOB-ES/roberta-base-ca| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models PlanTL-GOB-ES/roberta-base-ca
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## PlanTL-GOB-ES/roberta-base-ca
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|PlanTL-GOB-ES/roberta-base-ca| | 633.14MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/PlanTL-GOB-ES/roberta-base-ca/vocab.json) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models PlanTL-GOB-ES/roberta-base-ca
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "PlanTL-GOB-ES/roberta-base-ca"
description: "BERTa: RoBERTa-based Catalan language model"
description_en: "BERTa: RoBERTa-based Catalan language model"
icon: ""
from_repo: "https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "PlanTL-GOB-ES"
License: "apache-2.0"
Language: "Catalan"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## Recognai/bert-base-spanish-wwm-cased-xnli
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|Recognai/bert-base-spanish-wwm-cased-xnli| | 419.08MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models Recognai/bert-base-spanish-wwm-cased-xnli
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## Recognai/bert-base-spanish-wwm-cased-xnli
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|Recognai/bert-base-spanish-wwm-cased-xnli| | 419.08MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/Recognai/bert-base-spanish-wwm-cased-xnli/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models Recognai/bert-base-spanish-wwm-cased-xnli
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "Recognai/bert-base-spanish-wwm-cased-xnli"
description: "bert-base-spanish-wwm-cased-xnli"
description_en: "bert-base-spanish-wwm-cased-xnli"
icon: ""
from_repo: "https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Zero-Shot Classification"
sub_tag: "零样本分类"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: "xnli"
Pulisher: "Recognai"
License: "mit"
Language: "Spanish"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## allenai/macaw-3b
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|allenai/macaw-3b| | 10.99G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-3b/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-3b/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-3b/tokenizer_config.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models allenai/macaw-3b
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## allenai/macaw-3b
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|allenai/macaw-3b| | 10.99G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-3b/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-3b/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-3b/tokenizer_config.json) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models allenai/macaw-3b
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "allenai/macaw-3b"
description: "macaw-3b"
description_en: "macaw-3b"
icon: ""
from_repo: "https://huggingface.co/allenai/macaw-3b"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "allenai"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## allenai/macaw-large
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|allenai/macaw-large| | 3.12G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-large/tokenizer_config.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models allenai/macaw-large
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## allenai/macaw-large
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|allenai/macaw-large| | 3.12G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/macaw-large/tokenizer_config.json) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models allenai/macaw-large
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "allenai/macaw-large"
description: "macaw-large"
description_en: "macaw-large"
icon: ""
from_repo: "https://huggingface.co/allenai/macaw-large"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "allenai"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## allenai/specter
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|allenai/specter| | 419.41MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models allenai/specter
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## allenai/specter
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|allenai/specter| | 419.41MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/allenai/specter/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models allenai/specter
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "allenai/specter"
description: "SPECTER"
description_en: "SPECTER"
icon: ""
from_repo: "https://huggingface.co/allenai/specter"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Feature Extraction"
sub_tag: "特征抽取"
Example:
Datasets: ""
Pulisher: "allenai"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'SPECTER: Document-level Representation Learning using Citation-informed Transformers'
url: 'http://arxiv.org/abs/2004.07180v4'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## alvaroalon2/biobert_chemical_ner
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|alvaroalon2/biobert_chemical_ner| | 410.95MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models alvaroalon2/biobert_chemical_ner
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## alvaroalon2/biobert_chemical_ner
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|alvaroalon2/biobert_chemical_ner| | 410.95MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_chemical_ner/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models alvaroalon2/biobert_chemical_ner
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "alvaroalon2/biobert_chemical_ner"
description: ""
description_en: ""
icon: ""
from_repo: "https://huggingface.co/alvaroalon2/biobert_chemical_ner"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Token Classification"
sub_tag: "Token分类"
Example:
Datasets: ""
Pulisher: "alvaroalon2"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## alvaroalon2/biobert_diseases_ner
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|alvaroalon2/biobert_diseases_ner| | 410.95MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models alvaroalon2/biobert_diseases_ner
```
If you encounter any problems while downloading, feel free to open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## alvaroalon2/biobert_diseases_ner
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|alvaroalon2/biobert_diseases_ner| | 410.95MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_diseases_ner/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models alvaroalon2/biobert_diseases_ner
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "alvaroalon2/biobert_diseases_ner"
description: ""
description_en: ""
icon: ""
from_repo: "https://huggingface.co/alvaroalon2/biobert_diseases_ner"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Token Classification"
sub_tag: "Token分类"
Example:
Datasets: "ncbi_disease"
Pulisher: "alvaroalon2"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## alvaroalon2/biobert_genetic_ner
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|alvaroalon2/biobert_genetic_ner| | 410.95MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models alvaroalon2/biobert_genetic_ner
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## alvaroalon2/biobert_genetic_ner
| model | description | model_size | download |
| --- | --- | --- | --- |
|alvaroalon2/biobert_genetic_ner| | 410.95MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/alvaroalon2/biobert_genetic_ner/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models alvaroalon2/biobert_genetic_ner
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "alvaroalon2/biobert_genetic_ner"
description: ""
description_en: ""
icon: ""
from_repo: "https://huggingface.co/alvaroalon2/biobert_genetic_ner"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Token Classification"
sub_tag: "Token分类"
Example:
Datasets: ""
Pulisher: "alvaroalon2"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## amberoad/bert-multilingual-passage-reranking-msmarco
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|amberoad/bert-multilingual-passage-reranking-msmarco| | 638.44MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models amberoad/bert-multilingual-passage-reranking-msmarco
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## amberoad/bert-multilingual-passage-reranking-msmarco
| model | description | model_size | download |
| --- | --- | --- | --- |
|amberoad/bert-multilingual-passage-reranking-msmarco| | 638.44MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/amberoad/bert-multilingual-passage-reranking-msmarco/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models amberoad/bert-multilingual-passage-reranking-msmarco
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
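After running the download command, the files can be located relative to `--cache-dir`. A small sketch, under the assumption (made for illustration, not a documented guarantee of the CLI) that the model name is mirrored as a subdirectory:

```python
import os

def local_model_files(cache_dir, model_name, filenames):
    """Expected on-disk paths after `paddlenlp download --cache-dir ...`.

    Assumes files land under <cache_dir>/<model_name>/ -- an assumption
    made for illustration, not a documented guarantee of the CLI.
    """
    return [os.path.join(cache_dir, model_name, name) for name in filenames]

paths = local_model_files(
    "./pretrained_models",
    "amberoad/bert-multilingual-passage-reranking-msmarco",
    ["model_config.json", "model_state.pdparams"],
)
for p in paths:
    print(p)
```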
Model_Info:
name: "amberoad/bert-multilingual-passage-reranking-msmarco"
description: "Passage Reranking Multilingual BERT 🔃 🌍"
description_en: "Passage Reranking Multilingual BERT 🔃 🌍"
icon: ""
from_repo: "https://huggingface.co/amberoad/bert-multilingual-passage-reranking-msmarco"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "amberoad"
License: "apache-2.0"
Language: ""
Paper:
- title: 'Passage Re-ranking with BERT'
url: 'http://arxiv.org/abs/1901.04085v5'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## asi/gpt-fr-cased-base
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|asi/gpt-fr-cased-base| | 4.12G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models asi/gpt-fr-cased-base
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## asi/gpt-fr-cased-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|asi/gpt-fr-cased-base| | 4.12G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-base/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models asi/gpt-fr-cased-base
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "asi/gpt-fr-cased-base"
description: "Model description"
description_en: "Model description"
icon: ""
from_repo: "https://huggingface.co/asi/gpt-fr-cased-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "asi"
License: "apache-2.0"
Language: "French"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## asi/gpt-fr-cased-small
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|asi/gpt-fr-cased-small| | 620.45MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models asi/gpt-fr-cased-small
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## asi/gpt-fr-cased-small
| model | description | model_size | download |
| --- | --- | --- | --- |
|asi/gpt-fr-cased-small| | 620.45MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/asi/gpt-fr-cased-small/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models asi/gpt-fr-cased-small
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "asi/gpt-fr-cased-small"
description: "Model description"
description_en: "Model description"
icon: ""
from_repo: "https://huggingface.co/asi/gpt-fr-cased-small"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "asi"
License: "apache-2.0"
Language: "French"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## benjamin/gerpt2-large
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|benjamin/gerpt2-large| | 3.12G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models benjamin/gerpt2-large
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## benjamin/gerpt2-large
| model | description | model_size | download |
| --- | --- | --- | --- |
|benjamin/gerpt2-large| | 3.12G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2-large/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models benjamin/gerpt2-large
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "benjamin/gerpt2-large"
description: "GerPT2"
description_en: "GerPT2"
icon: ""
from_repo: "https://huggingface.co/benjamin/gerpt2-large"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "benjamin"
License: "mit"
Language: "German"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## benjamin/gerpt2
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|benjamin/gerpt2| | 621.95MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models benjamin/gerpt2
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## benjamin/gerpt2
| model | description | model_size | download |
| --- | --- | --- | --- |
|benjamin/gerpt2| | 621.95MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/benjamin/gerpt2/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models benjamin/gerpt2
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "benjamin/gerpt2"
description: "GerPT2"
description_en: "GerPT2"
icon: ""
from_repo: "https://huggingface.co/benjamin/gerpt2"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "benjamin"
License: "mit"
Language: "German"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## beomi/kcbert-base
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|beomi/kcbert-base| | 505.90MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models beomi/kcbert-base
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## beomi/kcbert-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|beomi/kcbert-base| | 505.90MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/beomi/kcbert-base/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models beomi/kcbert-base
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "beomi/kcbert-base"
description: "KcBERT: Korean comments BERT"
description_en: "KcBERT: Korean comments BERT"
icon: ""
from_repo: "https://huggingface.co/beomi/kcbert-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "beomi"
License: "apache-2.0"
Language: "Korean"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bhadresh-savani/roberta-base-emotion
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|bhadresh-savani/roberta-base-emotion| | 475.53MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models bhadresh-savani/roberta-base-emotion
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bhadresh-savani/roberta-base-emotion
| model | description | model_size | download |
| --- | --- | --- | --- |
|bhadresh-savani/roberta-base-emotion| | 475.53MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bhadresh-savani/roberta-base-emotion/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bhadresh-savani/roberta-base-emotion
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bhadresh-savani/roberta-base-emotion"
description: "robert-base-emotion"
description_en: "robert-base-emotion"
icon: ""
from_repo: "https://huggingface.co/bhadresh-savani/roberta-base-emotion"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: "emotion"
Pulisher: "bhadresh-savani"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'RoBERTa: A Robustly Optimized BERT Pretraining Approach'
url: 'http://arxiv.org/abs/1907.11692v1'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## cahya/bert-base-indonesian-522M
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|cahya/bert-base-indonesian-522M| | 518.25MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models cahya/bert-base-indonesian-522M
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## cahya/bert-base-indonesian-522M
| model | description | model_size | download |
| --- | --- | --- | --- |
|cahya/bert-base-indonesian-522M| | 518.25MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cahya/bert-base-indonesian-522M/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cahya/bert-base-indonesian-522M
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cahya/bert-base-indonesian-522M"
description: "Indonesian BERT base model (uncased)"
description_en: "Indonesian BERT base model (uncased)"
icon: ""
from_repo: "https://huggingface.co/cahya/bert-base-indonesian-522M"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "wikipedia"
Pulisher: "cahya"
License: "mit"
Language: "Indonesian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## cahya/gpt2-small-indonesian-522M
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|cahya/gpt2-small-indonesian-522M| | 621.95MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models cahya/gpt2-small-indonesian-522M
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## cahya/gpt2-small-indonesian-522M
| model | description | model_size | download |
| --- | --- | --- | --- |
|cahya/gpt2-small-indonesian-522M| | 621.95MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cahya/gpt2-small-indonesian-522M/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cahya/gpt2-small-indonesian-522M
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cahya/gpt2-small-indonesian-522M"
description: "Indonesian GPT2 small model"
description_en: "Indonesian GPT2 small model"
icon: ""
from_repo: "https://huggingface.co/cahya/gpt2-small-indonesian-522M"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "cahya"
License: "mit"
Language: "Indonesian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## ceshine/t5-paraphrase-paws-msrp-opinosis
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|ceshine/t5-paraphrase-paws-msrp-opinosis| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-paws-msrp-opinosis/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-paws-msrp-opinosis/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-paws-msrp-opinosis/tokenizer_config.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models ceshine/t5-paraphrase-paws-msrp-opinosis
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## ceshine/t5-paraphrase-paws-msrp-opinosis
| model | description | model_size | download |
| --- | --- | --- | --- |
|ceshine/t5-paraphrase-paws-msrp-opinosis| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-paws-msrp-opinosis/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-paws-msrp-opinosis/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-paws-msrp-opinosis/tokenizer_config.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models ceshine/t5-paraphrase-paws-msrp-opinosis
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "ceshine/t5-paraphrase-paws-msrp-opinosis"
description: "T5-base Paraphrasing model fine-tuned on PAWS, MSRP, and Opinosis"
description_en: "T5-base Paraphrasing model fine-tuned on PAWS, MSRP, and Opinosis"
icon: ""
from_repo: "https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "ceshine"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## ceshine/t5-paraphrase-quora-paws
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|ceshine/t5-paraphrase-quora-paws| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-quora-paws/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-quora-paws/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-quora-paws/tokenizer_config.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models ceshine/t5-paraphrase-quora-paws
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## ceshine/t5-paraphrase-quora-paws
| model | description | model_size | download |
| --- | --- | --- | --- |
|ceshine/t5-paraphrase-quora-paws| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-quora-paws/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-quora-paws/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/ceshine/t5-paraphrase-quora-paws/tokenizer_config.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models ceshine/t5-paraphrase-quora-paws
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "ceshine/t5-paraphrase-quora-paws"
description: "T5-base Paraphrasing model fine-tuned on PAWS and Quora"
description_en: "T5-base Paraphrasing model fine-tuned on PAWS and Quora"
icon: ""
from_repo: "https://huggingface.co/ceshine/t5-paraphrase-quora-paws"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "ceshine"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## cointegrated/rubert-tiny
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|cointegrated/rubert-tiny| | 80.75MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models cointegrated/rubert-tiny
```
If you run into any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## cointegrated/rubert-tiny
| model | description | model_size | download |
| --- | --- | --- | --- |
|cointegrated/rubert-tiny| | 80.75MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cointegrated/rubert-tiny
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cointegrated/rubert-tiny"
description: "pip install transformers sentencepiece"
description_en: "pip install transformers sentencepiece"
icon: ""
from_repo: "https://huggingface.co/cointegrated/rubert-tiny"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Feature Extraction"
sub_tag: "特征抽取"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Sentence Similarity"
sub_tag: "句子相似度"
Example:
Datasets: ""
Pulisher: "cointegrated"
License: "mit"
Language: "Russian,English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cointegrated/rubert-tiny2
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cointegrated/rubert-tiny2| | 212.18MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cointegrated/rubert-tiny2
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cointegrated/rubert-tiny2
| model | description | model_size | download |
| --- | --- | --- | --- |
|cointegrated/rubert-tiny2| | 212.18MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cointegrated/rubert-tiny2/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cointegrated/rubert-tiny2
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cointegrated/rubert-tiny2"
description: "pip install transformers sentencepiece"
description_en: "pip install transformers sentencepiece"
icon: ""
from_repo: "https://huggingface.co/cointegrated/rubert-tiny2"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Feature Extraction"
sub_tag: "特征抽取"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Sentence Similarity"
sub_tag: "句子相似度"
Example:
Datasets: ""
Pulisher: "cointegrated"
License: "mit"
Language: "Russian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/ms-marco-MiniLM-L-12-v2
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/ms-marco-MiniLM-L-12-v2| | 127.28MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/ms-marco-MiniLM-L-12-v2
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/ms-marco-MiniLM-L-12-v2
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/ms-marco-MiniLM-L-12-v2| | 127.28MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-MiniLM-L-12-v2/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/ms-marco-MiniLM-L-12-v2
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/ms-marco-MiniLM-L-12-v2"
description: "Cross-Encoder for MS Marco"
description_en: "Cross-Encoder for MS Marco"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-12-v2"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/ms-marco-TinyBERT-L-2
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/ms-marco-TinyBERT-L-2| | 16.74MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/ms-marco-TinyBERT-L-2
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/ms-marco-TinyBERT-L-2
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/ms-marco-TinyBERT-L-2| | 16.74MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/ms-marco-TinyBERT-L-2/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/ms-marco-TinyBERT-L-2
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/ms-marco-TinyBERT-L-2"
description: "Cross-Encoder for MS Marco"
description_en: "Cross-Encoder for MS Marco"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/nli-MiniLM2-L6-H768
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/nli-MiniLM2-L6-H768| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/nli-MiniLM2-L6-H768
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/nli-MiniLM2-L6-H768
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/nli-MiniLM2-L6-H768| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-MiniLM2-L6-H768/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/nli-MiniLM2-L6-H768
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/nli-MiniLM2-L6-H768"
description: "Cross-Encoder for Natural Language Inference"
description_en: "Cross-Encoder for Natural Language Inference"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/nli-MiniLM2-L6-H768"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Zero-Shot Classification"
sub_tag: "零样本分类"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: "multi_nli,snli"
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/nli-distilroberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/nli-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/nli-distilroberta-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/nli-distilroberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/nli-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-distilroberta-base/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/nli-distilroberta-base
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/nli-distilroberta-base"
description: "Cross-Encoder for Natural Language Inference"
description_en: "Cross-Encoder for Natural Language Inference"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/nli-distilroberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Zero-Shot Classification"
sub_tag: "零样本分类"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: "multi_nli,snli"
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/nli-roberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/nli-roberta-base| | 475.52MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/nli-roberta-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/nli-roberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/nli-roberta-base| | 475.52MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/nli-roberta-base/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/nli-roberta-base
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/nli-roberta-base"
description: "Cross-Encoder for Natural Language Inference"
description_en: "Cross-Encoder for Natural Language Inference"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/nli-roberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Zero-Shot Classification"
sub_tag: "零样本分类"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: "multi_nli,snli"
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/qnli-distilroberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/qnli-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/qnli-distilroberta-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/qnli-distilroberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/qnli-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/qnli-distilroberta-base/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/qnli-distilroberta-base
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/qnli-distilroberta-base"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/qnli-distilroberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
- title: 'GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding'
url: 'http://arxiv.org/abs/1804.07461v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/quora-distilroberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/quora-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/quora-distilroberta-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/quora-distilroberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/quora-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-distilroberta-base/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/quora-distilroberta-base
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/quora-distilroberta-base"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/quora-distilroberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/quora-roberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/quora-roberta-base| | 475.51MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/quora-roberta-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/quora-roberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/quora-roberta-base| | 475.51MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/quora-roberta-base/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/quora-roberta-base
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/quora-roberta-base"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/quora-roberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/stsb-TinyBERT-L-4
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/stsb-TinyBERT-L-4| | 54.75MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-TinyBERT-L-4
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/stsb-TinyBERT-L-4
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/stsb-TinyBERT-L-4| | 54.75MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-TinyBERT-L-4/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-TinyBERT-L-4
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
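With `--cache-dir ./pretrained_models`, the downloaded files are expected to land under a per-model subdirectory. The sketch below builds those expected local paths; the `<cache_dir>/<model_name>/<file>` layout is an assumption for illustration, not documented paddlenlp behavior.

```python
import os

def expected_local_paths(cache_dir: str, model_name: str, filenames):
    # Assumed layout after `paddlenlp download --cache-dir <cache_dir> <model_name>`:
    # each listed file sits under <cache_dir>/<model_name>/.
    return [os.path.join(cache_dir, model_name, f) for f in filenames]

for path in expected_local_paths(
    "./pretrained_models",
    "cross-encoder/stsb-TinyBERT-L-4",
    ["model_config.json", "model_state.pdparams", "tokenizer_config.json", "vocab.txt"],
):
    print(path)
```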
Model_Info:
name: "cross-encoder/stsb-TinyBERT-L-4"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/stsb-TinyBERT-L-4"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/stsb-distilroberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/stsb-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-distilroberta-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## cross-encoder/stsb-distilroberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/stsb-distilroberta-base| | 313.28MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-distilroberta-base/vocab.txt) |
Alternatively, you can download all of the model files with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-distilroberta-base
```
If you run into any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/stsb-distilroberta-base"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/stsb-distilroberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## cross-encoder/stsb-roberta-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|cross-encoder/stsb-roberta-base| | 475.51MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-roberta-base
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## cross-encoder/stsb-roberta-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/stsb-roberta-base| | 475.51MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-base/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-roberta-base
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
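Besides the CLI, the individual files listed in the table above can be fetched directly, since every community model follows the same base-URL pattern. The sketch below (an illustration, not part of PaddleNLP) just builds those direct download URLs:

```python
# Sketch: build direct BOS download URLs for a community model's files,
# following the URL pattern visible in the tables above. The file list
# is taken from the cross-encoder/stsb-roberta-base row.
BASE = "https://bj.bcebos.com/paddlenlp/models/community"

def file_urls(model_name, filenames):
    """Return the direct download URL for each file of a community model."""
    return [f"{BASE}/{model_name}/{name}" for name in filenames]

urls = file_urls(
    "cross-encoder/stsb-roberta-base",
    ["merges.txt", "model_config.json", "model_state.pdparams",
     "tokenizer_config.json", "vocab.json", "vocab.txt"],
)
for u in urls:
    print(u)
```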
Model_Info:
name: "cross-encoder/stsb-roberta-base"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/stsb-roberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## cross-encoder/stsb-roberta-large
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|cross-encoder/stsb-roberta-large| | 1.32G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-roberta-large
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## cross-encoder/stsb-roberta-large
| model | description | model_size | download |
| --- | --- | --- | --- |
|cross-encoder/stsb-roberta-large| | 1.32G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/cross-encoder/stsb-roberta-large/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models cross-encoder/stsb-roberta-large
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "cross-encoder/stsb-roberta-large"
description: "Cross-Encoder for Quora Duplicate Questions Detection"
description_en: "Cross-Encoder for Quora Duplicate Questions Detection"
icon: ""
from_repo: "https://huggingface.co/cross-encoder/stsb-roberta-large"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Classification"
sub_tag: "文本分类"
Example:
Datasets: ""
Pulisher: "cross-encoder"
License: "apache-2.0"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## csarron/roberta-base-squad-v1
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|csarron/roberta-base-squad-v1| | 475.51MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models csarron/roberta-base-squad-v1
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## csarron/roberta-base-squad-v1
| model | description | model_size | download |
| --- | --- | --- | --- |
|csarron/roberta-base-squad-v1| | 475.51MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/csarron/roberta-base-squad-v1/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models csarron/roberta-base-squad-v1
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "csarron/roberta-base-squad-v1"
description: "RoBERTa-base fine-tuned on SQuAD v1"
description_en: "RoBERTa-base fine-tuned on SQuAD v1"
icon: ""
from_repo: "https://huggingface.co/csarron/roberta-base-squad-v1"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Question Answering"
sub_tag: "回答问题"
Example:
Datasets: "squad"
Pulisher: "csarron"
License: "mit"
Language: "English"
Paper:
- title: 'RoBERTa: A Robustly Optimized BERT Pretraining Approach'
url: 'http://arxiv.org/abs/1907.11692v1'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-german-cased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-german-cased| | 512.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-german-cased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-german-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-german-cased| | 512.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-cased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-german-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dbmdz/bert-base-german-cased"
description: "🤗 + 📚 dbmdz German BERT models"
description_en: "🤗 + 📚 dbmdz German BERT models"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-german-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "dbmdz"
License: "mit"
Language: "German"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-german-uncased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-german-uncased| | 512.87MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-german-uncased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-german-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-german-uncased| | 512.87MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-german-uncased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-german-uncased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dbmdz/bert-base-german-uncased"
description: "🤗 + 📚 dbmdz German BERT models"
description_en: "🤗 + 📚 dbmdz German BERT models"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-german-uncased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "dbmdz"
License: "mit"
Language: "German"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-italian-uncased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-italian-uncased| | 512.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-italian-uncased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-italian-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-italian-uncased| | 512.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-uncased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-italian-uncased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dbmdz/bert-base-italian-uncased"
description: "🤗 + 📚 dbmdz BERT and ELECTRA models"
description_en: "🤗 + 📚 dbmdz BERT and ELECTRA models"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-italian-uncased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "wikipedia"
Pulisher: "dbmdz"
License: "mit"
Language: "Italian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-italian-xxl-cased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-italian-xxl-cased| | 518.85MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-italian-xxl-cased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-italian-xxl-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-italian-xxl-cased| | 518.85MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-italian-xxl-cased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-italian-xxl-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dbmdz/bert-base-italian-xxl-cased"
description: "🤗 + 📚 dbmdz BERT and ELECTRA models"
description_en: "🤗 + 📚 dbmdz BERT and ELECTRA models"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-italian-xxl-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "wikipedia"
Pulisher: "dbmdz"
License: "mit"
Language: "Italian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-turkish-128k-cased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-turkish-128k-cased| | 1.06G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-turkish-128k-cased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-turkish-128k-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-turkish-128k-cased| | 1.06G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-128k-cased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-turkish-128k-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
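The sizes in these tables mix units ("518.25MB", "1.06G"). A small helper (an illustrative sketch, not part of PaddleNLP) can normalize them to bytes so models are directly comparable:

```python
# Sketch: parse the human-readable size strings used in these tables
# (e.g. "518.25MB", "1.06G") into byte counts, assuming binary units.
UNITS = {"KB": 1024, "MB": 1024**2, "G": 1024**3, "GB": 1024**3}

def size_to_bytes(size):
    """Parse a size string like '1.06G' or '518.25MB' into bytes."""
    # Try longer suffixes first so 'GB' is matched before 'G'.
    for unit in sorted(UNITS, key=len, reverse=True):
        if size.endswith(unit):
            return int(float(size[: -len(unit)]) * UNITS[unit])
    raise ValueError(f"unrecognized size: {size}")

print(size_to_bytes("518.25MB"))
print(size_to_bytes("1.06G"))
```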
Model_Info:
name: "dbmdz/bert-base-turkish-128k-cased"
description: "🤗 + 📚 dbmdz Turkish BERT model"
description_en: "🤗 + 📚 dbmdz Turkish BERT model"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-turkish-128k-cased"
Task:
Example:
Datasets: ""
Pulisher: "dbmdz"
License: "mit"
Language: "Turkish"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-turkish-cased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-turkish-cased| | 518.25MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-turkish-cased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-turkish-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-turkish-cased| | 518.25MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-cased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-turkish-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dbmdz/bert-base-turkish-cased"
description: "🤗 + 📚 dbmdz Turkish BERT model"
description_en: "🤗 + 📚 dbmdz Turkish BERT model"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-turkish-cased"
Task:
Example:
Datasets: ""
Pulisher: "dbmdz"
License: "mit"
Language: "Turkish"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## dbmdz/bert-base-turkish-uncased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|dbmdz/bert-base-turkish-uncased| | 518.25MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-turkish-uncased
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## dbmdz/bert-base-turkish-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|dbmdz/bert-base-turkish-uncased| | 518.25MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dbmdz/bert-base-turkish-uncased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dbmdz/bert-base-turkish-uncased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dbmdz/bert-base-turkish-uncased"
description: "🤗 + 📚 dbmdz Turkish BERT model"
description_en: "🤗 + 📚 dbmdz Turkish BERT model"
icon: ""
from_repo: "https://huggingface.co/dbmdz/bert-base-turkish-uncased"
Task:
Example:
Datasets: ""
Pulisher: "dbmdz"
License: "mit"
Language: "Turkish"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## deepparag/Aeona
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|deepparag/Aeona| | 1.51G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models deepparag/Aeona
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## deepparag/Aeona
| model | description | model_size | download |
| --- | --- | --- | --- |
|deepparag/Aeona| | 1.51G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/Aeona/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models deepparag/Aeona
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "deepparag/Aeona"
description: "Aeona | Chatbot"
description_en: "Aeona | Chatbot"
icon: ""
from_repo: "https://huggingface.co/deepparag/Aeona"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "deepparag"
License: "mit"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## deepparag/DumBot
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|deepparag/DumBot| | 621.95MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models deepparag/DumBot
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## deepparag/DumBot
| model | description | model_size | download |
| --- | --- | --- | --- |
|deepparag/DumBot| | 621.95MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/deepparag/DumBot/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models deepparag/DumBot
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "deepparag/DumBot"
description: "THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona)"
description_en: "THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona)"
icon: ""
from_repo: "https://huggingface.co/deepparag/DumBot"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "deepparag"
License: "mit"
Language: ""
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## deepset/roberta-base-squad2-distilled
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|deepset/roberta-base-squad2-distilled| | 473.26MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download via the command line
```shell
paddlenlp download --cache-dir ./pretrained_models deepset/roberta-base-squad2-distilled
```
If you have any download problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## deepset/roberta-base-squad2-distilled
| model | description | model_size | download |
| --- | --- | --- | --- |
|deepset/roberta-base-squad2-distilled| | 473.26MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/deepset/roberta-base-squad2-distilled/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models deepset/roberta-base-squad2-distilled
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
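After running `paddlenlp download --cache-dir ./pretrained_models <model>`, the files are stored under the cache directory. The sketch below only builds the expected local paths for the files listed in the table above; the `<cache-dir>/<org>/<model>/` layout is an assumption for illustration, and nothing is downloaded:

```python
# Sketch (assumption): expected on-disk layout after a CLI download,
# i.e. <cache-dir>/<org>/<model>/<file>. Path construction only.
from pathlib import Path

def local_paths(cache_dir, model_name, filenames):
    """Build the expected local path for each downloaded model file."""
    root = Path(cache_dir) / model_name
    return [root / name for name in filenames]

paths = local_paths(
    "./pretrained_models",
    "deepset/roberta-base-squad2-distilled",
    ["merges.txt", "model_config.json", "model_state.pdparams"],
)
for p in paths:
    print(p)
```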
Model_Info:
name: "deepset/roberta-base-squad2-distilled"
description: "Overview"
description_en: "Overview"
icon: ""
from_repo: "https://huggingface.co/deepset/roberta-base-squad2-distilled"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Question Answering"
  sub_tag: "问答"
Example:
Datasets: "squad_v2"
Pulisher: "deepset"
License: "mit"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## dslim/bert-base-NER
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|dslim/bert-base-NER| | 413.22MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models dslim/bert-base-NER
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## dslim/bert-base-NER
| model | description | model_size | download |
| --- | --- | --- | --- |
|dslim/bert-base-NER| | 413.22MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-base-NER/vocab.txt) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dslim/bert-base-NER
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dslim/bert-base-NER"
description: "bert-base-NER"
description_en: "bert-base-NER"
icon: ""
from_repo: "https://huggingface.co/dslim/bert-base-NER"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Token Classification"
sub_tag: "Token分类"
Example:
Datasets: "conll2003"
Pulisher: "dslim"
License: "mit"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## dslim/bert-large-NER
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|dslim/bert-large-NER| | 1.24G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models dslim/bert-large-NER
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## dslim/bert-large-NER
| model | description | model_size | download |
| --- | --- | --- | --- |
|dslim/bert-large-NER| | 1.24G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dslim/bert-large-NER/vocab.txt) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dslim/bert-large-NER
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dslim/bert-large-NER"
  description: "bert-large-NER"
  description_en: "bert-large-NER"
icon: ""
from_repo: "https://huggingface.co/dslim/bert-large-NER"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Token Classification"
sub_tag: "Token分类"
Example:
Datasets: "conll2003"
Pulisher: "dslim"
License: "mit"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## dumitrescustefan/bert-base-romanian-cased-v1
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|dumitrescustefan/bert-base-romanian-cased-v1| | 623.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models dumitrescustefan/bert-base-romanian-cased-v1
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## dumitrescustefan/bert-base-romanian-cased-v1
| model | description | model_size | download |
| --- | --- | --- | --- |
|dumitrescustefan/bert-base-romanian-cased-v1| | 623.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-cased-v1/vocab.txt) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dumitrescustefan/bert-base-romanian-cased-v1
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dumitrescustefan/bert-base-romanian-cased-v1"
description: "bert-base-romanian-cased-v1"
description_en: "bert-base-romanian-cased-v1"
icon: ""
from_repo: "https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
    sub_tag: "完形填空"
Example:
Datasets: ""
Pulisher: "dumitrescustefan"
License: "mit"
Language: "Romanian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## dumitrescustefan/bert-base-romanian-uncased-v1
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|dumitrescustefan/bert-base-romanian-uncased-v1| | 623.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models dumitrescustefan/bert-base-romanian-uncased-v1
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## dumitrescustefan/bert-base-romanian-uncased-v1
| model | description | model_size | download |
| --- | --- | --- | --- |
|dumitrescustefan/bert-base-romanian-uncased-v1| | 623.86MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/dumitrescustefan/bert-base-romanian-uncased-v1/vocab.txt) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models dumitrescustefan/bert-base-romanian-uncased-v1
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "dumitrescustefan/bert-base-romanian-uncased-v1"
description: "bert-base-romanian-uncased-v1"
description_en: "bert-base-romanian-uncased-v1"
icon: ""
from_repo: "https://huggingface.co/dumitrescustefan/bert-base-romanian-uncased-v1"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
    sub_tag: "完形填空"
Example:
Datasets: ""
Pulisher: "dumitrescustefan"
License: "mit"
Language: "Romanian"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## emilyalsentzer/Bio_ClinicalBERT
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|emilyalsentzer/Bio_ClinicalBERT| | 500.52MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models emilyalsentzer/Bio_ClinicalBERT
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## emilyalsentzer/Bio_ClinicalBERT
| model | description | model_size | download |
| --- | --- | --- | --- |
|emilyalsentzer/Bio_ClinicalBERT| | 500.52MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_ClinicalBERT/vocab.txt) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models emilyalsentzer/Bio_ClinicalBERT
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "emilyalsentzer/Bio_ClinicalBERT"
description: "ClinicalBERT - Bio + Clinical BERT Model"
description_en: "ClinicalBERT - Bio + Clinical BERT Model"
icon: ""
from_repo: "https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
    sub_tag: "完形填空"
Example:
Datasets: ""
Pulisher: "emilyalsentzer"
License: "mit"
Language: "English"
Paper:
- title: 'Publicly Available Clinical BERT Embeddings'
url: 'http://arxiv.org/abs/1904.03323v3'
- title: 'BioBERT: a pre-trained biomedical language representation model for biomedical text mining'
url: 'http://arxiv.org/abs/1901.08746v4'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## emilyalsentzer/Bio_Discharge_Summary_BERT
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|emilyalsentzer/Bio_Discharge_Summary_BERT| | 500.52MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/vocab.txt) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models emilyalsentzer/Bio_Discharge_Summary_BERT
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## emilyalsentzer/Bio_Discharge_Summary_BERT
| model | description | model_size | download |
| --- | --- | --- | --- |
|emilyalsentzer/Bio_Discharge_Summary_BERT| | 500.52MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/emilyalsentzer/Bio_Discharge_Summary_BERT/vocab.txt) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models emilyalsentzer/Bio_Discharge_Summary_BERT
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "emilyalsentzer/Bio_Discharge_Summary_BERT"
description: "ClinicalBERT - Bio + Discharge Summary BERT Model"
description_en: "ClinicalBERT - Bio + Discharge Summary BERT Model"
icon: ""
from_repo: "https://huggingface.co/emilyalsentzer/Bio_Discharge_Summary_BERT"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
    sub_tag: "完形填空"
Example:
Datasets: ""
Pulisher: "emilyalsentzer"
License: "mit"
Language: "English"
Paper:
- title: 'Publicly Available Clinical BERT Embeddings'
url: 'http://arxiv.org/abs/1904.03323v3'
- title: 'BioBERT: a pre-trained biomedical language representation model for biomedical text mining'
url: 'http://arxiv.org/abs/1901.08746v4'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-base-lm-adapt
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-base-lm-adapt| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-base-lm-adapt/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-base-lm-adapt/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-base-lm-adapt/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-base-lm-adapt
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-base-lm-adapt
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-base-lm-adapt| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-base-lm-adapt/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-base-lm-adapt/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-base-lm-adapt/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-base-lm-adapt
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-base-lm-adapt"
description: "Version 1.1 - LM-Adapted"
description_en: "Version 1.1 - LM-Adapted"
icon: ""
from_repo: "https://huggingface.co/google/t5-base-lm-adapt"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'GLU Variants Improve Transformer'
url: 'http://arxiv.org/abs/2002.05202v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-large-lm-adapt
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-large-lm-adapt| | 3.16G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-lm-adapt/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-lm-adapt/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-lm-adapt/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-large-lm-adapt
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-large-lm-adapt
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-large-lm-adapt| | 3.16G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-lm-adapt/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-lm-adapt/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-lm-adapt/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-large-lm-adapt
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-large-lm-adapt"
description: "Version 1.1 - LM-Adapted"
description_en: "Version 1.1 - LM-Adapted"
icon: ""
from_repo: "https://huggingface.co/google/t5-large-lm-adapt"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'GLU Variants Improve Transformer'
url: 'http://arxiv.org/abs/2002.05202v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-large-ssm
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-large-ssm| | 3.12G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-ssm/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-ssm/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-ssm/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-large-ssm
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-large-ssm
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-large-ssm| | 3.12G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-ssm/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-ssm/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-large-ssm/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-large-ssm
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-large-ssm"
description: "Abstract"
description_en: "Abstract"
icon: ""
from_repo: "https://huggingface.co/google/t5-large-ssm"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4,wikipedia"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'REALM: Retrieval-Augmented Language Model Pre-Training'
url: 'http://arxiv.org/abs/2002.08909v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-small-lm-adapt
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-small-lm-adapt| | 419.10MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-small-lm-adapt/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-small-lm-adapt/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-small-lm-adapt/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-small-lm-adapt
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-small-lm-adapt
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-small-lm-adapt| | 419.10MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-small-lm-adapt/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-small-lm-adapt/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-small-lm-adapt/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-small-lm-adapt
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-small-lm-adapt"
description: "Version 1.1 - LM-Adapted"
description_en: "Version 1.1 - LM-Adapted"
icon: ""
from_repo: "https://huggingface.co/google/t5-small-lm-adapt"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'GLU Variants Improve Transformer'
url: 'http://arxiv.org/abs/2002.05202v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-v1_1-base
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-v1_1-base| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-base/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-v1_1-base
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-v1_1-base
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-v1_1-base| | 1.11G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-base/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-v1_1-base
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-v1_1-base"
description: "Version 1.1"
description_en: "Version 1.1"
icon: ""
from_repo: "https://huggingface.co/google/t5-v1_1-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'GLU Variants Improve Transformer'
url: 'http://arxiv.org/abs/2002.05202v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-v1_1-large
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-v1_1-large| | 3.16G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-large/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-v1_1-large
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-v1_1-large
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-v1_1-large| | 3.16G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-large/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-v1_1-large
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-v1_1-large"
description: "Version 1.1"
description_en: "Version 1.1"
icon: ""
from_repo: "https://huggingface.co/google/t5-v1_1-large"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'GLU Variants Improve Transformer'
url: 'http://arxiv.org/abs/2002.05202v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# 模型列表
## google/t5-v1_1-small
| 模型名称 | 模型介绍 | 模型大小 | 模型下载 |
| --- | --- | --- | --- |
|google/t5-v1_1-small| | 419.10MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-small/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-small/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-small/tokenizer_config.json) |
也可以通过`paddlenlp` cli 工具来下载对应的模型权重,使用步骤如下所示:
* 安装paddlenlp
```shell
pip install --upgrade paddlenlp
```
* 下载命令行
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-v1_1-small
```
有任何下载的问题都可以到[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)中发Issue提问。
# model list
## google/t5-v1_1-small
| model | description | model_size | download |
| --- | --- | --- | --- |
|google/t5-v1_1-small| | 419.10MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-small/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-small/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/google/t5-v1_1-small/tokenizer_config.json) |
You can also download all of the model files with the `paddlenlp` CLI tool, as follows:
* install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models google/t5-v1_1-small
```
If you run into any problems with the download, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "google/t5-v1_1-small"
description: "Version 1.1"
description_en: "Version 1.1"
icon: ""
from_repo: "https://huggingface.co/google/t5-v1_1-small"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text2Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "c4"
Pulisher: "google"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'GLU Variants Improve Transformer'
url: 'http://arxiv.org/abs/2002.05202v1'
- title: 'Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer'
url: 'http://arxiv.org/abs/1910.10683v3'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## hfl/chinese-bert-wwm-ext
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-bert-wwm-ext| | 454.39MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-bert-wwm-ext
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## hfl/chinese-bert-wwm-ext
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-bert-wwm-ext| | 454.39MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm-ext/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-bert-wwm-ext
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
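After the download command finishes, the weights live under the cache directory. The sketch below checks that every expected file arrived; the `<cache-dir>/<model-name>/<file>` layout is an assumption inferred from the `--cache-dir` flag, so verify it against your local install:

```python
import os

cache_dir = "./pretrained_models"
model_name = "hfl/chinese-bert-wwm-ext"
# Expected files, mirroring the download column of the table above.
expected = ["model_config.json", "model_state.pdparams",
            "tokenizer_config.json", "vocab.txt"]

# Assumed layout: <cache_dir>/<model_name>/<file>.
local_dir = os.path.join(cache_dir, model_name)
missing = [f for f in expected
           if not os.path.exists(os.path.join(local_dir, f))]
print("missing files:", missing)
```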
Model_Info:
name: "hfl/chinese-bert-wwm-ext"
description: "Chinese BERT with Whole Word Masking"
description_en: "Chinese BERT with Whole Word Masking"
icon: ""
from_repo: "https://huggingface.co/hfl/chinese-bert-wwm-ext"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "hfl"
License: "apache-2.0"
Language: "Chinese"
Paper:
- title: 'Pre-Training with Whole Word Masking for Chinese BERT'
url: 'http://arxiv.org/abs/1906.08101v3'
- title: 'Revisiting Pre-Trained Models for Chinese Natural Language Processing'
url: 'http://arxiv.org/abs/2004.13922v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## hfl/chinese-bert-wwm
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-bert-wwm| | 454.39MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-bert-wwm
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## hfl/chinese-bert-wwm
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-bert-wwm| | 454.39MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-bert-wwm/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-bert-wwm
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "hfl/chinese-bert-wwm"
description: "Chinese BERT with Whole Word Masking"
description_en: "Chinese BERT with Whole Word Masking"
icon: ""
from_repo: "https://huggingface.co/hfl/chinese-bert-wwm"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "hfl"
License: "apache-2.0"
Language: "Chinese"
Paper:
- title: 'Pre-Training with Whole Word Masking for Chinese BERT'
url: 'http://arxiv.org/abs/1906.08101v3'
- title: 'Revisiting Pre-Trained Models for Chinese Natural Language Processing'
url: 'http://arxiv.org/abs/2004.13922v2'
IfTraining: 0
IfOnlineDemo: 0
#!/usr/bin/env python
# coding: utf-8
# ## Chinese BERT with Whole Word Masking
# For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
#
# **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
# Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
#
# This repository is developed based on: https://github.com/google-research/bert
#
# You may also be interested in:
# - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
# - Chinese MacBERT: https://github.com/ymcui/MacBERT
# - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
# - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
# - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
#
# More resources by HFL: https://github.com/ymcui/HFL-Anthology
#
# ## How to Use
# In[ ]:
get_ipython().system('pip install --upgrade paddlenlp')
# In[ ]:
import paddle
from paddlenlp.transformers import AutoModel

# Load the pre-trained Chinese BERT-wwm weights.
model = AutoModel.from_pretrained("hfl/chinese-bert-wwm")
# Run a forward pass on a dummy batch of 20 random token ids.
input_ids = paddle.randint(100, 200, shape=[1, 20])
print(model(input_ids))
#
# ## Citation
# If you find the technical report or resource is useful, please cite the following technical report in your paper.
# - Primary: https://arxiv.org/abs/2004.13922
# @inproceedings{cui-etal-2020-revisiting,
# title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
# author = "Cui, Yiming and
# Che, Wanxiang and
# Liu, Ting and
# Qin, Bing and
# Wang, Shijin and
# Hu, Guoping",
# booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
# month = nov,
# year = "2020",
# address = "Online",
# publisher = "Association for Computational Linguistics",
# url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
# pages = "657--668",
# }
#
# - Secondary: https://arxiv.org/abs/1906.08101
#
# @article{chinese-bert-wwm,
# title={Pre-Training with Whole Word Masking for Chinese BERT},
# author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
# journal={arXiv preprint arXiv:1906.08101},
# year={2019}
# }
#
# > Model source: [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm)
# Model List
## hfl/chinese-roberta-wwm-ext-large
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-roberta-wwm-ext-large| | 1.30G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-roberta-wwm-ext-large
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## hfl/chinese-roberta-wwm-ext-large
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-roberta-wwm-ext-large| | 1.30G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext-large/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-roberta-wwm-ext-large
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "hfl/chinese-roberta-wwm-ext-large"
description: "Please use 'Bert' related functions to load this model!"
description_en: "Please use 'Bert' related functions to load this model!"
icon: ""
from_repo: "https://huggingface.co/hfl/chinese-roberta-wwm-ext-large"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "hfl"
License: "apache-2.0"
Language: "Chinese"
Paper:
- title: 'Pre-Training with Whole Word Masking for Chinese BERT'
url: 'http://arxiv.org/abs/1906.08101v3'
- title: 'Revisiting Pre-Trained Models for Chinese Natural Language Processing'
url: 'http://arxiv.org/abs/2004.13922v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## hfl/chinese-roberta-wwm-ext
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-roberta-wwm-ext| | 454.39MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-roberta-wwm-ext
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## hfl/chinese-roberta-wwm-ext
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/chinese-roberta-wwm-ext| | 454.39MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/chinese-roberta-wwm-ext/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/chinese-roberta-wwm-ext
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "hfl/chinese-roberta-wwm-ext"
description: "Please use 'Bert' related functions to load this model!"
description_en: "Please use 'Bert' related functions to load this model!"
icon: ""
from_repo: "https://huggingface.co/hfl/chinese-roberta-wwm-ext"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "hfl"
License: "apache-2.0"
Language: "Chinese"
Paper:
- title: 'Pre-Training with Whole Word Masking for Chinese BERT'
url: 'http://arxiv.org/abs/1906.08101v3'
- title: 'Revisiting Pre-Trained Models for Chinese Natural Language Processing'
url: 'http://arxiv.org/abs/2004.13922v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## hfl/rbt3
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/rbt3| | 211.03MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/rbt3
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## hfl/rbt3
| model | description | model_size | download |
| --- | --- | --- | --- |
|hfl/rbt3| | 211.03MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/hfl/rbt3/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models hfl/rbt3
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "hfl/rbt3"
description: "This is a re-trained 3-layer RoBERTa-wwm-ext model."
description_en: "This is a re-trained 3-layer RoBERTa-wwm-ext model."
icon: ""
from_repo: "https://huggingface.co/hfl/rbt3"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "hfl"
License: "apache-2.0"
Language: "Chinese"
Paper:
- title: 'Pre-Training with Whole Word Masking for Chinese BERT'
url: 'http://arxiv.org/abs/1906.08101v3'
- title: 'Revisiting Pre-Trained Models for Chinese Natural Language Processing'
url: 'http://arxiv.org/abs/2004.13922v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-base-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-cased| | 500.52MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-cased
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-base-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-cased| | 500.52MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-cased/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-base-cased"
description: "BERT base model (cased)"
description_en: "BERT base model (cased)"
icon: ""
from_repo: "https://huggingface.co/bert-base-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-base-german-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-german-cased| | 506.40MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-german-cased
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-base-german-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-german-cased| | 506.40MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-german-cased/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-german-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-base-german-cased"
description: "German BERT"
description_en: "German BERT"
icon: ""
from_repo: "https://huggingface.co/bert-base-german-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: ""
Pulisher: "huggingface"
License: "mit"
Language: "German"
Paper:
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-base-multilingual-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-multilingual-cased| | 1.01G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-multilingual-cased
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-base-multilingual-cased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-multilingual-cased| | 1.01G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-cased/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-multilingual-cased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-base-multilingual-cased"
description: "BERT multilingual base model (cased)"
description_en: "BERT multilingual base model (cased)"
icon: ""
from_repo: "https://huggingface.co/bert-base-multilingual-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: ""
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-base-multilingual-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-multilingual-uncased| | 951.30MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-multilingual-uncased
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-base-multilingual-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-multilingual-uncased| | 951.30MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-multilingual-uncased/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-multilingual-uncased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-base-multilingual-uncased"
description: "BERT multilingual base model (uncased)"
description_en: "BERT multilingual base model (uncased)"
icon: ""
from_repo: "https://huggingface.co/bert-base-multilingual-uncased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: ""
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-base-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-uncased| | 509.46MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-uncased
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-base-uncased
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-base-uncased| | 509.46MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-base-uncased/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-base-uncased
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
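The model_size column mixes two units (for example 509.46MB here and 1.30G for the large variants). When comparing or summing sizes programmatically, a small converter helps; `size_to_bytes` is a hypothetical helper written for this page, not part of PaddleNLP:

```python
# Binary unit factors for the size strings used in these tables.
UNITS = {"MB": 1024 ** 2, "G": 1024 ** 3}

def size_to_bytes(text):
    """Convert a table size string such as '509.46MB' or '1.30G' to bytes."""
    for suffix, factor in UNITS.items():
        if text.endswith(suffix):
            return int(float(text[: -len(suffix)]) * factor)
    raise ValueError(f"unknown size unit in {text!r}")

print(size_to_bytes("509.46MB"))
print(size_to_bytes("1.30G"))
```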
Model_Info:
name: "bert-base-uncased"
description: "BERT base model (uncased)"
description_en: "BERT base model (uncased)"
icon: ""
from_repo: "https://huggingface.co/bert-base-uncased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-large-cased-whole-word-masking-finetuned-squad
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-large-cased-whole-word-masking-finetuned-squad| | 1.24G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-cased-whole-word-masking-finetuned-squad
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-large-cased-whole-word-masking-finetuned-squad
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-large-cased-whole-word-masking-finetuned-squad| | 1.24G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking-finetuned-squad/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-cased-whole-word-masking-finetuned-squad
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-large-cased-whole-word-masking-finetuned-squad"
description: "BERT large model (cased) whole word masking finetuned on SQuAD"
description_en: "BERT large model (cased) whole word masking finetuned on SQuAD"
icon: ""
from_repo: "https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Question Answering"
sub_tag: "回答问题"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-large-cased-whole-word-masking
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-large-cased-whole-word-masking| | 1.36G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool, as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-cased-whole-word-masking
```
If you have any problems downloading, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-large-cased-whole-word-masking
| model | description | model_size | download |
| --- | --- | --- | --- |
|bert-large-cased-whole-word-masking| | 1.36G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased-whole-word-masking/vocab.txt) |
Alternatively, you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-cased-whole-word-masking
```
If you have any problems, you can open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-large-cased-whole-word-masking"
description: "BERT large model (cased) whole word masking"
description_en: "BERT large model (cased) whole word masking"
icon: ""
from_repo: "https://huggingface.co/bert-large-cased-whole-word-masking"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
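The whole-word-masking variant masks every WordPiece sub-token of a selected word at once, rather than masking sub-tokens independently. Below is a minimal pure-Python sketch of that grouping step (illustrative only, not PaddleNLP's or BERT's actual pretraining code; the `##` prefix marks WordPiece continuation pieces):

```python
import random

def group_whole_words(tokens):
    """Group WordPiece token indices into whole-word spans.

    A new group starts at every token that does not carry the
    '##' continuation prefix.
    """
    groups = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and groups:
            groups[-1].append(i)  # continuation piece: extend current word
        else:
            groups.append([i])    # new word starts here
    return groups

def whole_word_mask(tokens, rng=None):
    """Mask every piece of one randomly chosen whole word."""
    rng = rng or random.Random(0)
    chosen = rng.choice(group_whole_words(tokens))
    return ["[MASK]" if i in chosen else tok for i, tok in enumerate(tokens)]
```

For `["the", "un", "##aff", "##able", "man"]`, the word `unaffable` is treated as one unit, so either all three of its pieces are masked together or none of them.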
# Model List
## bert-large-cased
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-cased| | 1024.00MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-cased
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-large-cased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-cased| | 1024.00MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-cased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-cased
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-large-cased"
description: "BERT large model (cased)"
description_en: "BERT large model (cased)"
icon: ""
from_repo: "https://huggingface.co/bert-large-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-large-uncased-whole-word-masking-finetuned-squad
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-uncased-whole-word-masking-finetuned-squad| | 1.25G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-uncased-whole-word-masking-finetuned-squad
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-large-uncased-whole-word-masking-finetuned-squad
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-uncased-whole-word-masking-finetuned-squad| | 1.25G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking-finetuned-squad/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-uncased-whole-word-masking-finetuned-squad
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-large-uncased-whole-word-masking-finetuned-squad"
description: "BERT large model (uncased) whole word masking finetuned on SQuAD"
description_en: "BERT large model (uncased) whole word masking finetuned on SQuAD"
icon: ""
from_repo: "https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Question Answering"
sub_tag: "回答问题"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
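An extractive QA model such as this one outputs a start logit and an end logit for every context token, and the answer is the span maximizing their sum subject to start ≤ end and a maximum span length. A small sketch of that decoding step (illustrative only, assuming plain Python lists of logits rather than real model outputs):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Return (start, end) maximizing start_logits[s] + end_logits[e]
    subject to s <= e < s + max_len."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        # only consider end positions at or after s, within max_len
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best
```

Production decoders additionally exclude spans that fall inside the question or cross special tokens, but the core argmax over valid (start, end) pairs is the same.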
# Model List
## bert-large-uncased-whole-word-masking
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-uncased-whole-word-masking| | 1.37G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-uncased-whole-word-masking
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-large-uncased-whole-word-masking
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-uncased-whole-word-masking| | 1.37G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased-whole-word-masking/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-uncased-whole-word-masking
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-large-uncased-whole-word-masking"
description: "BERT large model (uncased) whole word masking"
description_en: "BERT large model (uncased) whole word masking"
icon: ""
from_repo: "https://huggingface.co/bert-large-uncased-whole-word-masking"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## bert-large-uncased
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-uncased| | 1.37G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-uncased
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## bert-large-uncased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|bert-large-uncased| | 1.37G | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/bert-large-uncased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models bert-large-uncased
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "bert-large-uncased"
description: "BERT large model (uncased)"
description_en: "BERT large model (uncased)"
icon: ""
from_repo: "https://huggingface.co/bert-large-uncased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "bookcorpus,wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'
url: 'http://arxiv.org/abs/1810.04805v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## distilbert-base-multilingual-cased
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|distilbert-base-multilingual-cased| | 866.93MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models distilbert-base-multilingual-cased
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## distilbert-base-multilingual-cased
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|distilbert-base-multilingual-cased| | 866.93MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/distilbert-base-multilingual-cased/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models distilbert-base-multilingual-cased
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "distilbert-base-multilingual-cased"
description: "Model Card for DistilBERT base multilingual (cased)"
description_en: "Model Card for DistilBERT base multilingual (cased)"
icon: ""
from_repo: "https://huggingface.co/distilbert-base-multilingual-cased"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "wikipedia"
Pulisher: "huggingface"
License: "apache-2.0"
Language: ""
Paper:
- title: 'DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter'
url: 'http://arxiv.org/abs/1910.01108v4'
- title: 'Quantifying the Carbon Emissions of Machine Learning'
url: 'http://arxiv.org/abs/1910.09700v2'
IfTraining: 0
IfOnlineDemo: 0
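DistilBERT is trained with knowledge distillation: the student model is pushed to match the teacher's temperature-softened output distribution. A pure-Python sketch of the soft-target loss from Hinton et al. (2015), cited above (illustrative only; real training combines this with the usual masked-language-modeling loss and operates on tensors):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T**2 so gradients keep a consistent magnitude."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree exactly and grows as their softened distributions diverge; the temperature controls how much probability mass the teacher spreads over non-argmax classes.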
# Model List
## distilgpt2
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|distilgpt2| | 459.72MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models distilgpt2
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## distilgpt2
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|distilgpt2| | 459.72MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/distilgpt2/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models distilgpt2
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "distilgpt2"
description: "DistilGPT2"
description_en: "DistilGPT2"
icon: ""
from_repo: "https://huggingface.co/distilgpt2"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: "openwebtext"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter'
url: 'http://arxiv.org/abs/1910.01108v4'
- title: 'Can Model Compression Improve NLP Fairness'
url: 'http://arxiv.org/abs/2201.08542v1'
- title: 'Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal'
url: 'http://arxiv.org/abs/2203.12574v1'
- title: 'Quantifying the Carbon Emissions of Machine Learning'
url: 'http://arxiv.org/abs/1910.09700v2'
- title: 'Distilling the Knowledge in a Neural Network'
url: 'http://arxiv.org/abs/1503.02531v1'
IfTraining: 0
IfOnlineDemo: 0
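Causal language models like DistilGPT2 generate text by repeatedly scoring candidate next tokens and appending one. The simplest decoding strategy, greedy search, can be sketched with a toy scoring table standing in for the model (the `next_token_scores` dict here is a hypothetical stand-in, not a real model API):

```python
def greedy_generate(next_token_scores, prompt, max_new_tokens=5, eos="<eos>"):
    """Greedy decoding: always append the highest-scoring next token.

    next_token_scores maps the previous token to a {candidate: score}
    dict -- a toy bigram stand-in for a language model's output head.
    """
    out = list(prompt)
    for _ in range(max_new_tokens):
        scores = next_token_scores.get(out[-1])
        if not scores:
            break  # no continuation known for this token
        nxt = max(scores, key=scores.get)
        out.append(nxt)
        if nxt == eos:
            break  # end-of-sequence token terminates generation
    return out
```

Real decoders condition on the full prefix rather than just the last token, and often replace the argmax with sampling or beam search, but the generate-append loop is the same.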
# Model List
## distilroberta-base
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|distilroberta-base| | 462.98MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models distilroberta-base
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## distilroberta-base
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|distilroberta-base| | 462.98MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/distilroberta-base/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models distilroberta-base
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "distilroberta-base"
description: "Model Card for DistilRoBERTa base"
description_en: "Model Card for DistilRoBERTa base"
icon: ""
from_repo: "https://huggingface.co/distilroberta-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Fill-Mask"
sub_tag: "槽位填充"
Example:
Datasets: "openwebtext"
Pulisher: "huggingface"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter'
url: 'http://arxiv.org/abs/1910.01108v4'
- title: 'Quantifying the Carbon Emissions of Machine Learning'
url: 'http://arxiv.org/abs/1910.09700v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## gpt2-large
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|gpt2-large| | 2.88G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models gpt2-large
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## gpt2-large
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|gpt2-large| | 2.88G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-large/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models gpt2-large
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "gpt2-large"
description: "GPT-2 Large"
description_en: "GPT-2 Large"
icon: ""
from_repo: "https://huggingface.co/gpt2-large"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "huggingface"
License: "mit"
Language: "English"
Paper:
- title: 'Quantifying the Carbon Emissions of Machine Learning'
url: 'http://arxiv.org/abs/1910.09700v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## gpt2-medium
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|gpt2-medium| | 1.32G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models gpt2-medium
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## gpt2-medium
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|gpt2-medium| | 1.32G | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2-medium/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models gpt2-medium
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "gpt2-medium"
description: "GPT-2 Medium"
description_en: "GPT-2 Medium"
icon: ""
from_repo: "https://huggingface.co/gpt2-medium"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "huggingface"
License: "mit"
Language: "English"
Paper:
- title: 'Quantifying the Carbon Emissions of Machine Learning'
url: 'http://arxiv.org/abs/1910.09700v2'
IfTraining: 0
IfOnlineDemo: 0
# Model List
## gpt2
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|gpt2| | 474.71MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/gpt2/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/gpt2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2/vocab.json) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models gpt2
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## gpt2
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|gpt2| | 474.71MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/gpt2/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/gpt2/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/gpt2/vocab.json) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models gpt2
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "gpt2"
description: "GPT-2"
description_en: "GPT-2"
icon: ""
from_repo: "https://huggingface.co/gpt2"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Text Generation"
sub_tag: "文本生成"
Example:
Datasets: ""
Pulisher: "huggingface"
License: "mit"
Language: "English"
Paper:
IfTraining: 0
IfOnlineDemo: 0
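The `merges.txt` file listed among the downloads holds GPT-2's ranked byte-pair-encoding merge rules: tokenization repeatedly applies the highest-priority merge present in the word. A simplified sketch of that loop (the real tokenizer operates on bytes and merges every occurrence of a pair per step; this version merges one occurrence at a time for clarity):

```python
def bpe_merge(word, merges):
    """Apply ranked BPE merges to a word, one pair at a time.

    merges is an ordered list of (left, right) pairs; earlier entries
    have higher priority, mirroring the line order of merges.txt.
    """
    ranks = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word)
    while len(symbols) > 1:
        pairs = [(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)]
        # pick the adjacent pair with the best (lowest) merge rank
        best = min(pairs, key=lambda p: ranks.get(p, float("inf")))
        if best not in ranks:
            break  # no applicable merge rule remains
        i = pairs.index(best)
        symbols = symbols[:i] + [best[0] + best[1]] + symbols[i + 2:]
    return symbols
```

With the hypothetical rules `("l","o")`, `("lo","w")`, `("e","r")`, the word "lower" reduces to the two tokens `low` and `er`.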
# Model List
## indobenchmark/indobert-base-p1
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|indobenchmark/indobert-base-p1| | 474.73MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models indobenchmark/indobert-base-p1
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## indobenchmark/indobert-base-p1
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|indobenchmark/indobert-base-p1| | 474.73MB | [model_config.json](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/tokenizer_config.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/indobenchmark/indobert-base-p1/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models indobenchmark/indobert-base-p1
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "indobenchmark/indobert-base-p1"
description: "IndoBERT Base Model (phase1 - uncased)"
description_en: "IndoBERT Base Model (phase1 - uncased)"
icon: ""
from_repo: "https://huggingface.co/indobenchmark/indobert-base-p1"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Feature Extraction"
sub_tag: "特征抽取"
Example:
Datasets: "indonlu"
Pulisher: "indobenchmark"
License: "mit"
Language: "Indonesian"
Paper:
- title: 'IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding'
url: 'http://arxiv.org/abs/2009.05387v3'
IfTraining: 0
IfOnlineDemo: 0
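For feature extraction, the per-token hidden states produced by an encoder like IndoBERT are usually reduced to a single fixed-size sentence vector, most commonly by mean pooling over the non-padding positions. A pure-Python sketch of that pooling step (illustrative only; real code would operate on paddle tensors):

```python
def mean_pool(hidden_states, attention_mask):
    """Average token vectors, skipping padding positions (mask == 0)."""
    dim = len(hidden_states[0])
    total = [0.0] * dim
    count = 0
    for vec, keep in zip(hidden_states, attention_mask):
        if keep:  # only real tokens contribute to the average
            count += 1
            for j, v in enumerate(vec):
                total[j] += v
    return [t / count for t in total]
```

Masking the padding positions matters: averaging over pad vectors would drag every sentence embedding toward the same point regardless of content.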
# Model List
## johngiorgi/declutr-base
| Model Name | Description | Model Size | Download |
| --- | --- | --- | --- |
|johngiorgi/declutr-base| | 625.22MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/vocab.txt) |
You can also download the model weights with the `paddlenlp` CLI tool as follows:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model from the command line
```shell
paddlenlp download --cache-dir ./pretrained_models johngiorgi/declutr-base
```
If you run into any download problems, please open an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP).
# Model List
## johngiorgi/declutr-base
| Model | Description | Model Size | Download |
| --- | --- | --- | --- |
|johngiorgi/declutr-base| | 625.22MB | [merges.txt](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/merges.txt)<br>[model_config.json](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/model_config.json)<br>[model_state.pdparams](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/model_state.pdparams)<br>[tokenizer_config.json](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/tokenizer_config.json)<br>[vocab.json](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/vocab.json)<br>[vocab.txt](https://bj.bcebos.com/paddlenlp/models/community/johngiorgi/declutr-base/vocab.txt) |
Or you can download all of the model files with the following steps:
* Install paddlenlp
```shell
pip install --upgrade paddlenlp
```
* Download the model with the CLI tool
```shell
paddlenlp download --cache-dir ./pretrained_models johngiorgi/declutr-base
```
If you have any problems, you can post an issue on [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) to get support.
Model_Info:
name: "johngiorgi/declutr-base"
description: "DeCLUTR-base"
description_en: "DeCLUTR-base"
icon: ""
from_repo: "https://huggingface.co/johngiorgi/declutr-base"
Task:
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Sentence Similarity"
sub_tag: "句子相似度"
- tag_en: "Natural Language Processing"
tag: "自然语言处理"
sub_tag_en: "Feature Extraction"
sub_tag: "特征抽取"
Example:
Datasets: "openwebtext"
Pulisher: "johngiorgi"
License: "apache-2.0"
Language: "English"
Paper:
- title: 'DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations'
url: 'http://arxiv.org/abs/2006.03659v4'
IfTraining: 0
IfOnlineDemo: 0
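For the sentence-similarity task tagged above, DeCLUTR-style sentence embeddings are typically compared with cosine similarity: 1.0 for vectors pointing the same way, 0.0 for orthogonal ones. A self-contained sketch (embeddings here are plain lists of floats standing in for real model outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Because cosine similarity ignores vector magnitude, it compares only the direction of the embeddings, which is why contrastively trained encoders pair naturally with it.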