{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a5e1e8bd",
   "metadata": {},
   "source": [
    "## Chinese BERT with Whole Word Masking\n",
    "\n",
    "### Please use 'Bert' related functions to load this model!\n",
    "\n",
    "For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.\n",
    "\n",
    "**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**\n",
    "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n",
    "\n",
    "This repository is developed based on:https://github.com/google-research/bert\n",
    "\n",
    "You may also interested in,\n",
    "- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm\n",
    "- Chinese MacBERT: https://github.com/ymcui/MacBERT\n",
    "- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA\n",
    "- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet\n",
    "- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer\n",
    "\n",
    "More resources by HFL: https://github.com/ymcui/HFL-Anthology\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be498a8f",
   "metadata": {},
   "source": [
    "## How to Use"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0199d11d",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install --upgrade paddlenlp"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b71b0698",
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "from paddlenlp.transformers import AutoModel\n",
    "\n",
    "model = AutoModel.from_pretrained(\"hfl/chinese-roberta-wwm-ext-large\")\n",
    "input_ids = paddle.randint(100, 200, shape=[1, 20])\n",
    "print(model(input_ids))"
   ]
  },
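  {
   "cell_type": "markdown",
   "id": "f2c4a7d1",
   "metadata": {},
   "source": [
    "The snippet below is a minimal sketch of a more realistic forward pass. It assumes the tokenizer for this checkpoint can also be loaded through `AutoTokenizer` from the same `hfl/chinese-roberta-wwm-ext-large` name; it encodes an actual Chinese sentence instead of random ids and prints the shapes of the sequence output and pooled output returned by the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a8b3c5e2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "from paddlenlp.transformers import AutoModel, AutoTokenizer\n",
    "\n",
    "# Assumption: the tokenizer is loaded from the same checkpoint name as the model.\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"hfl/chinese-roberta-wwm-ext-large\")\n",
    "model = AutoModel.from_pretrained(\"hfl/chinese-roberta-wwm-ext-large\")\n",
    "\n",
    "# Encode a real Chinese sentence instead of random token ids.\n",
    "encoded = tokenizer(\"欢迎使用飞桨自然语言处理工具包！\")\n",
    "input_ids = paddle.to_tensor([encoded[\"input_ids\"]])\n",
    "token_type_ids = paddle.to_tensor([encoded[\"token_type_ids\"]])\n",
    "\n",
    "# The model returns the per-token sequence output and the pooled [CLS] output.\n",
    "sequence_output, pooled_output = model(input_ids, token_type_ids=token_type_ids)\n",
    "print(sequence_output.shape, pooled_output.shape)"
   ]
  },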
  {
   "cell_type": "markdown",
   "id": "5d6bd99f",
   "metadata": {},
   "source": [
    "\n",
    "## Citation\n",
    "If you find the technical report or resource is useful, please cite the following technical report in your paper.\n",
    "- Primary: https://arxiv.org/abs/2004.13922"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9429c396",
   "metadata": {},
   "source": [
    "```\n",
    "@inproceedings{cui-etal-2020-revisiting,\n",
    "title = \"Revisiting Pre-Trained Models for {C}hinese Natural Language Processing\",\n",
    "author = \"Cui, Yiming  and\n",
    "Che, Wanxiang  and\n",
    "Liu, Ting  and\n",
    "Qin, Bing  and\n",
    "Wang, Shijin  and\n",
    "Hu, Guoping\",\n",
    "booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings\",\n",
    "month = nov,\n",
    "year = \"2020\",\n",
    "address = \"Online\",\n",
    "publisher = \"Association for Computational Linguistics\",\n",
    "url = \"https://www.aclweb.org/anthology/2020.findings-emnlp.58\",\n",
    "pages = \"657--668\",\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9784d9b7",
   "metadata": {},
   "source": [
    "- Secondary: https://arxiv.org/abs/1906.08101\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eb3e56a1",
   "metadata": {},
   "source": [
    "```\n",
    "@article{chinese-bert-wwm,\n",
    "title={Pre-Training with Whole Word Masking for Chinese BERT},\n",
    "author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},\n",
    "journal={arXiv preprint arXiv:1906.08101},\n",
    "year={2019}\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3593ecc9",
   "metadata": {},
   "source": [
    "> 此模型介绍及权重来源于[https://huggingface.co/hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large),并转换为飞桨模型格式。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.13"
  },
  "vscode": {
   "interpreter": {
    "hash": "606ea184b8fed3419d714b545dc1784fad6c99d0cc940b6b9d787dccf225faa5"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}