{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5e0d446c",
   "metadata": {},
   "source": [
    "# 🤗 + 📚 dbmdz German BERT models\n",
    "\n",
    "In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\n",
    "Library open sources another German BERT models 🎉\n",
    "\n",
    "# German BERT\n",
    "\n",
    "## Stats\n",
    "\n",
    "In addition to the recently released [German BERT](https://deepset.ai/german-bert)\n",
    "model by [deepset](https://deepset.ai/) we provide another German-language model.\n",
    "\n",
    "The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,\n",
    "Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with\n",
    "a size of 16GB and 2,350,234,427 tokens.\n",
    "\n",
    "For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps\n",
    "(sentence piece model for vocab generation) follow those used for training\n",
    "[SciBERT](https://github.com/allenai/scibert). The model is trained with an initial\n",
    "sequence length of 512 subwords and was performed for 1.5M steps."
   ]
  },
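  {
   "cell_type": "markdown",
   "id": "3c9a7e12",
   "metadata": {},
   "source": [
    "As a minimal sketch of the sentence-splitting step mentioned above, the\n",
    "snippet below segments German text with spaCy. It assumes the\n",
    "`de_core_news_sm` pipeline (installable via\n",
    "`python -m spacy download de_core_news_sm`); the original card does not\n",
    "state which spaCy pipeline was used during preprocessing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b5d20f4e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of the sentence-splitting step, assuming the de_core_news_sm\n",
    "# pipeline; the pipeline actually used for preprocessing is not specified.\n",
    "import spacy\n",
    "\n",
    "nlp = spacy.load(\"de_core_news_sm\")\n",
    "doc = nlp(\"Das ist der erste Satz. Und hier folgt ein zweiter Satz.\")\n",
    "for sent in doc.sents:\n",
    "    print(sent.text)"
   ]
  },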
  {
   "cell_type": "markdown",
   "id": "524680d5",
   "metadata": {},
   "source": [
    "## How to use"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39332440",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install --upgrade paddlenlp"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "19cf118e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "from paddlenlp.transformers import AutoModel\n",
    "\n",
    "model = AutoModel.from_pretrained(\"dbmdz/bert-base-german-uncased\")\n",
    "input_ids = paddle.randint(100, 200, shape=[1, 20])\n",
    "print(model(input_ids))"
   ]
  },
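  {
   "cell_type": "markdown",
   "id": "6a8c4f91",
   "metadata": {},
   "source": [
    "The snippet above feeds random token ids. Below is a sketch of encoding\n",
    "real German text instead, reusing `paddle` and the `model` loaded in the\n",
    "cell above and assuming `AutoTokenizer` resolves the same\n",
    "`dbmdz/bert-base-german-uncased` checkpoint as `AutoModel` does."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e2f70b3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from paddlenlp.transformers import AutoTokenizer\n",
    "\n",
    "# Sketch: encode real German text instead of random ids. Assumes that\n",
    "# AutoTokenizer resolves the same checkpoint as AutoModel above.\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"dbmdz/bert-base-german-uncased\")\n",
    "encoded = tokenizer(\"Heute ist ein schöner Tag!\")\n",
    "input_ids = paddle.to_tensor([encoded[\"input_ids\"]])\n",
    "sequence_output, pooled_output = model(input_ids)\n",
    "print(sequence_output.shape)"
   ]
  },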
  {
   "cell_type": "markdown",
   "id": "fb81d709",
   "metadata": {},
   "source": [
    "# Reference"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "747fd5d3",
   "metadata": {},
   "source": [
    "> The model introduction and model weights originate from [https://huggingface.co/dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased) and were converted to PaddlePaddle format for ease of use in PaddleNLP.\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}