{ "cells": [ { "cell_type": "markdown", "id": "41ca2df0", "metadata": {}, "source": [ "# 🤗 + 📚 dbmdz BERT and ELECTRA models\n" ] }, { "cell_type": "markdown", "id": "58e60a32", "metadata": {}, "source": [ "In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\n", "Library open sources Italian BERT and ELECTRA models 🎉\n" ] }, { "cell_type": "markdown", "id": "c7b5f379", "metadata": {}, "source": [ "# Italian BERT\n" ] }, { "cell_type": "markdown", "id": "5bf65013", "metadata": {}, "source": [ "The source data for the Italian BERT model consists of a recent Wikipedia dump and\n", "various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final\n", "training corpus has a size of 13GB and 2,050,057,573 tokens.\n" ] }, { "cell_type": "markdown", "id": "a3aadc8d", "metadata": {}, "source": [ "For sentence splitting, we use NLTK (faster compared to spacy).\n", "Our cased and uncased models are training with an initial sequence length of 512\n", "subwords for ~2-3M steps.\n" ] }, { "cell_type": "markdown", "id": "4d670485", "metadata": {}, "source": [ "For the XXL Italian models, we use the same training data from OPUS and extend\n", "it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).\n", "Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.\n" ] }, { "cell_type": "markdown", "id": "a366dc7d", "metadata": {}, "source": [ "Note: Unfortunately, a wrong vocab size was used when training the XXL models.\n", "This explains the mismatch of the \"real\" vocab size of 31102, compared to the\n", "vocab size specified in `config.json`. However, the model is working and all\n", "evaluations were done under those circumstances.\n", "See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.\n" ] }, { "cell_type": "markdown", "id": "eaee3adf", "metadata": {}, "source": [ "The Italian ELECTRA model was trained on the \"XXL\" corpus for 1M steps in total using a batch\n", "size of 128. We pretty much following the ELECTRA training procedure as used for\n", "[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).\n" ] }, { "cell_type": "markdown", "id": "c5151f8a", "metadata": {}, "source": [ "## Usage\n" ] }, { "cell_type": "code", "execution_count": null, "id": "f9e48e19", "metadata": {}, "outputs": [], "source": [ "!pip install --upgrade paddlenlp" ] }, { "cell_type": "code", "execution_count": null, "id": "c02e6f47", "metadata": {}, "outputs": [], "source": [ "import paddle\n", "from paddlenlp.transformers import AutoModel\n", "\n", "model = AutoModel.from_pretrained(\"dbmdz/bert-base-italian-xxl-cased\")\n", "input_ids = paddle.randint(100, 200, shape=[1, 20])\n", "print(model(input_ids))" ] }, { "cell_type": "markdown", "id": "13e03e4b", "metadata": {}, "source": [ "# Reference" ] }, { "cell_type": "markdown", "id": "66705724", "metadata": {}, "source": [ "> The model introduction and model weights originate from [https://huggingface.co/dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) and were converted to PaddlePaddle format for ease of use in PaddleNLP.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.13" } }, "nbformat": 4, "nbformat_minor": 5 }