{ "cells": [ { "cell_type": "markdown", "id": "dea2fc9e", "metadata": {}, "source": [ "# 🤗 + 📚 dbmdz BERT and ELECTRA models\n" ] }, { "cell_type": "markdown", "id": "00744cbd", "metadata": {}, "source": [ "In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\n", "Library open sources Italian BERT and ELECTRA models 🎉\n" ] }, { "cell_type": "markdown", "id": "d7106b74", "metadata": {}, "source": [ "# Italian BERT\n" ] }, { "cell_type": "markdown", "id": "7ee0fd67", "metadata": {}, "source": [ "The source data for the Italian BERT model consists of a recent Wikipedia dump and\n", "various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final\n", "training corpus has a size of 13GB and 2,050,057,573 tokens.\n" ] }, { "cell_type": "markdown", "id": "a3961910", "metadata": {}, "source": [ "For sentence splitting, we use NLTK (faster compared to spacy).\n", "Our cased and uncased models are training with an initial sequence length of 512\n", "subwords for ~2-3M steps.\n" ] }, { "cell_type": "markdown", "id": "480e4fea", "metadata": {}, "source": [ "For the XXL Italian models, we use the same training data from OPUS and extend\n", "it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).\n", "Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.\n" ] }, { "cell_type": "markdown", "id": "d710804e", "metadata": {}, "source": [ "Note: Unfortunately, a wrong vocab size was used when training the XXL models.\n", "This explains the mismatch of the \"real\" vocab size of 31102, compared to the\n", "vocab size specified in `config.json`. However, the model is working and all\n", "evaluations were done under those circumstances.\n", "See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.\n" ] }, { "cell_type": "markdown", "id": "2d9c79e5", "metadata": {}, "source": [ "The Italian ELECTRA model was trained on the \"XXL\" corpus for 1M steps in total using a batch\n", "size of 128. We pretty much following the ELECTRA training procedure as used for\n", "[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).\n" ] }, { "cell_type": "markdown", "id": "3ee71cee", "metadata": {}, "source": [ "## Usage\n" ] }, { "cell_type": "code", "execution_count": null, "id": "9ffe9a93", "metadata": {}, "outputs": [], "source": [ "!pip install --upgrade paddlenlp" ] }, { "cell_type": "code", "execution_count": null, "id": "82d327d4", "metadata": {}, "outputs": [], "source": [ "import paddle\n", "from paddlenlp.transformers import AutoModel\n", "\n", "model = AutoModel.from_pretrained(\"dbmdz/bert-base-italian-uncased\")\n", "input_ids = paddle.randint(100, 200, shape=[1, 20])\n", "print(model(input_ids))" ] }, { "cell_type": "markdown", "id": "56d92161", "metadata": {}, "source": [ "# Reference" ] }, { "cell_type": "markdown", "id": "ad146f63", "metadata": {}, "source": [ "\n", "> 此模型介绍及权重来源于[https://huggingface.co/dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased),并转换为飞桨模型格式。\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.13" } }, "nbformat": 4, "nbformat_minor": 5 }