{
"cells": [
{
"cell_type": "markdown",
"id": "1b34fb8a",
"metadata": {},
"source": [
"# DistilGPT2\n",
"\n",
"You can get more details from [GPT2 in PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/gpt/README.md)."
]
},
{
"cell_type": "markdown",
"id": "f3ab8949",
"metadata": {},
"source": [
"DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).\n"
]
},
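{
"cell_type": "markdown",
"id": "3a7e51c0",
"metadata": {},
"source": [
"Since DistilGPT2 is a causal language model, it can be used to generate text. The cell below is a minimal sketch using PaddleNLP's `GPTLMHeadModel` and `GPTTokenizer`; it assumes the converted `distilgpt2` weights are registered for both classes, and the exact `generate` arguments may vary across PaddleNLP versions.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c4b2d1e",
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"from paddlenlp.transformers import GPTLMHeadModel, GPTTokenizer\n",
"\n",
"# Assumes the converted \"distilgpt2\" weights are available for these classes.\n",
"tokenizer = GPTTokenizer.from_pretrained(\"distilgpt2\")\n",
"model = GPTLMHeadModel.from_pretrained(\"distilgpt2\")\n",
"model.eval()\n",
"\n",
"prompt = \"Deep learning is\"\n",
"input_ids = paddle.to_tensor([tokenizer(prompt)[\"input_ids\"]])\n",
"\n",
"# Sample a short continuation; generate() returns the newly generated ids.\n",
"output_ids, _ = model.generate(\n",
"    input_ids, max_length=30, decode_strategy=\"sampling\", top_k=5)\n",
"print(prompt + tokenizer.convert_ids_to_string(output_ids[0].numpy().tolist()))"
]
},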
{
"cell_type": "markdown",
"id": "c6fbc1da",
"metadata": {},
"source": [
"## Model Details\n"
]
},
{
"cell_type": "markdown",
"id": "e2929e2f",
"metadata": {},
"source": [
"- **Developed by:** Hugging Face\n",
"- **Model type:** Transformer-based Language Model\n",
"- **Language:** English\n",
"- **License:** Apache 2.0\n",
"- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.\n",
"- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e226406",
"metadata": {},
"outputs": [],
"source": [
"!pip install --upgrade paddlenlp"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51f32d75",
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"from paddlenlp.transformers import AutoModel\n",
"\n",
"model = AutoModel.from_pretrained(\"distilgpt2\")\n",
"input_ids = paddle.randint(100, 200, shape=[1, 20])\n",
"print(model(input_ids))"
]
},
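{
"cell_type": "markdown",
"id": "5d8f0a37",
"metadata": {},
"source": [
"As a quick sanity check against the 82 million parameter figure in Model Details, the parameters of the model loaded above can be counted. This is a rough check: the exact total depends on which components (for example, tied embeddings or the language-modeling head) are included.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e1c9b42",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Sum the sizes of all parameter tensors in the model loaded above;\n",
"# the total should be on the order of the 82M quoted in Model Details.\n",
"total = sum(int(np.prod(p.shape)) for p in model.parameters())\n",
"print(f\"{total / 1e6:.1f}M parameters\")"
]
},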
{
"cell_type": "markdown",
"id": "adb84dc8",
"metadata": {},
"source": [
"## Citation\n",
"\n",
"```\n",
"@inproceedings{sanh2019distilbert,\n",
"title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},\n",
"author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},\n",
"booktitle={NeurIPS EMC^2 Workshop},\n",
"year={2019}\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "7d2aaec2",
"metadata": {},
"source": [
"## Glossary\n"
]
},
{
"cell_type": "markdown",
"id": "004026dd",
"metadata": {},
"source": [
"-\t**Knowledge Distillation**: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).\n"
]
},
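{
"cell_type": "markdown",
"id": "8b3d6f10",
"metadata": {},
"source": [
"To make the definition above concrete, the cell below sketches the standard distillation loss: the student is trained to match the teacher's temperature-softened output distribution via a KL divergence, as in [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531). This is an illustrative sketch, not the actual DistilGPT2 training code.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f7a4c95",
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"import paddle.nn.functional as F\n",
"\n",
"\n",
"def distillation_loss(student_logits, teacher_logits, temperature=2.0):\n",
"    # Soften both distributions with a temperature, then push the student's\n",
"    # log-probabilities towards the teacher's probabilities with KL divergence.\n",
"    student_log_probs = F.log_softmax(student_logits / temperature, axis=-1)\n",
"    teacher_probs = F.softmax(teacher_logits / temperature, axis=-1)\n",
"    # The T^2 factor keeps gradient magnitudes comparable to the hard-label loss.\n",
"    return F.kl_div(student_log_probs, teacher_probs, reduction=\"batchmean\") * temperature ** 2\n",
"\n",
"\n",
"# Toy example with random logits over a vocabulary of 100 tokens.\n",
"student_logits = paddle.randn([4, 100])\n",
"teacher_logits = paddle.randn([4, 100])\n",
"print(distillation_loss(student_logits, teacher_logits))"
]
},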
{
"cell_type": "markdown",
"id": "f8d12799",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"\n",
"> The model introduction and model weights originate from [https://huggingface.co/distilgpt2](https://huggingface.co/distilgpt2) and were converted to PaddlePaddle format for ease of use in PaddleNLP.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}