{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# MNIST数据集使用LeNet进行图像分类\n", "本示例教程演示如何在MNIST数据集上用LeNet进行图像分类。\n", "手写数字的MNIST数据集,包含60,000个用于训练的示例和10,000个用于测试的示例。这些数字已经过尺寸标准化并位于图像中心,图像是固定大小(28x28像素),其值为0到1。该数据集的官方地址为:http://yann.lecun.com/exdb/mnist/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 环境\n", "本教程基于paddle-develop编写,如果您的环境不是本版本,请先安装paddle-develop版本。" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.0.0\n" ] } ], "source": [ "import paddle\n", "print(paddle.__version__)\n", "paddle.disable_static()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 加载数据集\n", "我们使用飞桨自带的paddle.dataset完成mnist数据集的加载。" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "download training data and load training data\n", "load finished\n" ] } ], "source": [ "print('download training data and load training data')\n", "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n", "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n", "print('load finished')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "取训练集中的一条数据看一下。" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "train_data0 label is: [5]\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAI4AAACOCAYAAADn/TAIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAIY0lEQVR4nO3dXWhUZxoH8P/jaPxav7KREtNgiooQFvwg1l1cNOr6sQUN3ixR0VUK9cKPXTBYs17ohReLwl5ovCmuZMU1y+IaWpdC0GIuxCJJMLhJa6oWtSl+FVEXvdDK24s5nc5zapKTZ86cOTPz/4Hk/M8xc17w8Z13zpl5RpxzIBquEbkeAOUnFg6ZsHDIhIVDJiwcMmHhkElGhSMiq0WkT0RuisjesAZF8SfW6zgikgDwFYAVAPoBdABY75z7IrzhUVyNzOB33wVw0zn3NQCIyL8A1AEYsHDKyspcVVVVBqekqHV1dX3nnJvq359J4VQA+CYt9wNYONgvVFVVobOzM4NTUtRE5M6b9md9cSwiH4hIp4h0Pnr0KNuno4hkUjjfAqhMy297+xTn3EfOuRrnXM3UqT+b8ShPZVI4HQBmicg7IlICoB7AJ+EMi+LOvMZxzn0vIjsAtAFIADjhnOsNbWQUa5ksjuGc+xTApyGNhfIIrxyTCQuHTFg4ZMLCIRMWDpmwcMiEhUMmLBwyYeGQCQuHTFg4ZMLCIZOMbnIWk9evX6v89OnTwL/b1NSk8osXL1Tu6+tT+dixYyo3NDSo3NLSovKYMWNU3rv3p88N7N+/P/A4h4MzDpmwcMiEhUMmRbPGuXv3rsovX75U+fLlyypfunRJ5SdPnqh85syZ0MZWWVmp8s6dO1VubW1VecKECSrPmTNH5SVLloQ2toFwxiETFg6ZsHDIpGDXOFevXlV52bJlKg/nOkzYEomEygcPHlR5/PjxKm/cuFHladOmqTxlyhSVZ8+enekQh8QZh0xYOGTCwiGTgl3jTJ8+XeWysjKVw1zjLFyom3T41xwXL15UuaSkROVNmzaFNpaocMYhExYOmbBwyKRg1zilpaUqHz58WOVz586pPG/ePJV37do16OPPnTs3tX3hwgV1zH8dpqenR+UjR44M+tj5gDMOmQxZOCJyQkQeikhP2r5SETkvIje8n1MGewwqPEFmnGYAq3379gL4zDk3C8BnXqYiEqjPsYhUAfivc+5XXu4DUOucuyci5QDanXND3iCpqalxcek6+uzZM5X973HZtm2bysePH1f51KlTqe0NGzaEPLr4EJEu51yNf791jfOWc+6et30fwFvmkVFeynhx7JJT1oDTFtvVFiZr4TzwnqLg/Xw40F9ku9rCZL2O8wmAPwL4q/fz49BGFJGJEycOenzSpEmDHk9f89TX16tjI0YU/lWOIC/HWwB8DmC2iPSLyPtIFswKEbkB4HdepiIy5IzjnFs/wKHlIY+F8kjhz6mUFQV7rypTBw4cULmrq0vl9vb21Lb/XtXKlSuzNazY4IxDJiwcMmHhkIn5Ozkt4nSvarhu3bql8vz581PbkydPVseWLl2qck2NvtWzfft2lUUkjCFmRdj3qqjIsXDIhC/HA5oxY4bKzc3Nqe2tW7eqYydPnhw0P3/+XOXNmzerXF5ebh1mZDjjkAkLh0xYOGTCNY7RunXrUtszZ85Ux3bv3q2y/5ZEY2Ojynfu6O+E37dvn8oVFRXmcWYLZxwyYeGQCQuHTHjLIQv8rW39HzfesmWLyv5/g+XL9Xvkzp8/H97ghom3HChULBwyYeGQCdc4OTB69GiVX716pfKoUaNUbmtrU7m2tjYr43oTrnEoVCwcMmHhkAnvVYXg2rVrKvu/kqijo0Nl/5rGr7q6WuXFixdnMLrs4IxDJiwcMmHhkAnXOAH5v+L56NGjqe2zZ8+qY/fv3x/WY48cqf8Z/O85jmPblPiNiPJCkP44lSJyUUS+EJFeEfmTt58ta4t
YkBnnewC7nXPVAH4NYLuIVIMta4takMZK9wDc87b/LyJfAqgAUAeg1vtr/wDQDuDDrIwyAv51yenTp1VuampS+fbt2+ZzLViwQGX/e4zXrl1rfuyoDGuN4/U7ngfgCtiytqgFLhwR+QWA/wD4s3NOdZcerGUt29UWpkCFIyKjkCyafzrnfnztGahlLdvVFqYh1ziS7MHxdwBfOuf+lnYor1rWPnjwQOXe3l6Vd+zYofL169fN5/J/1eKePXtUrqurUzmO12mGEuQC4CIAmwD8T0S6vX1/QbJg/u21r70D4A/ZGSLFUZBXVZcADNT5hy1ri1T+zZEUCwVzr+rx48cq+782qLu7W2V/a7bhWrRoUWrb/1nxVatWqTx27NiMzhVHnHHIhIVDJiwcMsmrNc6VK1dS24cOHVLH/O/r7e/vz+hc48aNU9n/ddLp95f8XxddDDjjkAkLh0zy6qmqtbX1jdtB+D9ysmbNGpUTiYTKDQ0NKvu7pxc7zjhkwsIhExYOmbDNCQ2KbU4oVCwcMmHhkAkLh0xYOGTCwiETFg6ZsHDIhIVDJiwcMmHhkEmk96pE5BGSn/osA/BdZCcenriOLVfjmu6c+9mH/iMtnNRJRTrfdOMsDuI6triNi09VZMLCIZNcFc5HOTpvEHEdW6zGlZM1DuU/PlWRSaSFIyKrRaRPRG6KSE7b24rICRF5KCI9afti0bs5H3pLR1Y4IpIAcAzA7wFUA1jv9UvOlWYAq3374tK7Of69pZ1zkfwB8BsAbWm5EUBjVOcfYExVAHrSch+Acm+7HEBfLseXNq6PAayI0/iifKqqAPBNWu739sVJ7Ho3x7W3NBfHA3DJ/9Y5fclp7S0dhSgL51sAlWn5bW9fnATq3RyFTHpLRyHKwukAMEtE3hGREgD1SPZKjpMfezcDOezdHKC3NJDr3tIRL/LeA/AVgFsA9uV4wdmC5JebvEJyvfU+gF8i+WrlBoALAEpzNLbfIvk0dA1At/fnvbiMzznHK8dkw8UxmbBwyISFQyYsHDJh4ZAJC4dMWDhkwsIhkx8AyyZIbO5tLBIAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "train_data0, train_label_0 = train_dataset[0][0],train_dataset[0][1]\n", "train_data0 = train_data0.reshape([28,28])\n", "plt.figure(figsize=(2,2))\n", "plt.imshow(train_data0, cmap=plt.cm.binary)\n", "print('train_data0 label is: ' + str(train_label_0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 组网\n", "用paddle.nn下的API,如`Conv2d`、`MaxPool2d`、`Linear`完成LeNet的构建。" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "import paddle\n", "import paddle.nn.functional as F\n", "class LeNet(paddle.nn.Layer):\n", " def __init__(self):\n", " super(LeNet, self).__init__()\n", " self.conv1 = paddle.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2)\n", " self.max_pool1 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)\n", " self.conv2 = paddle.nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1)\n", " self.max_pool2 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)\n", " self.linear1 = paddle.nn.Linear(in_features=16*5*5, out_features=120)\n", " self.linear2 = paddle.nn.Linear(in_features=120, out_features=84)\n", " self.linear3 = paddle.nn.Linear(in_features=84, out_features=10)\n", "\n", " def forward(self, x):\n", " x = self.conv1(x)\n", " x = F.relu(x)\n", " x = self.max_pool1(x)\n", " x = F.relu(x)\n", " x = self.conv2(x)\n", " x = self.max_pool2(x)\n", " x = paddle.flatten(x, start_axis=1,stop_axis=-1)\n", " x = self.linear1(x)\n", " x = F.relu(x)\n", " x = self.linear2(x)\n", " x = F.relu(x)\n", " x = self.linear3(x)\n", " x = F.softmax(x)\n", " return x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 训练方式一\n", "组网后,开始对模型进行训练,先构建`train_loader`,加载训练数据,然后定义`train`函数,设置好损失函数后,按batch加载数据,完成模型的训练。" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", 
"output_type": "stream", "text": [ "epoch: 0, batch_id: 0, loss is: [2.3079572], acc is: [0.125]\n", "epoch: 0, batch_id: 100, loss is: [1.7078608], acc is: [0.828125]\n", "epoch: 0, batch_id: 200, loss is: [1.5642334], acc is: [0.90625]\n", "epoch: 0, batch_id: 300, loss is: [1.7024238], acc is: [0.78125]\n", "epoch: 0, batch_id: 400, loss is: [1.5536337], acc is: [0.921875]\n", "epoch: 0, batch_id: 500, loss is: [1.6908336], acc is: [0.828125]\n", "epoch: 0, batch_id: 600, loss is: [1.5622432], acc is: [0.921875]\n", "epoch: 0, batch_id: 700, loss is: [1.5251796], acc is: [0.953125]\n", "epoch: 0, batch_id: 800, loss is: [1.5698484], acc is: [0.890625]\n", "epoch: 0, batch_id: 900, loss is: [1.5524453], acc is: [0.9375]\n", "epoch: 1, batch_id: 0, loss is: [1.6443151], acc is: [0.84375]\n", "epoch: 1, batch_id: 100, loss is: [1.5547533], acc is: [0.90625]\n", "epoch: 1, batch_id: 200, loss is: [1.5019028], acc is: [1.]\n", "epoch: 1, batch_id: 300, loss is: [1.4820204], acc is: [1.]\n", "epoch: 1, batch_id: 400, loss is: [1.5215418], acc is: [0.984375]\n", "epoch: 1, batch_id: 500, loss is: [1.4972374], acc is: [1.]\n", "epoch: 1, batch_id: 600, loss is: [1.4930981], acc is: [0.984375]\n", "epoch: 1, batch_id: 700, loss is: [1.4971689], acc is: [0.984375]\n", "epoch: 1, batch_id: 800, loss is: [1.4611597], acc is: [1.]\n", "epoch: 1, batch_id: 900, loss is: [1.4903957], acc is: [0.984375]\n" ] } ], "source": [ "import paddle\n", "train_loader = paddle.io.DataLoader(train_dataset, places=paddle.CPUPlace(), batch_size=64, shuffle=True)\n", "# 加载训练集 batch_size 设为 64\n", "def train(model):\n", " model.train()\n", " epochs = 2\n", " optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n", " # 用Adam作为优化函数\n", " for epoch in range(epochs):\n", " for batch_id, data in enumerate(train_loader()):\n", " x_data = data[0]\n", " y_data = data[1]\n", " predicts = model(x_data)\n", " loss = paddle.nn.functional.cross_entropy(predicts, y_data)\n", " # 
计算损失\n", " acc = paddle.metric.accuracy(predicts, y_data, k=2)\n", " avg_loss = paddle.mean(loss)\n", " avg_acc = paddle.mean(acc)\n", " avg_loss.backward()\n", " if batch_id % 100 == 0:\n", " print(\"epoch: {}, batch_id: {}, loss is: {}, acc is: {}\".format(epoch, batch_id, avg_loss.numpy(), avg_acc.numpy()))\n", " optim.minimize(avg_loss)\n", " model.clear_gradients()\n", "model = LeNet()\n", "train(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 对模型进行验证\n", "训练完成后,需要验证模型的效果,此时,加载测试数据集,然后用训练好的模对测试集进行预测,计算损失与精度。" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "batch_id: 0, loss is: [1.4767745], acc is: [1.]\n", "batch_id: 20, loss is: [1.4841802], acc is: [0.984375]\n", "batch_id: 40, loss is: [1.4997194], acc is: [1.]\n", "batch_id: 60, loss is: [1.4895413], acc is: [1.]\n", "batch_id: 80, loss is: [1.4668798], acc is: [1.]\n", "batch_id: 100, loss is: [1.4611752], acc is: [1.]\n", "batch_id: 120, loss is: [1.4613602], acc is: [1.]\n", "batch_id: 140, loss is: [1.4923686], acc is: [1.]\n" ] } ], "source": [ "import paddle\n", "test_loader = paddle.io.DataLoader(test_dataset, places=paddle.CPUPlace(), batch_size=64)\n", "# 加载测试数据集\n", "def test(model):\n", " model.eval()\n", " batch_size = 64\n", " for batch_id, data in enumerate(test_loader()):\n", " x_data = data[0]\n", " y_data = data[1]\n", " predicts = model(x_data)\n", " # 获取预测结果\n", " loss = paddle.nn.functional.cross_entropy(predicts, y_data)\n", " acc = paddle.metric.accuracy(predicts, y_data, k=2)\n", " avg_loss = paddle.mean(loss)\n", " avg_acc = paddle.mean(acc)\n", " avg_loss.backward()\n", " if batch_id % 20 == 0:\n", " print(\"batch_id: {}, loss is: {}, acc is: {}\".format(batch_id, avg_loss.numpy(), avg_acc.numpy()))\n", "test(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 训练方式一结束\n", 
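"\n", "A note on the loss values in the logs above: because `forward` ends with `F.softmax` while `paddle.nn.functional.cross_entropy` applies softmax again internally, the reported loss cannot reach 0. With 10 classes, even a perfectly confident prediction yields a loss of about 1.4612, which is exactly the floor visible in the logs. A small NumPy check of this arithmetic (independent of Paddle; variable names here are illustrative only):\n", "\n", "```python\n", "import numpy as np\n", "\n", "def softmax(z):\n", "    e = np.exp(z - z.max())\n", "    return e / e.sum()\n", "\n", "# a perfectly confident prediction over 10 classes:\n", "# the model's own softmax outputs a (nearly) one-hot vector\n", "probs = np.zeros(10)\n", "probs[3] = 1.0\n", "\n", "# cross_entropy applies softmax once more, so the effective\n", "# loss floor is -log(softmax(one_hot)[label])\n", "loss_floor = -np.log(softmax(probs)[3])\n", "print(round(loss_floor, 4))\n", "```\n", "\n", 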
"以上就是训练方式一,通过这种方式,可以清楚的看到训练和测试中的每一步过程。但是,这种方式句法比较复杂。因此,我们提供了训练方式二,能够更加快速、高效的完成模型的训练与测试。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.训练方式二\n", "通过paddle提供的`Model` 构建实例,使用封装好的训练与测试接口,快速完成模型训练与测试。" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "import paddle\n", "from paddle.static import InputSpec\n", "from paddle.metric import Accuracy\n", "inputs = InputSpec([None, 784], 'float32', 'x')\n", "labels = InputSpec([None, 10], 'float32', 'x')\n", "model = paddle.Model(LeNet(), inputs, labels)\n", "optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n", "\n", "model.prepare(\n", " optim,\n", " paddle.nn.loss.CrossEntropyLoss(),\n", " Accuracy(topk=(1, 2))\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 使用model.fit来训练模型" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/2\n", "step 100/938 - loss: 1.5644 - acc_top1: 0.6281 - acc_top2: 0.7145 - 14ms/step\n", "step 200/938 - loss: 1.6221 - acc_top1: 0.7634 - acc_top2: 0.8380 - 13ms/step\n", "step 300/938 - loss: 1.5123 - acc_top1: 0.8215 - acc_top2: 0.8835 - 13ms/step\n", "step 400/938 - loss: 1.4791 - acc_top1: 0.8530 - acc_top2: 0.9084 - 13ms/step\n", "step 500/938 - loss: 1.4904 - acc_top1: 0.8733 - acc_top2: 0.9235 - 13ms/step\n", "step 600/938 - loss: 1.5101 - acc_top1: 0.8875 - acc_top2: 0.9341 - 13ms/step\n", "step 700/938 - loss: 1.4642 - acc_top1: 0.8983 - acc_top2: 0.9417 - 13ms/step\n", "step 800/938 - loss: 1.4789 - acc_top1: 0.9069 - acc_top2: 0.9477 - 13ms/step\n", "step 900/938 - loss: 1.4773 - acc_top1: 0.9135 - acc_top2: 0.9523 - 13ms/step\n", "step 938/938 - loss: 1.4714 - acc_top1: 0.9157 - acc_top2: 0.9538 - 13ms/step\n", "save checkpoint at /Users/chenlong/online_repo/book/paddle2.0_docs/image_classification/mnist_checkpoint/0\n", "Epoch 2/2\n", "step 100/938 - loss: 1.4863 - acc_top1: 
0.9695 - acc_top2: 0.9897 - 13ms/step\n", "step 200/938 - loss: 1.4883 - acc_top1: 0.9707 - acc_top2: 0.9912 - 13ms/step\n", "step 300/938 - loss: 1.4695 - acc_top1: 0.9720 - acc_top2: 0.9910 - 13ms/step\n", "step 400/938 - loss: 1.4628 - acc_top1: 0.9720 - acc_top2: 0.9915 - 13ms/step\n", "step 500/938 - loss: 1.5079 - acc_top1: 0.9727 - acc_top2: 0.9918 - 13ms/step\n", "step 600/938 - loss: 1.4803 - acc_top1: 0.9727 - acc_top2: 0.9919 - 13ms/step\n", "step 700/938 - loss: 1.4612 - acc_top1: 0.9732 - acc_top2: 0.9923 - 13ms/step\n", "step 800/938 - loss: 1.4755 - acc_top1: 0.9732 - acc_top2: 0.9923 - 13ms/step\n", "step 900/938 - loss: 1.4698 - acc_top1: 0.9732 - acc_top2: 0.9922 - 13ms/step\n", "step 938/938 - loss: 1.4764 - acc_top1: 0.9734 - acc_top2: 0.9923 - 13ms/step\n", "save checkpoint at /Users/chenlong/online_repo/book/paddle2.0_docs/image_classification/mnist_checkpoint/1\n", "save checkpoint at /Users/chenlong/online_repo/book/paddle2.0_docs/image_classification/mnist_checkpoint/final\n" ] } ], "source": [ "model.fit(train_dataset,\n", "          epochs=2,\n", "          batch_size=64,\n", "          log_freq=100,\n", "          save_dir='mnist_checkpoint')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Evaluating the model with model.evaluate" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Eval begin...\n", "step 10/157 - loss: 1.5238 - acc_top1: 0.9750 - acc_top2: 0.9938 - 7ms/step\n", "step 20/157 - loss: 1.5143 - acc_top1: 0.9727 - acc_top2: 0.9922 - 7ms/step\n", "step 30/157 - loss: 1.5290 - acc_top1: 0.9698 - acc_top2: 0.9932 - 7ms/step\n", "step 40/157 - loss: 1.4624 - acc_top1: 0.9684 - acc_top2: 0.9930 - 7ms/step\n", "step 50/157 - loss: 1.4771 - acc_top1: 0.9697 - acc_top2: 0.9925 - 7ms/step\n", "step 60/157 - loss: 1.5066 - acc_top1: 0.9701 - acc_top2: 0.9922 - 6ms/step\n", "step 70/157 - loss: 1.4804 - acc_top1: 0.9699 - acc_top2: 0.9920 - 6ms/step\n", "step 80/157 - loss: 1.4718 - 
acc_top1: 0.9707 - acc_top2: 0.9930 - 6ms/step\n", "step 90/157 - loss: 1.4874 - acc_top1: 0.9726 - acc_top2: 0.9934 - 6ms/step\n", "step 100/157 - loss: 1.4612 - acc_top1: 0.9736 - acc_top2: 0.9936 - 6ms/step\n", "step 110/157 - loss: 1.4612 - acc_top1: 0.9746 - acc_top2: 0.9938 - 6ms/step\n", "step 120/157 - loss: 1.4763 - acc_top1: 0.9763 - acc_top2: 0.9941 - 6ms/step\n", "step 130/157 - loss: 1.4786 - acc_top1: 0.9764 - acc_top2: 0.9935 - 6ms/step\n", "step 140/157 - loss: 1.4612 - acc_top1: 0.9775 - acc_top2: 0.9939 - 6ms/step\n", "step 150/157 - loss: 1.4894 - acc_top1: 0.9785 - acc_top2: 0.9943 - 6ms/step\n", "step 157/157 - loss: 1.4612 - acc_top1: 0.9777 - acc_top2: 0.9941 - 6ms/step\n", "Eval samples: 10000\n" ] }, { "data": { "text/plain": [ "{'loss': [1.4611504], 'acc_top1': 0.9777, 'acc_top2': 0.9941}" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.evaluate(test_dataset, batch_size=64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### End of training approach 2\n", "That concludes training approach 2, which makes it quick and efficient to train and evaluate a network." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial classified the MNIST handwritten-digit dataset with LeNet. It demonstrated two ways to train a model: one builds and runs the model quickly and is well suited to newcomers; the other walks through every step of training explicitly and suits more advanced users." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 }