diff --git a/tutorials/notebook/customized_debugging_information.ipynb b/tutorials/notebook/customized_debugging_information.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..ee24b878115b48f5cb9b89011220188a47e99dd0
--- /dev/null
+++ b/tutorials/notebook/customized_debugging_information.ipynb
@@ -0,0 +1,617 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Customized Debugging Information"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Overview"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This tutorial uses the [quick start](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/lenet.py) code as its sample. We build customized debugging utilities such as `Callback`, `metrics`, the `Print` operator, and log printing, add them to the code, and look at the runtime behavior to show how the customized debugging capabilities provided by MindSpore help you debug a training network quickly.\n",
+ "The walkthrough proceeds as follows:\n",
+ "1. Prepare the dataset.\n",
+ "2. Define the deep learning network LeNet5.\n",
+ "3. Build the StopAtTime class from the Callback base class to limit the training time.\n",
+ "4. Set the log environment variables.\n",
+ "5. Define the model and run training.\n",
+ "6. Run the evaluation."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Preparing the Dataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Downloading the Dataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here we need to process the MNIST dataset and augment it into a data format suitable for the LeNet network (see [quick_start.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/quick_start.ipynb) for how this is done). The training set can be downloaded from {\"http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\"}.\n",
+ "Place the dataset in `Jupyter working directory+\\MNIST_Data\\train\\`, organized as follows:"
+ ]
+ },
+ {
+ "cell_type": "raw",
+ "metadata": {},
+ "source": [
+ "MNIST\n",
+ "├── test\n",
+ "│ ├── t10k-images-idx3-ubyte\n",
+ "│ └── t10k-labels-idx1-ubyte\n",
+ "└── train\n",
+ " ├── train-images-idx3-ubyte\n",
+ " └── train-labels-idx1-ubyte"
+ ]
+ },
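+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you prefer to fetch the files from within the notebook, the following is a minimal sketch (not part of the original tutorial) that downloads and decompresses the two training files into `./MNIST_Data/train/` using only the Python standard library; the target directory is an assumption based on the structure shown above:\n",
+ "```python\n",
+ "import gzip\n",
+ "import os\n",
+ "import shutil\n",
+ "from urllib.request import urlretrieve\n",
+ "\n",
+ "train_dir = \"./MNIST_Data/train\"\n",
+ "os.makedirs(train_dir, exist_ok=True)\n",
+ "for name in [\"train-images-idx3-ubyte.gz\", \"train-labels-idx1-ubyte.gz\"]:\n",
+ "    # download the gzip archive, then unpack it next to itself\n",
+ "    gz_path = os.path.join(train_dir, name)\n",
+ "    urlretrieve(\"http://yann.lecun.com/exdb/mnist/\" + name, gz_path)\n",
+ "    with gzip.open(gz_path, \"rb\") as f_in, open(gz_path[:-3], \"wb\") as f_out:\n",
+ "        shutil.copyfileobj(f_in, f_out)\n",
+ "```"
+ ]
+ },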
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Dataset Augmentation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The downloaded dataset needs to be processed by `mindspore.dataset` into data that the MindSpore framework can use, and then augmented with a series of operators provided by the framework to meet the data processing requirements of the LeNet network."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import mindspore.dataset as ds\n",
+ "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.transforms.c_transforms as C\n",
+ "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.common import dtype as mstype\n",
+ "\n",
+ "def create_dataset(data_path, batch_size=32, repeat_size=1,\n",
+ " num_parallel_workers=1):\n",
+ " \"\"\" create dataset for train or test\n",
+ " Args:\n",
+ " data_path (str): Data path\n",
+ " batch_size (int): The number of data records in each group\n",
+ " repeat_size (int): The number of replicated data records\n",
+ " num_parallel_workers (int): The number of parallel workers\n",
+ " \"\"\"\n",
+ " # define dataset\n",
+ " mnist_ds = ds.MnistDataset(data_path)\n",
+ "\n",
+ " # define operation parameters\n",
+ " resize_height, resize_width = 32, 32\n",
+ " rescale = 1.0 / 255.0\n",
+ " shift = 0.0\n",
+ " rescale_nml = 1 / 0.3081\n",
+ " shift_nml = -1 * 0.1307 / 0.3081\n",
+ "\n",
+ " # define map operations\n",
+ " resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR)\n",
+ " rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)\n",
+ " rescale_op = CV.Rescale(rescale, shift)\n",
+ " hwc2chw_op = CV.HWC2CHW() \n",
+ " type_cast_op = C.TypeCast(mstype.int32)\n",
+ "\n",
+ " # apply map operations on images\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ "\n",
+ " # apply DatasetOps\n",
+ " buffer_size = 10000\n",
+ " mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)\n",
+ " mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n",
+ " mnist_ds = mnist_ds.repeat(repeat_size)\n",
+ "\n",
+ " return mnist_ds"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Defining the Deep Learning Network LeNet5"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For the MNIST dataset we use the LeNet5 network. We first initialize the convolution and fully connected layers, and then build the neural network in `construct`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from mindspore.common.initializer import TruncatedNormal\n",
+ "import mindspore.nn as nn\n",
+ "from mindspore.ops import operations as P\n",
+ "\n",
+ "def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):\n",
+ " \"\"\"Conv layer weight initial.\"\"\"\n",
+ " weight = weight_variable()\n",
+ " return nn.Conv2d(in_channels, out_channels,\n",
+ " kernel_size=kernel_size, stride=stride, padding=padding,\n",
+ " weight_init=weight, has_bias=False, pad_mode=\"valid\")\n",
+ "\n",
+ "\n",
+ "def fc_with_initialize(input_channels, out_channels):\n",
+ " \"\"\"Fc layer weight initial.\"\"\"\n",
+ " weight = weight_variable()\n",
+ " bias = weight_variable()\n",
+ " return nn.Dense(input_channels, out_channels, weight, bias)\n",
+ "\n",
+ "\n",
+ "def weight_variable():\n",
+ " \"\"\"Weight initial.\"\"\"\n",
+ " return TruncatedNormal(0.02)\n",
+ "\n",
+ "\n",
+ "class LeNet5(nn.Cell):\n",
+ " \"\"\"Lenet network structure.\"\"\"\n",
+ " def __init__(self):\n",
+ " super(LeNet5, self).__init__()\n",
+ " self.batch_size = 32\n",
+ " self.conv1 = conv(1, 6, 5)\n",
+ " self.conv2 = conv(6, 16, 5)\n",
+ " self.fc1 = fc_with_initialize(16 * 5 * 5, 120)\n",
+ " self.fc2 = fc_with_initialize(120, 84)\n",
+ " self.fc3 = fc_with_initialize(84, 10)\n",
+ " self.relu = nn.ReLU()\n",
+ " self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n",
+ " self.flatten = nn.Flatten()\n",
+ " \n",
+ " def construct(self, x):\n",
+ " x = self.conv1(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.max_pool2d(x)\n",
+ " x = self.conv2(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.max_pool2d(x)\n",
+ " x = self.flatten(x)\n",
+ " x = self.fc1(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.fc2(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.fc3(x)\n",
+ " return x"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Building the Custom Callback StopAtTime"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We use the callback base class `Callback` to build the training timer `StopAtTime`. The base class (located in the source code under `mindspore/train/callback`) looks like this:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "class Callback():\n",
+ " def begin(self, run_context):\n",
+ " pass\n",
+ " def epoch_begin(self, run_context):\n",
+ " pass\n",
+ " def epoch_end(self, run_context):\n",
+ " pass\n",
+ " def step_begin(self, run_context): \n",
+ " pass\n",
+ " def step_end(self, run_context):\n",
+ " pass\n",
+ " def end(self, run_context):\n",
+ " pass\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- `begin`: executed when training starts.\n",
+ "- `epoch_begin`: executed at the beginning of every epoch.\n",
+ "- `epoch_end`: executed at the end of every epoch.\n",
+ "- `step_begin`: executed at the beginning of every step.\n",
+ "- `step_end`: executed at the end of every step.\n",
+ "- `end`: executed when training ends.\n",
+ "\n",
+ "Besides these methods, each of them takes a `run_context` argument, an object that stores all kinds of parameters of the training run. Here we place `print(cb_params.list_callback)` in `end` (you could also use `print(cb_params)` to print every parameter, but there are too many of them, so we pick a single one as an example). After the training run we will use the printed output to briefly explain the meaning of the parameters in `run_context`. Now let us build the training timer:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from mindspore.train.callback import Callback\n",
+ "import time\n",
+ "\n",
+ "class StopAtTime(Callback):\n",
+ " def __init__(self, run_time):\n",
+ " super(StopAtTime, self).__init__()\n",
+ " self.run_time = run_time*60\n",
+ "\n",
+ " def begin(self, run_context):\n",
+ " cb_params = run_context.original_args()\n",
+ " cb_params.init_time = time.time()\n",
+ " \n",
+ " def step_end(self, run_context):\n",
+ " cb_params = run_context.original_args()\n",
+ " epoch_num = cb_params.cur_epoch_num\n",
+ " step_num = cb_params.cur_step_num\n",
+ " loss = cb_params.net_outputs\n",
+ " cur_time = time.time()\n",
+ " if (cur_time - cb_params.init_time) > self.run_time:\n",
+ " print(\"epoch: \", epoch_num, \" step: \", step_num, \" loss: \", loss)\n",
+ "            run_context.request_stop()\n",
+ "\n",
+ "    def end(self, run_context):\n",
+ " cb_params = run_context.original_args()\n",
+ " print(cb_params.list_callback)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setting Log Environment Variables"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "MindSpore uses `glog` to output logs; here we direct the log output to the screen:\n",
+ "\n",
+ "`GLOG_v`: controls the log level. The default value is 2, i.e. the WARNING level; the mapping is 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR. We set it to 1 here.\n",
+ "\n",
+ "`GLOG_logtostderr`: controls where the logs go. When set to `1`, logs are printed to the screen; when set to `0`, logs are written to a file. With screen output, the log lines are shown in red; with file output, a `mindspore.log` file is created under the `GLOG_log_dir` path.\n",
+ "\n",
+ "> For more settings see the official documentation: https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/customized_debugging_information.html"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'GLOG_v': '1', 'GLOG_logtostderr': '1'}\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "from mindspore import log as logger\n",
+ "\n",
+ "os.environ['GLOG_v'] = '1'\n",
+ "os.environ['GLOG_logtostderr'] = '1'\n",
+ "os.environ['GLOG_log_dir'] = 'D:/' if os.name==\"nt\" else '/var/log/mindspore'\n",
+ "os.environ['logger_maxBytes'] = '5242880'\n",
+ "os.environ['logger_backupCount'] = '10'\n",
+ "print(logger.get_log_config())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The printed configuration shows that `GLOG_v` is `1`, i.e. the `INFO` level.\n",
+ "\n",
+ "The output mode `GLOG_logtostderr` is `1`, which means logs are printed to the screen."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Defining the Model and Running Training"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Defining the Model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this step we first delete the previously generated `.ckpt` and `.meta` model files, and then configure the parameters the model needs into `Model`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from mindspore import context\n",
+ "from mindspore.train import Model\n",
+ "from mindspore.nn.metrics import Accuracy\n",
+ "from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits\n",
+ "from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor\n",
+ "\n",
+ "# clean files\n",
+ "if os.name == \"nt\":\n",
+ " os.system('del/f/s/q *.ckpt *.meta')\n",
+ "else:\n",
+ " os.system('rm -f *.ckpt *.meta *.pb')\n",
+ "\n",
+ "context.set_context(mode=context.GRAPH_MODE, device_target=\"CPU\")\n",
+ "lr = 0.01\n",
+ "momentum = 0.9 \n",
+ "epoch_size = 3\n",
+ "train_data_path = \"./MNIST_Data/train\"\n",
+ "eval_data_path = \"./MNIST_Data/train\"\n",
+ "\n",
+ "net_loss = SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction='mean')\n",
+ "repeat_size = epoch_size\n",
+ "network = LeNet5()\n",
+ "\n",
+ "metrics = {\n",
+ " 'accuracy': nn.Accuracy(),\n",
+ " 'loss': nn.Loss(),\n",
+ " 'precision': nn.Precision(),\n",
+ " 'recall': nn.Recall(),\n",
+ " 'f1_score': nn.F1()\n",
+ " }\n",
+ "net_opt = nn.Momentum(network.trainable_params(), lr, momentum)\n",
+ "\n",
+ "config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)\n",
+ "\n",
+ "ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\", config=config_ck)\n",
+ "\n",
+ "model = Model(network, net_loss, net_opt, metrics=metrics)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Running Training"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "When building the training procedure we pass three callbacks to `model.train`: `ckpoint_cb`, `LossMonitor`, and `stop_cb`. They are:\n",
+ "\n",
+ "`ckpoint_cb`: a `ModelCheckpoint` instance, the callback that saves the model.\n",
+ "\n",
+ "`LossMonitor`: the loss monitor, which prints the loss value of each step during training.\n",
+ "\n",
+ "`stop_cb`: the `StopAtTime` training timer we just built."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We set the training timer `StopAtTime` to 18 seconds, i.e. `run_time=0.3` (minutes)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "============== Starting Training ==============\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[INFO] ME(10004:11540,MainProcess):2020-07-22-16:52:22.904.779 [mindspore\\train\\serialization.py:308] Execute save the graph process.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "epoch: 1 step 375, loss is 2.3015153408050537\n",
+ "epoch: 1 step 750, loss is 2.2981557846069336\n",
+ "epoch: 1 step 1125, loss is 2.304901361465454\n",
+ "epoch: 1 step 1500, loss is 0.27651622891426086\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[INFO] ME(10004:11540,MainProcess):2020-07-22-16:52:33.315.965 [mindspore\\train\\serialization.py:119] Execute save checkpoint process.\n",
+ "[INFO] ME(10004:11540,MainProcess):2020-07-22-16:52:33.325.978 [mindspore\\train\\serialization.py:147] Save checkpoint process finish.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "epoch: 1 step 1875, loss is 0.263612300157547\n",
+ "Epoch time: 11051.060, per step time: 5.894, avg loss: 1.702\n",
+ "************************************************************\n",
+ "epoch: 2 step 375, loss is 0.22589832544326782\n",
+ "epoch: 2 step 750, loss is 0.12003941088914871\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[INFO] ME(10004:11540,MainProcess):2020-07-22-16:52:40.282.209 [mindspore\\train\\serialization.py:119] Execute save checkpoint process.\n",
+ "[INFO] ME(10004:11540,MainProcess):2020-07-22-16:52:40.297.275 [mindspore\\train\\serialization.py:147] Save checkpoint process finish.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "epoch: 2 step: 2927 loss: 0.26415202\n",
+ "Epoch time: 6953.909, per step time: 3.709, avg loss: 0.130\n",
+ "************************************************************\n",
+ "[, , <__main__.StopAtTime object at 0x000001ACBD016148>]\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"============== Starting Training ==============\")\n",
+ "ds_train = create_dataset(train_data_path, repeat_size=repeat_size)\n",
+ "stop_cb = StopAtTime(run_time=0.3)\n",
+ "model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(375), stop_cb], dataset_sink_mode=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The output above has two parts:\n",
+ "- Log information:\n",
+ "    - The `[INFO]` lines are the log output. Since there are no warnings, they mainly record the key steps of the training process.\n",
+ "\n",
+ "- Callback information:\n",
+ "    - `LossMonitor`: the loss value of each step.\n",
+ "    - `StopAtTime`: at the end of each epoch and when the time limit is reached, it prints the total training time of the current epoch (in milliseconds), the time spent per step, and the average loss. At the end of training it also prints `run_context.list_callback`, which lists the callbacks used in this run. In addition, `run_context.original_args` contains the following parameters:\n",
+ "    - `train_network`: the parameters of the network.\n",
+ "    - `epoch_num`: the number of training epochs.\n",
+ "    - `batch_num`: the number of steps in one epoch.\n",
+ "    - `mode`: the mode of the `Model`.\n",
+ "    - `loss_fn`: the loss function in use.\n",
+ "    - `optimizer`: the optimizer in use.\n",
+ "    - `parallel_mode`: the parallel mode.\n",
+ "    - `device_number`: the number of training devices.\n",
+ "    - `train_dataset`: the training dataset.\n",
+ "    - `list_callback`: the callbacks in use.\n",
+ "    - `train_dataset_element`: the data of the current batch.\n",
+ "    - `cur_step_num`: the current training step.\n",
+ "    - `cur_epoch_num`: the current epoch.\n",
+ "    - `net_outputs`: the network output.\n",
+ "\n",
+ "    Almost all of the important data produced during training can be retrieved through callbacks, which is why callbacks are one of the most commonly used customized debugging tools. A short illustrative sketch follows."
+ ]
+ },
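+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a further illustration of reading these `run_context` fields inside a callback, the sketch below (a hypothetical callback, not used in this tutorial) stops training once the loss reported in `net_outputs` falls below a chosen threshold:\n",
+ "```python\n",
+ "from mindspore.train.callback import Callback\n",
+ "\n",
+ "class StopAtLoss(Callback):\n",
+ "    # illustrative only: stop training when the step loss drops below a threshold\n",
+ "    def __init__(self, threshold=0.05):\n",
+ "        super(StopAtLoss, self).__init__()\n",
+ "        self.threshold = threshold\n",
+ "\n",
+ "    def step_end(self, run_context):\n",
+ "        cb_params = run_context.original_args()\n",
+ "        loss = cb_params.net_outputs  # assumed to be the scalar loss Tensor of this step\n",
+ "        if float(loss.asnumpy()) < self.threshold:\n",
+ "            print(\"epoch:\", cb_params.cur_epoch_num, \"step:\", cb_params.cur_step_num, \"loss:\", loss)\n",
+ "            run_context.request_stop()\n",
+ "```"
+ ]
+ },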
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Running the Evaluation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "During evaluation, the customized `metrics` we defined are invoked by `model.eval`. In addition to the model's prediction accuracy, the scores under other criteria such as `recall` and `F1` are also printed:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "============== Starting Testing ==============\n",
+ "============== Accuracy:{'accuracy': 0.9712666666666666, 'loss': 0.0918103571044902, 'precision': array([0.979007 , 0.9815034 , 0.9695254 , 0.99006449, 0.97207177,\n",
+ " 0.9750469 , 0.97660037, 0.9832609 , 0.91311292, 0.97466828]), 'recall': array([0.99206483, 0.98383269, 0.97717355, 0.92643941, 0.98305375,\n",
+ " 0.95867921, 0.9873268 , 0.97509976, 0.97709793, 0.95074802]), 'f1_score': array([0.98549266, 0.98266667, 0.97333445, 0.95719582, 0.97753191,\n",
+ " 0.96679379, 0.98193429, 0.97916333, 0.94402246, 0.96255956])} ==============\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"============== Starting Testing ==============\")\n",
+ "ds_eval = create_dataset(eval_data_path, repeat_size=repeat_size)\n",
+ "acc = model.eval(ds_eval, dataset_sink_mode=False)\n",
+ "print(\"============== Accuracy:{} ==============\".format(acc))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `[INFO]` lines are log information.\n",
+ "\n",
+ "The `Accuracy` part is the output controlled by the `metrics`: the model's overall prediction accuracy together with the per-class (0-9) scores under the other criteria. For how each metric is computed, search for `mindspore.nn` on the official website; we will not go into the details here."
+ ]
+ },
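+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To get a feel for how such a metric is computed, the snippet below (a standalone sketch, independent of the model above) feeds one small batch of logits and labels into `nn.Accuracy` directly:\n",
+ "```python\n",
+ "import numpy as np\n",
+ "import mindspore.nn as nn\n",
+ "from mindspore import Tensor\n",
+ "\n",
+ "# three samples over three classes; the largest value in each row is the predicted class\n",
+ "y_pred = Tensor(np.array([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1], [0.1, 0.1, 0.8]], np.float32))\n",
+ "y = Tensor(np.array([1, 0, 2], np.int32))\n",
+ "\n",
+ "metric = nn.Accuracy('classification')\n",
+ "metric.clear()\n",
+ "metric.update(y_pred, y)\n",
+ "print(metric.eval())  # 1.0, since every prediction matches its label\n",
+ "```"
+ ]
+ },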
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Summary\n",
+ "\n",
+ "We trained the LeNet5 network on the MNIST dataset with customized debugging utilities added to the code, demonstrated how to use them and part of what they can do, and along the way saw the data they can expose during training, which shows how convenient these customized debugging tools are. This concludes the walkthrough."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/tutorials/notebook/quick_start.ipynb b/tutorials/notebook/quick_start.ipynb
index 558fd5bc0b20632de3236e16cebec89127e90ab4..8f425891f317ab34f53d1164e076343d890f39a1 100644
--- a/tutorials/notebook/quick_start.ipynb
+++ b/tutorials/notebook/quick_start.ipynb
@@ -204,7 +204,7 @@
},
{
"data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAEICAYAAACZA4KlAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAANdklEQVR4nO3df6wlZX3H8fendF0i0pSVAitQoRRTraGruUUTmpaGiohNgD8kbhqyJKZLIyQ1wUZCm0iampCmQm2sP5ayZWkVIVUCbWiFYhOCNoQLwWURKpSgrLuyNUBFq+uC3/5xh/Zyub/2nDk/7j7vV3Jy5sycOfPdyX7uM2eemfOkqpB06PuZSRcgaTwMu9QIwy41wrBLjTDsUiMMu9QIw65XSfJUkt+ZdB3ql2HX2CV5c5KvJPnvJE8kuWDSNbXAsGtkkvzsEvNuA/4J2ABsBf4+yZvGXF5zDPsa0h1efzjJzq5VvDnJ4UkuTnLvgvdWkl/upm9I8qkk/5zkB0m+muS4JH+Z5LkkjyV524LN/XqSb3TL/zbJ4fM++3eTPJTk+SRfS3Lagho/kmQn8MNFAv8rwBuAa6vqpar6CvBV4KIed5UWYdjXnguBc4CTgdOAiw9ivT8Bjgb2A/8OPNi9/gfgmgXv/z3g3cApwJu6dUnydmA7cAnweuCzwO1J1s9bdzPwXuDnq+rF7g/Np7plWaS2AG9d5b9DAzLsa89fVdWeqnoW+Edg0yrXu7WqHqiqHwO3Aj+uqhur6iXgZmBhy/7Jqnq6287HmAswwO8Dn62q+7qWeQdzfzzeuaDGp6vqRwBV9cGq+mC37DFgH/BHSdYlORv4LeC1B7MTdPAM+9rz3XnT/wO8bpXrPTNv+keLvF74OU/Pm/4Wc4feAG8ELu8O4Z9P8jxw4rzlC9d9hao6AJzPXMv/XeBy4BZg9yr/HRrQq06gaE36IfNaxiTH9fCZJ86b/kVgTzf9NPCxqvrYMusueytlVe1krjUHIMnXgB0D1qlVsmU/NHwd+NUkm7oTaVf18JmXJjkhyQbgSuYO9QGuA/4gyTsy54gk701y5Go/OMlp3YnF1yb5MLARuKGHmrUMw34IqKpvAn8K/CvwOHDv8musyueBO4Enu8efdduaZe57+yeB54AnWOEkYZLPJPnMvFkXAXuZ++5+FvCuqtrfQ81aRvzxCqkNtuxSIwy71AjDLjXCsEuNGGs/+2uyvg7niHFuUmrKj/khP6n9i12SPFzYk5wDfAI4DPibqrp6ufcfzhG8I2cNs0lJy7iv7l5y2cCH8UkOA/4aeA/wFmBzkrcM+nmSRmuY7+ynA09U1ZNV9RPgC8B5/ZQlqW/DhP14XnnDw+5u3isk2ZpkNsnsAbxISpqUYcK+2EmAV12OV1XbqmqmqmbWsX6RVSSNwzBh380r74w6gf+/M0rSlBkm7PcDpyY5OclrgPcDt/dTlqS+Ddz11v3c0GXAl5nretteVY/0VpmkXg3Vz15VdwB39FSLpBHyclmpEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdasRQQzYneQp4AXgJeLGqZvooSlL/hgp757er6ns9fI6kEfIwXmrEsGEv4M4kDyTZutgbkmxNMptk9gD7h9ycpEENexh/RlXtSXIMcFeSx6rqnvlvqKptwDaAn8uGGnJ7kgY0VMteVXu6533ArcDpfRQlqX8Dhz3JEUmOfHkaOBvY1Vdhkvo1zGH8scCtSV7+nM9X1b/0UpWk3g0c9qp6Evi1HmuRNEJ2vUmNMOxSIwy71AjDLjXCsEuN6ONGGB3CvrznoUmXMLB3v2HTpEuYKrbsUiMMu9QIwy41wrBLjTDsUiMMu9QIwy41wn72Q9xa7idXv2zZpUYYdqkRhl1qhGGXGmHYpUYYdqkRhl1qhP3shwD70rUatuxSIwy71AjDLjXCsEuNMOxSIwy71AjDLjXCfvY1wH509WHFlj3J9iT7kuyaN29DkruSPN49HzXaMiUNazWH8TcA5yyYdwVwd1WdCtzdvZY0xVYMe1XdAzy7YPZ5wI5uegdwfs91SerZoCfojq2qvQDd8zFLvTHJ1iSzSWYPsH/AzUka1sjPxlfVtqqaqaqZdawf9eYkLWHQsD+TZCNA97yvv5IkjcKgYb8d2NJNbwFu66ccSaOyYj97kpuAM4Gjk+wGPgpcDdyS5APAt4H3jbLIQ90k+9EnOYa51w+M14phr6rNSyw6q+daJI2Ql8tKjTDsUiMMu9QIwy41wrBLjfAW1zFotWsN7F6bJrbsUiMMu9QIwy41wrBLjTDsUiMMu9QIwy41wn72Q8Ck+9K1NtiyS40w7FIjDLvUCMMuNcKwS40w7FIjDLvUCPvZe+A921oLbNmlRhh2qRGGXWqEYZcaYdilRhh2qRGGXWqE/exrgPerL879cnBWbNmTbE+yL8muefOuSvKdJA91j3NHW6akYa3mMP4G4JxF5l9bVZu6xx39liWpbyuGvaruAZ4dQy2SRmiYE3SXJdnZHeYftdSbkmxNMptk9gD7h9icpGEMGvZPA6cAm4C9wMeXemNVbauqmaqaWcf6ATcnaVgDhb2qnqmql6rqp8B1wOn9liWpbwOFPcnGeS8vAHYt9V5J02HFfvYkNwFnAkcn2Q18FDgzySaggKeAS0ZYo6aY9/KvHSuGvao2LzL7+hHUImmEvFxWaoRhlxph2KVGGHapEYZdaoS3uK4BK3VvDXOrp11n7bBllxph2KVGGHapEYZdaoRhlxph2KVGGHapEfaz92Clfu5R92Ufqn3l/lR0v2zZpUYYdqkRhl1qhGGXGmHYpUYYdqkRhl1qhP3sYzDpfngJbNmlZhh2qRGGXWqEYZcaYdilRhh2qRGGXWrEaoZsPhG4ETgO+Cmwrao+kWQDcDNwEnPDNl9YVc+NrtRD11q+b9trBNaO1bTsLwKXV9WbgXcClyZ5C3AFcHdVnQrc3b2WNKVWDHtV7a2qB7vpF4BHgeOB84Ad3dt2AOePqkhJwzuo7+xJTgLeBtwHHFtVe2HuDwJwTN/FSerPqsOe5HXAF4EPVdX3D2K9rUlmk8weYP8gNUrqwarCnmQdc0H/XFV9qZv9TJKN3fKNwL7F1q2qbVU1U1Uz61jfR82SBrBi2JMEuB54tKqumbfodmBLN70FuK3/8iT1ZTW3uJ4BXAQ8nOTlfpYrgauBW5J8APg28L7RlCipDyuGvaruBbLE4rP6LUfSqHgFndQIwy41wrBLjTDsUiMMu9QIwy41wrBLjTDsUiMMu9QIwy41wrBLjTDsUiMMu9QIwy41wiGbNTFr+Se01yJbdqkRhl1qhGGXGmHYpUYYdqkRhl1qhGGXGmHYpUYYdqkRhl1qhGGXGmHYpUYYdqkRhl1qhGGXGrFi2JOcmOTfkjya5JEkf9j
NvyrJd5I81D3OHX25kga1mh+veBG4vKoeTHIk8ECSu7pl11bVX4yuPEl9WTHsVbUX2NtNv5DkUeD4URcmqV8H9Z09yUnA24D7ulmXJdmZZHuSo5ZYZ2uS2SSzB9g/VLGSBrfqsCd5HfBF4ENV9X3g08ApwCbmWv6PL7ZeVW2rqpmqmlnH+h5KljSIVYU9yTrmgv65qvoSQFU9U1UvVdVPgeuA00dXpqRhreZsfIDrgUer6pp58zfOe9sFwK7+y5PUl9WcjT8DuAh4OMlD3bwrgc1JNgEFPAVcMpIKJfViNWfj7wWyyKI7+i9H0qh4BZ3UCMMuNcKwS40w7FIjDLvUCMMuNcIhmzUUh11eO2zZpUYYdqkRhl1qhGGXGmHYpUYYdqkRhl1qRKpqfBtL/gv41rxZRwPfG1sBB2daa5vWusDaBtVnbW+sql9YbMFYw/6qjSezVTUzsQKWMa21TWtdYG2DGldtHsZLjTDsUiMmHfZtE97+cqa1tmmtC6xtUGOpbaLf2SWNz6RbdkljYtilRkwk7EnOSfIfSZ5IcsUkalhKkqeSPNwNQz074Vq2J9mXZNe8eRuS3JXk8e550TH2JlTbVAzjvcww4xPdd5Me/nzs39mTHAZ8E3gXsBu4H9hcVd8YayFLSPIUMFNVE78AI8lvAj8Abqyqt3bz/hx4tqqu7v5QHlVVH5mS2q4CfjDpYby70Yo2zh9mHDgfuJgJ7rtl6rqQMey3SbTspwNPVNWTVfUT4AvAeROoY+pV1T3Aswtmnwfs6KZ3MPefZeyWqG0qVNXeqnqwm34BeHmY8Ynuu2XqGotJhP144Ol5r3czXeO9F3BnkgeSbJ10MYs4tqr2wtx/HuCYCdez0IrDeI/TgmHGp2bfDTL8+bAmEfbFhpKapv6/M6rq7cB7gEu7w1WtzqqG8R6XRYYZnwqDDn8+rEmEfTdw4rzXJwB7JlDHoqpqT/e8D7iV6RuK+pmXR9DtnvdNuJ7/M03DeC82zDhTsO8mOfz5JMJ+P3BqkpOTvAZ4P3D7BOp4lSRHdCdOSHIEcDbTNxT17cCWbnoLcNsEa3mFaRnGe6lhxpnwvpv48OdVNfYHcC5zZ+T/E/jjSdSwRF2/BHy9ezwy6dqAm5g7rDvA3BHRB4DXA3cDj3fPG6aotr8DHgZ2MhesjROq7TeY+2q4E3ioe5w76X23TF1j2W9eLis1wivopEYYdqkRhl1qhGGXGmHYpUYYdqkRhl1qxP8CnmgwF585kCIAAAAASUVORK5CYII=\n",
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAEICAYAAACZA4KlAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAANnklEQVR4nO3df6xkZX3H8fendF0i2pSVAitSoRRTqbGruUUTmtaGiohNgD8kkoZgYlwaNakJNhraRNLUhDRVa2P9sRTK0ipCqgTa0CrFJgRtCBeyLiBUKEFZd2U1SAWr6wLf/nEP7eVyf+3MmTlzed6vZDJnzo8533tyP/c5c54590lVIemF7+eGLkDSdBh2qRGGXWqEYZcaYdilRhh2qRGGXc+T5OEkvzd0HeqXYdfUJXl1kq8m+e8kDyY5d+iaWmDYNTFJfn6FeTcA/wxsAbYD/5DkVVMurzmGfQPpTq8/kGR31ypem+TwJO9MctuSdSvJr3bTVyX5VJJ/SfJkkq8lOTbJXyX5YZL7k7xuye5+M8k3u+V/l+TwRe/9+0l2JXk8ydeTvHZJjR9Mshv48TKB/zXg5cDHq+rpqvoq8DXggh4PlZZh2Dee84AzgROB1wLvPITt/hQ4CjgA/AdwV/f6H4GPLVn/D4C3ACcBr+q2JcnrgSuBi4CXAZ8FbkyyedG25wNvA36xqp7q/tB8qluWZWoL8Jp1/hwakWHfeP66qvZW1WPAPwHb1rnd9VV1Z1X9FLge+GlVXV1VTwPXAktb9k9W1SPdfj7CQoAB3g18tqpu71rmnSz88XjjkhofqaqfAFTVe6rqPd2y+4H9wB8n2ZTkDOB3gBcfykHQoTPsG8/3Fk3/D/CSdW736KLpnyzzeun7PLJo+tssnHoDvBK4uDuFfzzJ48Dxi5Yv3fY5quogcA4LLf/3gIuB64A96/w5NKLnXUDRhvRjFrWMSY7t4T2PXzT9y8DebvoR4CNV9ZFVtl31Vsqq2s1Caw5Akq8DO0esU+tky/7C8A3g15Ns6y6kXdrDe743ySuSbAEuYeFUH+By4A+TvCELjkjytiQvXe8bJ3ltd2HxxUk+AGwFruqhZq3CsL8AVNW3gD8D/g14ALht9S3W5fPAV4CHusefd/uaZ+Fz+yeBHwIPssZFwiSfSfKZRbMuAPax8Nn9dODNVXWgh5q1ivjPK6Q22LJLjTDsUiMMu9QIwy41Yqr97C/K5jqcI6a5S6kpP+XH/KwOLPeV5PHCnuRM4BPAYcDfVtVlq61/OEfwhpw+zi4lreL2umXFZSOfxic5DPgb4K3AKcD5SU4Z9f0kTdY4n9lPBR6sqoeq6mfAF4Cz+ylLUt/GCftxPPeGhz3dvOdIsj3JfJL5g/glKWko44R9uYsAz/s6XlXtqKq5qprbxOZlNpE0DeOEfQ/PvTPqFfz/nVGSZsw4Yb8DODnJiUleBLwDuLGfsiT1beSut+7fDb0P+DILXW9XVtW9vVUmqVdj9bNX1U3ATT3VImmC/Lqs1AjDLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSIwy71IixRnHVdHx5766hS5iIt7x829AlNGWssCd5GHgCeBp4qqrm+ihKUv/6aNl/t6p+0MP7SJogP7NLjRg37AV8JcmdSbYvt0KS7Unmk8wf5MCYu5M0qnFP40+rqr1JjgZuTnJ/Vd26eIWq2gHsAPiFbKkx9ydpRGO17FW1t3veD1wPnNpHUZL6N3LYkxyR5KXPTgNnAPf0VZikfo1zGn8McH2SZ9/n81X1r71U1ZgXaj/6Wtb6ue2H79fIYa+qh4Df6LEWSRNk15vUCMMuNcKwS40w7FIjDLvUCG9xbdxa3Vutdgu+ENmyS40w7FIjDLvUCMMuNcKwS40w7FIjDLvUCPvZp2DIvupxbxMdsh/eW2D7ZcsuNcKwS40w7FIjDLvUCMMuNcKwS40w7FIj7GfvwdD3fA/Z3+z98BuHLbvUCMMuNcKwS40w7FIjDLvUCMMuNcKwS40w7FIj1gx7kiuT7E9yz6J5W5LcnOSB7vnIyZYpaVzradmvAs5cMu9DwC1VdTJwS/da0gxbM+xVdSvw2JLZZwM7u+mdwDk91yWpZ6N+Zj+mqvYBdM9Hr7Riku1J5pPMH+TAiLuTNK6JX6Crqh1VNVdVc5vYPOndSVrBqGF/NMlWgO55f38lSZqEUcN+I3BhN30hcEM/5UialDXvZ09yDfAm4Kgke4APA5cB1yV5F/Ad4O2TLLJ1/n909WHNsFfV+SssOr3nWiRNkN+gkxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxrhkM3asMYZDrrF24Zt2aVGGHapEYZdaoRhlxph2KVGGHapEYZdaoT97Jqocfqzx+lH1/PZskuNMOxSIwy71AjDLjXCsEuNMOxSIwy71Aj72TeAtfqbN+q92fajT9eaLXuSK5PsT3LPonmXJvlukl3d46zJlilpXOs5jb8KOHOZ+R+vqm3d46Z+y5LUtzXDXlW3Ao9NoRZJEzTOBbr3JdndneYfudJKSbYnmU8yf5ADY+xO0jhGDfungZOAbcA+4KMrrVhVO6pqrqrmNrF5xN1JGtdIYa+qR6vq6ap6BrgcOLXfsiT1baSwJ9m66OW5wD0rrStpNqzZz57kGuBNwFFJ9gAfBt6UZBtQwMPARROsUVIP1gx7VZ2/zOwrJlCLpAny67JSIwy71AjDLjXCsEuNMOxSI7zFtQdr3WI66Vs5V3v/oW9/HfI21qF/9lljyy41wrBLjTDsUiMMu9QIwy41wrBLjTDsUiPsZ5+CIfvh/XfNepYtu9QIwy41wrBLjTDsUiMMu9QIwy41wrBLjbCffQYMfT+82mDLLjXCsEuNMOxSIwy71AjDLjXCsEuNMOxSI9YzZPPxwNXAscAzwI6q+kSSLcC1wAksDNt8XlX9cHKltst+ePVhPS37U8DFVfVq4I3Ae5OcAnwIuKWqTgZu6V5LmlFrhr2q9lXVXd30E8B9wHHA2cDObrWdwDmTKlLS+A7pM3uSE4DXAbcDx1TVPlj4gwAc3Xdxkvqz7rAneQnwReD9VfWjQ9hue5L5JPMHOTBKjZJ6sK6wJ9nEQtA/V1Vf6mY/mmRrt3wrsH+5batqR1XNVdXcJjb3UbOkEawZ9iQBrgDuq6qPLVp0I3BhN30hcEP/5Unqy3pucT0NuAC4O8mzfTyXAJcB1yV5F/Ad4O2TKVFr2ahDE9tlOF1rhr2qbgOywuLT+y1H0qT4DTqpEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHapEYZdaoRhlxph2KVGGHa
pEYZdasSaQzYnOR64GjgWeAbYUVWfSHIp8G7g+92ql1TVTZMqVC88G3Vc+Y1qzbADTwEXV9VdSV4K3Jnk5m7Zx6vqLydXnqS+rBn2qtoH7Oumn0hyH3DcpAuT1K9D+sye5ATgdcDt3az3Jdmd5MokR66wzfYk80nmD3JgrGIljW7dYU/yEuCLwPur6kfAp4GTgG0stPwfXW67qtpRVXNVNbeJzT2ULGkU6wp7kk0sBP1zVfUlgKp6tKqerqpngMuBUydXpqRxrRn2JAGuAO6rqo8tmr910WrnAvf0X56kvqznavxpwAXA3Ul2dfMuAc5Psg0o4GHgoolUKKkX67kafxuQZRbZpy5tIH6DTmqEYZcaYdilRhh2qRGGXWqEYZcaYdilRhh2qRGGXWqEYZcaYdilRhh2qRGGXWqEYZcakaqa3s6S7wPfXjTrKOAHUyvg0MxqbbNaF1jbqPqs7ZVV9UvLLZhq2J+382S+quYGK2AVs1rbrNYF1jaqadXmabzUCMMuNWLosO8YeP+rmdXaZrUusLZRTaW2QT+zS5qeoVt2SVNi2KVGDBL2JGcm+c8kDyb50BA1rCTJw0nuTrIryfzAtVyZZH+SexbN25Lk5iQPdM/LjrE3UG2XJvlud+x2JTlroNqOT/LvSe5Lcm+SP+rmD3rsVqlrKsdt6p/ZkxwGfAt4M7AHuAM4v6q+OdVCVpDkYWCuqgb/AkaS3waeBK6uqtd08/4CeKyqLuv+UB5ZVR+ckdouBZ4cehjvbrSirYuHGQfOAd7JgMdulbrOYwrHbYiW/VTgwap6qKp+BnwBOHuAOmZeVd0KPLZk9tnAzm56Jwu/LFO3Qm0zoar2VdVd3fQTwLPDjA967FapayqGCPtxwCOLXu9htsZ7L+ArSe5Msn3oYpZxTFXtg4VfHuDogetZas1hvKdpyTDjM3PsRhn+fFxDhH25oaRmqf/vtKp6PfBW4L3d6arWZ13DeE/LMsOMz4RRhz8f1xBh3wMcv+j1K4C9A9SxrKra2z3vB65n9oaifvTZEXS75/0D1/N/ZmkY7+WGGWcGjt2Qw58PEfY7gJOTnJjkRcA7gBsHqON5khzRXTghyRHAGczeUNQ3Ahd20xcCNwxYy3PMyjDeKw0zzsDHbvDhz6tq6g/gLBauyP8X8CdD1LBCXb8CfKN73Dt0bcA1LJzWHWThjOhdwMuAW4AHuuctM1Tb3wN3A7tZCNbWgWr7LRY+Gu4GdnWPs4Y+dqvUNZXj5tdlpUb4DTqpEYZdaoRhlxph2KVGGHapEYZdaoRhlxrxv3jNRdG9OXAOAAAAAElFTkSuQmCC\n",
"text/plain": [
"