{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 医学图像分类简介\n",
    "\n",
    "*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
    "\n",
    "医学图像分类(Medical image classification)是计算机辅助诊断系统的关键技术。医学图像分类问题主要是如何从图像中提取特征并进行分类,从而识别和了解人体的哪些部位受到特定疾病的影响。在这里我们主要使用量子神经网络对公开数据集 MedMNIST 中的胸腔数据进行分类。其中数据形式如下图所示:\n",
    "\n",
    "<img src=\"./med_image_example.png\" width=\"20%\" height=\"20%\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 使用 QNNMIC 模型进行医学图像分类\n",
    "\n",
    "### QNNMIC 模型简介\n",
    "QNNMIC 模型是一个可以用于医学图像分类的量子机器学习模型(Quantum Machine Learning,QML)。我们具体称其为一种量子神经网络 (Quantum Neural Network, QNN),它结合了参数化量子电路(Parameterized Quantum Circuit, PQC)和经典神经网络。对于医学图像数据,QNNMIC 可以达到 85% 以上的分类准确率。模型主要分为量子和经典两部分,结构图如下:\n",
    "\n",
    "<img src=\"./qnnmic_model_cn.png\" width=\"60%\" height=\"60%\"/>\n",
    "\n",
    "\n",
    "注:\n",
    "- 通常我们使用主成分分析将图片数据进行降维处理,使其更容易通过编码电路将经典数据编码为量子态。\n",
    "- 参数化电路的作用是特征提取,其电路参数可以在训练中调整。\n",
    "- 量子测量由一组测量算子表示,是将量子态转化为经典数据的过程,我们可以对得到的经典数据做进一步处理。\n"
   ]
  },
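  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The preprocessing note above can be made concrete with a short sketch. This is a minimal illustration and not the code QNNMIC itself uses: it assumes the images are available as a NumPy array of 28x28 grayscale MedMNIST pictures and uses scikit-learn's PCA to compress each flattened image into a short feature vector that an encoding circuit could then load into a quantum state.\n",
    "\n",
    "```python\n",
    "# Minimal sketch of the PCA preprocessing (assumption: images has shape (N, 28, 28)).\n",
    "import numpy as np\n",
    "from sklearn.decomposition import PCA\n",
    "\n",
    "def preprocess(images: np.ndarray, num_features: int = 8) -> np.ndarray:\n",
    "    flat = images.reshape(len(images), -1) / 255.0  # flatten and rescale to [0, 1]\n",
    "    reduced = PCA(n_components=num_features).fit_transform(flat)\n",
    "    # L2-normalize each feature vector so it can be encoded into a quantum state.\n",
    "    norms = np.linalg.norm(reduced, axis=1, keepdims=True)\n",
    "    return reduced / np.where(norms == 0, 1, norms)\n",
    "```"
   ]
  },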
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 如何使用\n",
    "\n",
    "### 使用模型进行预测\n",
    "\n",
    "这里,我们已经给出了一个训练好的模型,可以直接用于医学图片的预测。只需要在 `test.toml` 这个配置文件中进行对应的配置,然后输入命令 `python qnn_medical_image.py --config test.toml` 即可使用训练好的医学图片分类模型对输入的图片进行测试。\n",
    "\n",
    "### 在线演示\n",
    "\n",
    "这里,我们给出一个在线演示的版本,可以在线进行测试。首先定义配置文件的内容对测试集中图片进行预测:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import toml\n",
    "\n",
    "test_toml = r\"\"\"\n",
    "# 模型的整体配置文件。\n",
    "# 图片的文件路径。\n",
    "image_path = 'pneumoniamnist'\n",
    "\n",
    "# 训练集中的数据个数,默认值为 -1 即使用全部数据。\n",
    "num_samples = 20\n",
    "\n",
    "# 训练好的模型参数文件的文件路径。\n",
    "model_path = 'qnnmic.pdparams'\n",
    "\n",
    "# 量子电路所包含的量子比特的数量。\n",
    "num_qubits = [8, 8]\n",
    "\n",
    "# 每一层量子电路中的电路深度。\n",
    "num_depths = [2, 2]\n",
    "\n",
    "# 量子电路中可观测量的设置。\n",
    "observables = [['Z0', 'Z1', 'Z2', 'Z3'], ['X0', 'X1', 'X2', 'X3']]\n",
    "\"\"\"\n",
    "\n",
    "config = toml.loads(test_toml)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "图片的预测结果分别为 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0\n",
      "图片的实际标签分别为 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0\n"
     ]
    }
   ],
   "source": [
    "from paddle_quantum.qml.qnnmic import inference\n",
    "\n",
    "prediction, prob, label = inference(**config)\n",
    "print(f\"图片的预测结果分别为 {str(prediction)[1:-1]}\")\n",
    "print(f\"图片的实际标签分别为 {str(label)[1:-1]}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "其中标签 0 代表肺部异常,标签 1 代表正常。"
   ]
  },
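  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check of the run above, the predictions can be compared against the true labels. This is a hedged sketch that only assumes `prediction` and `label` are equal-length sequences of 0/1 values, as returned by the `inference` call:\n",
    "\n",
    "```python\n",
    "# Count how many of the predicted labels agree with the true labels.\n",
    "num_correct = sum(int(p == t) for p, t in zip(prediction, label))\n",
    "print(f\"Accuracy on these {len(label)} images: {num_correct / len(label):.0%}\")\n",
    "```"
   ]
  },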
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在 `test_toml` 配置文件中:\n",
    "- `model_path`: 为训练好的模型,这里固定为 `qnnmic.pdparams`;\n",
    "- `num_qubits`、`num_depths`、`observables` 三个参数应与训练好的模型 ``qnnmic.pdparams`` 相匹配。`num_qubits = [8,8]` 表示量子部分一共两层电路;每层电路为 8 的量子比特;`num_depths = [2,2]` 表示每层参数化电路深度为 2;`observables` 表示每层测量算子的具体形式。"
   ]
  },
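  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the three entries describe the circuit layer by layer, a small sanity check before calling `inference` is to verify that they all have the same length. This is a hypothetical helper, not part of the shipped model code:\n",
    "\n",
    "```python\n",
    "# Hypothetical check: the per-layer settings must describe the same number of layers.\n",
    "layer_counts = {len(config[\"num_qubits\"]), len(config[\"num_depths\"]), len(config[\"observables\"])}\n",
    "assert len(layer_counts) == 1, \"num_qubits, num_depths and observables must have equal length\"\n",
    "```"
   ]
  },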
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "对于数据集中的某张肺部异常的图片:\n",
    "\n",
    "<img src=\"./medical_image/image_label_0.png\" width=\"20%\" height=\"20%\"/>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "对于上述输入的图片,模型有 98.30% 的置信度检测出肺部异常。\n"
     ]
    }
   ],
   "source": [
    "# 使用模型进行预测并得到对应概率值\n",
    "msg = f'对于上述输入的图片,模型有 {prob[10][1]:3.2%} 的置信度检测出肺部异常。'\n",
    "print(msg)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 注意事项\n",
    "\n",
    "我们通常考虑调整 `num_qubits`,`num_depths`,`observables` 三个主要超参数,对模型的影响较大。"
   ]
  },
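  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration only, a hypothetical variant of these hyperparameters might look as follows. It merely shows the format of the three fields; the pretrained `qnnmic.pdparams` file cannot be loaded with a mismatched configuration, so such a variant would require retraining the model.\n",
    "\n",
    "```python\n",
    "# Hypothetical hyperparameter variant: deeper layers and different observables.\n",
    "variant_toml = r\"\"\"\n",
    "num_qubits = [8, 8]\n",
    "num_depths = [3, 3]\n",
    "observables = [['Z0', 'Z1', 'Z2', 'Z3'], ['Y0', 'Y1', 'Y2', 'Y3']]\n",
    "\"\"\"\n",
    "variant_config = toml.loads(variant_toml)\n",
    "```"
   ]
  },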
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 引用信息\n",
    "\n",
    "```\n",
    "@article{medmnistv2,\n",
    "    title={MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification},\n",
    "    author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing},\n",
    "    journal={arXiv preprint arXiv:2110.14795},\n",
    "    year={2021}\n",
    "}\n",
    "```\n",
    "\n",
    "## 参考文献\n",
    "\\[1\\] Yang, Jiancheng, et al. \"Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification.\" arXiv preprint arXiv:2110.14795 (2021)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "modellib",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.15 (default, Nov 24 2022, 18:44:54) [MSC v.1916 64 bit (AMD64)]"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "dfa0523b1e359b8fd3ea126fa0459d0c86d49956d91b464930b80cba21582eac"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}