Unverified commit 72375b21 authored by QuLeaf, committed by GitHub

update the contents to adapt to paddle-quantum v2.3.0 (#5712)

Parent ae6828f4
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "6f0a162f",
"metadata": {},
"source": [
"## 1. VSQL Introduction\n",
"\n",
"Variational Shadow Quantum Learning (VSQL) is a hybird quantum-classical framework for supervised quantum learning, which utilizes parameterized quantum circuits and classical shadows. Unlike commonly used variational quantum algorithms, the VSQL method extracts \"local\" features from the subspace instead of the whole Hilbert space."
"Variational Shadow Quantum Learning (VSQL) is a hybrid quantum-classical framework for supervised quantum learning, which utilizes parameterized quantum circuits and classical shadows. Unlike commonly used variational quantum algorithms, the VSQL method extracts \"local\" features from the subspace instead of the whole Hilbert space."
]
},
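{
"attachments": {},
"cell_type": "markdown",
"id": "local-feature-note",
"metadata": {},
"source": [
"To make the idea of extracting \"local\" features concrete, the minimal sketch below (plain Python, purely illustrative) lists the qubit windows that a shadow circuit acting on 2 adjacent qubits would slide over on a 10-qubit input. Each window contributes one local feature, so the shadow circuit never needs to act on the whole 10-qubit Hilbert space at once."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "local-feature-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: enumerate the qubit windows that a 2-local shadow\n",
"# circuit slides over on a 10-qubit register (one local feature per window).\n",
"num_qubits = 10   # number of qubits encoding the input image\n",
"num_shadow = 2    # width of the local shadow circuit\n",
"windows = [list(range(i, i + num_shadow)) for i in range(num_qubits - num_shadow + 1)]\n",
"print(windows)       # [[0, 1], [1, 2], ..., [8, 9]]\n",
"print(len(windows))  # 9 local features per input state"
]
},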
{
"attachments": {},
"cell_type": "markdown",
"id": "99c07da5",
"metadata": {},
@@ -65,12 +67,12 @@
"![2-local](https://ai-studio-static-online.cdn.bcebos.com/0c1035262cb64f61bd3cc87dbf53253aa6a7ecc170634c4db8dd71d576a9409c \"The 2-local shadow circuit design\")\n",
"<div style=\"text-align:center\">The 2-local shadow circuit design</div>\n",
"\n",
"The circuit layer in the dashed box is repeated for $D$ times to increase the expressive power of the quantum circuit. The structure of the circuit is not unique. You can try to design your own circuit."
"The circuit layer in the dashed box is repeated for $D$ times to increase the expressive power of the quantum circuit. The structure of the circuit is not unique. You can try to design your own circuit.\n"
]
},
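{
"attachments": {},
"cell_type": "markdown",
"id": "shadow-block-note",
"metadata": {},
"source": [
"As a minimal illustration of this structure (a sketch only, which may differ from the exact ansatz implemented inside `paddle_quantum.qml.vsql`), the cell below builds one possible 2-local shadow block with Paddle Quantum's `Circuit` ansatz: $D$ repetitions of parameterized $R_x$ and $R_y$ rotations followed by a CNOT on the two shadow qubits. It assumes Paddle Quantum has already been installed; the installation cell appears later in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "shadow-block-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal 2-local shadow block; the exact gate sequence used by\n",
"# paddle_quantum.qml.vsql may differ, this only mirrors the figure above.\n",
"from paddle_quantum.ansatz import Circuit\n",
"\n",
"num_shadow = 2  # number of qubits the shadow circuit acts on\n",
"depth = 1       # D: how many times the dashed-box layer is repeated\n",
"\n",
"cir = Circuit(num_shadow)\n",
"for _ in range(depth):\n",
"    cir.rx(qubits_idx='full')     # parameterized Rx on each shadow qubit\n",
"    cir.ry(qubits_idx='full')     # parameterized Ry on each shadow qubit\n",
"    cir.cnot(qubits_idx='cycle')  # entangle the two shadow qubits\n",
"print(cir)"
]
},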
{
"cell_type": "markdown",
"id": "bbec4432",
"id": "cf1da740",
"metadata": {},
"source": [
"## 3. Model Performance\n",
@@ -100,125 +102,71 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "77177110",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2022-11-24 13:45:54-- https://release-data.cdn.bcebos.com/PaddleQuantum/vsql.pdparams\n",
"Resolving release-data.cdn.bcebos.com (release-data.cdn.bcebos.com)... 222.35.73.1\n",
"Connecting to release-data.cdn.bcebos.com (release-data.cdn.bcebos.com)|222.35.73.1|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 857 [application/octet-stream]\n",
"Saving to: ‘vsql.pdparams.2’\n",
"\n",
"vsql.pdparams.2 100%[===================>] 857 --.-KB/s in 0s \n",
"\n",
"2022-11-24 13:45:54 (817 MB/s) - ‘vsql.pdparams.2’ saved [857/857]\n",
"\n"
]
}
],
"outputs": [],
"source": [
"# Install the paddle quantum\n",
"%pip install paddle-quantum\n",
"# Download the pretrained model\n",
"!wget https://release-data.cdn.bcebos.com/PaddleQuantum/vsql.pdparams"
"%pip install --user paddle-quantum\n",
"# Download the pre-trained model\n",
"!wget https://release-data.cdn.bcebos.com/PaddleQuantum/vsql.pdparams -O vsql.pdparams"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "4843c62f",
"id": "2a8ef70f",
"metadata": {},
"source": [
"Next, the model can be loaded and tested."
"After installing Paddle Quantum successfully, let's load the VSQL model and the images to be predicted."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "86d4405c",
"id": "7d88445e",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/wangzihe/opt/anaconda3/envs/py37/lib/python3.7/site-packages/scipy/linalg/__init__.py:212: DeprecationWarning: The module numpy.dual is deprecated. Instead of using dual, use the functions directly from numpy or scipy.\n",
" from numpy.dual import register_func\n",
"/Users/wangzihe/opt/anaconda3/envs/py37/lib/python3.7/site-packages/scipy/sparse/sputils.py:16: DeprecationWarning: `np.typeDict` is a deprecated alias for `np.sctypeDict`.\n",
" supported_dtypes = [np.typeDict[x] for x in supported_dtypes]\n",
"/Users/wangzihe/opt/anaconda3/envs/py37/lib/python3.7/site-packages/scipy/special/orthogonal.py:81: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.\n",
"Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n",
" from numpy import (exp, inf, pi, sqrt, floor, sin, cos, around, int,\n",
"/Users/wangzihe/opt/anaconda3/envs/py37/lib/python3.7/site-packages/scipy/io/matlab/mio5.py:98: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.\n",
"Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n",
" from .mio5_utils import VarReader5\n"
]
}
],
"outputs": [],
"source": [
"# Import the required packages\n",
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import numpy as np\n",
"import paddle\n",
"import paddle_quantum as pq\n",
"import toml\n",
"import matplotlib.pyplot as plt\n",
"from paddle_quantum.qml.vsql import VSQL\n",
"\n",
"# Set model parameters\n",
"num_qubits = 10\n",
"num_shadow = 2\n",
"classes = [0, 1]\n",
"num_classes = len(classes)\n",
"depth = 1\n",
"\n",
"# Load the trained model\n",
"model = VSQL(\n",
" num_qubits=num_qubits,\n",
" num_shadow=num_shadow,\n",
" num_classes=num_classes,\n",
" depth=depth,\n",
")\n",
"state_dict = paddle.load('./vsql.pdparams')\n",
"model.set_state_dict(state_dict)"
"from paddle_quantum.qml.vsql import inference"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6676b204",
"id": "d12ea921",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2022-11-24 13:46:01-- https://ai-studio-static-online.cdn.bcebos.com/088dc9dbabf349c88d029dfd2e07827aa6e41ba958c5434bbd96bc167fc65347\n",
"Resolving ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)... 222.35.73.1\n",
"Connecting to ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)|222.35.73.1|:443... connected.\n",
"--2023-01-18 15:24:03-- https://ai-studio-static-online.cdn.bcebos.com/088dc9dbabf349c88d029dfd2e07827aa6e41ba958c5434bbd96bc167fc65347\n",
"Resolving ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)... 119.167.254.35, 153.35.89.225, 211.97.83.35, ...\n",
"Connecting to ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)|119.167.254.35|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 290 [image/png]\n",
"Saving to: ‘data-0.png’\n",
"Saving to: ‘data_0.png’\n",
"\n",
"data-0.png 100%[===================>] 290 --.-KB/s in 0s \n",
"data_0.png 100%[===================>] 290 --.-KB/s in 0s \n",
"\n",
"2022-11-24 13:46:02 (138 MB/s) - ‘data-0.png’ saved [290/290]\n",
"2023-01-18 15:24:03 (138 MB/s) - ‘data_0.png’ saved [290/290]\n",
"\n"
]
},
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fa1199f8710>"
"<matplotlib.image.AxesImage at 0x7f81fc0a0fd0>"
]
},
"execution_count": 3,
@@ -238,41 +186,125 @@
],
"source": [
"# Load handwritten digit 0\n",
"!wget https://ai-studio-static-online.cdn.bcebos.com/088dc9dbabf349c88d029dfd2e07827aa6e41ba958c5434bbd96bc167fc65347 -O data-0.png\n",
"image0 = plt.imread('data-0.png')\n",
"!wget https://ai-studio-static-online.cdn.bcebos.com/088dc9dbabf349c88d029dfd2e07827aa6e41ba958c5434bbd96bc167fc65347 -O data_0.png\n",
"image0 = plt.imread('data_0.png')\n",
"plt.imshow(image0)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "34dcf660",
"metadata": {},
"source": [
"Next, let's configure the model parameters."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f637d0ca",
"id": "52d92fdf",
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# The overall configuration file of the model.\n",
"# Enter the current task, which can be 'train' or 'test', representing training and prediction respectively. Here we use test, indicating that we want to make a prediction.\n",
"task = 'test'\n",
"# The file path of the image to be predicted.\n",
"image_path = 'data_0.png'\n",
"# Whether the image path above is a folder or not. For folder paths, we will predict all image files inside the folder. This way you can test multiple images at once.\n",
"is_dir = false\n",
"# The file path of the trained model parameter file.\n",
"model_path = 'vsql.pdparams'\n",
"# The number of qubits that the quantum circuit contains.\n",
"num_qubits = 10\n",
"# The number of qubits that the shadow circuit contains.\n",
"num_shadow = 2\n",
"# Circuit depth.\n",
"depth = 1\n",
"# The class to be predicted by the model. Here, 0 and 1 are classified.\n",
"classes = [0, 1]\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "35514ce7",
"metadata": {},
"source": [
"Then, we use the VSQL model to make predictions."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ca32fb07",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"For the input image, the model has 89.22% confidence that it is 0, and 10.78% confidence that it is 1.\n"
]
}
],
"source": [
"config = toml.loads(test_toml)\n",
"task = config.pop('task')\n",
"prediction, prob = inference(**config)\n",
"prob = prob[0]\n",
"msg = 'For the input image, the model has'\n",
"for idx, item in enumerate(prob):\n",
" if idx == len(prob) - 1:\n",
" msg += 'and'\n",
" label = config['classes'][idx]\n",
" msg += f' {item:3.2%} confidence that it is {label:d}'\n",
" msg += '.' if idx == len(prob) - 1 else ', '\n",
"print(msg)"
]
},
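{
"attachments": {},
"cell_type": "markdown",
"id": "batch-inference-note",
"metadata": {},
"source": [
"As the `is_dir` comment in the configuration suggests, the same `inference` entry point can also score every image inside a folder in one call. The cell below is a minimal sketch of such a batch run; it assumes a local folder named `images` exists and that `inference` returns one prediction and one probability row per image, as in the single-image call above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "batch-inference-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Sketch of batch prediction: 'images' is an assumed local folder of digit images;\n",
"# the configuration keys mirror the single-image test configuration above.\n",
"batch_toml = r\"\"\"\n",
"task = 'test'\n",
"# A folder this time; every image file inside it will be predicted.\n",
"image_path = 'images'\n",
"is_dir = true\n",
"model_path = 'vsql.pdparams'\n",
"num_qubits = 10\n",
"num_shadow = 2\n",
"depth = 1\n",
"classes = [0, 1]\n",
"\"\"\"\n",
"\n",
"config = toml.loads(batch_toml)\n",
"config.pop('task')\n",
"predictions, probs = inference(**config)\n",
"# Assumption: one entry per image, in the folder's file order.\n",
"for pred, prob in zip(predictions, probs):\n",
"    print(pred, prob)"
]
},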
{
"attachments": {},
"cell_type": "markdown",
"id": "390302db",
"metadata": {},
"source": [
"Next, let's test another image."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4b0b6abe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2022-11-24 13:46:03-- https://ai-studio-static-online.cdn.bcebos.com/c755f723af3d4a1c8f113f8ac3bd365406decd1be70944b7b7b9d41413e8bc7a\n",
"Resolving ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)... 222.35.73.1\n",
"Connecting to ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)|222.35.73.1|:443... connected.\n",
"--2023-01-18 15:24:12-- https://ai-studio-static-online.cdn.bcebos.com/c755f723af3d4a1c8f113f8ac3bd365406decd1be70944b7b7b9d41413e8bc7a\n",
"Resolving ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)... 119.176.25.35, 153.35.89.225, 211.97.83.35, ...\n",
"Connecting to ai-studio-static-online.cdn.bcebos.com (ai-studio-static-online.cdn.bcebos.com)|119.176.25.35|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 173 [image/png]\n",
"Saving to: ‘data-1.png’\n",
"Saving to: ‘data_1.png’\n",
"\n",
"data-1.png 100%[===================>] 173 --.-KB/s in 0s \n",
"data_1.png 100%[===================>] 173 --.-KB/s in 0s \n",
"\n",
"2022-11-24 13:46:03 (165 MB/s) - ‘data-1.png’ saved [173/173]\n",
"2023-01-18 15:24:12 (3.38 KB/s) - ‘data_1.png’ saved [173/173]\n",
"\n"
]
},
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fa1008969d0>"
"<matplotlib.image.AxesImage at 0x7f81dd91eb50>"
]
},
"execution_count": 4,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
@@ -289,66 +321,73 @@
],
"source": [
"# Load handwritten digit 1\n",
"!wget https://ai-studio-static-online.cdn.bcebos.com/c755f723af3d4a1c8f113f8ac3bd365406decd1be70944b7b7b9d41413e8bc7a -O data-1.png\n",
"image1 = plt.imread('data-1.png')\n",
"!wget https://ai-studio-static-online.cdn.bcebos.com/c755f723af3d4a1c8f113f8ac3bd365406decd1be70944b7b7b9d41413e8bc7a -O data_1.png\n",
"image1 = plt.imread('data_1.png')\n",
"plt.imshow(image1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e40d847a",
"execution_count": 7,
"id": "a5bcba99",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"For handwritten digits 0, the model has 89.22% confidence that it is 0 and 10.78% confidence that it is 1.\n",
"For handwritten digits 1, the model has 18.29% confidence that it is 0 and 81.71% confidence that it is 1.\n"
"For the input image, the model has 18.29% confidence that it is 0, and 81.71% confidence that it is 1.\n"
]
}
],
"source": [
"# Encoding images into quantum states\n",
"test_data = [np.array(image0).flatten(), np.array(image1).flatten()]\n",
"test_data = [np.pad(datum, pad_width=(0, 2 ** num_qubits - datum.size)) for datum in test_data]\n",
"test_data = [paddle.to_tensor(datum / np.linalg.norm(datum), dtype=pq.get_dtype()) for datum in test_data]\n",
"# Use the model to make predictions and get the corresponding probability\n",
"test_output = model(test_data)\n",
"test_prob = paddle.nn.functional.softmax(test_output)\n",
"print(\n",
" f\"For handwritten digits 0, \"\n",
" f\"the model has {test_prob[0][0].item():3.2%} confidence that it is 0 \"\n",
" f\"and {test_prob[0][1].item():3.2%} confidence that it is 1.\"\n",
")\n",
"print(\n",
" f\"For handwritten digits 1, \"\n",
" f\"the model has {test_prob[1][0].item():3.2%} confidence that it is 0 \"\n",
" f\"and {test_prob[1][1].item():3.2%} confidence that it is 1.\"\n",
")"
"test_toml = r\"\"\"\n",
"# The overall configuration file of the model.\n",
"# Enter the current task, which can be 'train' or 'test', representing training and prediction respectively. Here we use test, indicating that we want to make a prediction.\n",
"task = 'test'\n",
"# The file path of the image to be predicted.\n",
"image_path = 'data_1.png'\n",
"# Whether the image path above is a folder or not. For folder paths, we will predict all image files inside the folder. This way you can test multiple images at once.\n",
"is_dir = false\n",
"# The file path of the trained model parameter file.\n",
"model_path = 'vsql.pdparams'\n",
"# The number of qubits that the quantum circuit contains.\n",
"num_qubits = 10\n",
"# The number of qubits that the shadow circuit contains.\n",
"num_shadow = 2\n",
"# Circuit depth.\n",
"depth = 1\n",
"# The class to be predicted by the model. Here, 0 and 1 are classified.\n",
"classes = [0, 1]\n",
"\"\"\"\n",
"\n",
"config = toml.loads(test_toml)\n",
"task = config.pop('task')\n",
"prediction, prob = inference(**config)\n",
"prob = prob[0]\n",
"msg = 'For the input image, the model has'\n",
"for idx, item in enumerate(prob):\n",
" if idx == len(prob) - 1:\n",
" msg += 'and'\n",
" label = config['classes'][idx]\n",
" msg += f' {item:3.2%} confidence that it is {label:d}'\n",
" msg += '.' if idx == len(prob) - 1 else ', '\n",
"print(msg)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3990efae",
"metadata": {},
"source": [
"## 5. Note\n",
"\n",
"The model we provide is a binary classification model that can only be used to distinguish handwritten digits 0 and 1. For other classification tasks, it needs to be retrained."
"The model we provide is a binary classification model that can only be used to distinguish handwritten digits 0 and 1. For other classification tasks, it needs to be retrained.\n",
"\n",
"A more detailed description of the use can be found at https://github.com/PaddlePaddle/Quantum/blob/master/applications/handwritten_digits_classification/introduction_en.ipynb .\n",
"\n",
"A detailed description of the VSQL model can be found at https://github.com/PaddlePaddle/Quantum/blob/master/tutorials/machine_learning/VSQL_EN.ipynb ."
]
},
{
@@ -374,7 +413,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.15 ('py37')",
"display_name": "pq-dev",
"language": "python",
"name": "python3"
},
@@ -388,7 +427,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15"
"version": "3.8.15 (default, Nov 10 2022, 13:17:42) \n[Clang 14.0.6 ]"
},
"toc": {
"base_numbering": 1,
@@ -405,7 +444,7 @@
},
"vscode": {
"interpreter": {
"hash": "49b49097121cb1ab3a8a640b71467d7eda4aacc01fc9ff84d52fcb3bd4007bf1"
"hash": "5fea01cac43c34394d065c23bb8c1e536fdb97a765a18633fd0c4eb359001810"
}
}
},