{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Use Paddle Quantum on GPU\n",
    "\n",
    "<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "\n",
    "> Note that this tutorial is time-sensitive. And different computers will have individual differences. This tutorial does not guarantee that all computers can install it successfully.\n",
    "\n",
    "In deep learning, people usually use GPU for neural network model training because GPU has significant advantages in floating-point operations compared with CPU. Therefore, using GPU to train neural network models has gradually become a common choice. In Paddle Quantum, our quantum states and quantum gates are also represented by complex numbers based on floating-point numbers. If our model can be deployed on GPU for training, it will also significantly increase the training speed."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## GPU selection\n",
    "\n",
    "Here, we choose Nvidia's hardware devices, and its CUDA (Compute Unified Device Architecture) supports deep learning framework better. PaddlePaddle can also be easily installed on CUDA.\n",
    "\n",
    "## Configure CUDA environment\n",
    "\n",
    "### Install CUDA\n",
    "\n",
    "Here, we introduce how to configure the CUDA environment in Windows 10 on the x64 platform. First, check on [CUDA GPUs | NVIDIA Developer](https://developer.nvidia.com/cuda-gpus) to see if your GPU support the CUDA environment. Then, download the latest version of your graphics card driver from [NVIDIA Driver Download](https://www.nvidia.cn/Download/index.aspx?lang=cn) and install it on your computer.\n",
    "\n",
    "In [PaddlePaddle Installation Steps](https://www.paddlepaddle.org.cn/install/quick), we found that **Paddle Paddle only supports CUDA CUDA 9.0/10.0/10.1/10.2/11.0 single card mode under Windows**, so we install CUDA10.2 here. Find the download link of CUDA 10.2 in [CUDA Toolkit Archive | NVIDIA Developer](https://developer.nvidia.com/cuda-toolkit-archive): [CUDA Toolkit 10.2 Archive | NVIDIA Developer](https://developer.nvidia.com/cuda-10.2-download-archive). After downloading CUDA, run the installation.\n",
    "\n",
    "During the installation process, select **Custom Installation** in the CUDA options, check all the boxes except for Visual Studio Integration (unless you are familiar with it). Then check CUDA option only. Then select the default location for the installation location (please pay attention to the installation location of your CUDA, you need to set environment variables later), and wait for the installation to complete.\n",
    "\n",
    "After the installation is complete, open the Windows command line and enter `nvcc -V`. If you see the version information, the CUDA installation is successful.\n",
    "\n",
    "### Install cuDNN\n",
    "\n",
    "Download cuDNN in [NVIDIA cuDNN | NVIDIA Developer](https://developer.nvidia.com/cudnn), according to [PaddlePaddle Installation Steps](https://www.paddlepaddle.org.cn/install/quick) requirements, we **need to use cuDNN 7.6.5+**, so we can download the version 7.6.5 of cuDNN that supports CUDA 10.2. After downloading cuDNN, unzip it. Assuming the installation path of our CUDA is `C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2`. After decompressing cuDNN, we take the files in `bin`, `include` and `lib` and replace the corresponding original files in the CUDA installation path (if the file already exists, replace it, if it does not exist, paste it directly into the corresponding directory). At this point, cuDNN has been installed.\n",
    "\n",
    "### Configure environment variables\n",
    "\n",
    "Next, you need to configure environment variables. Right-click \"This PC\" on the desktop of the computer (or \"This PC\" in the left column of \"File Explorer\"), select \"Properties\", and then select \"Advanced System Settings\" on the left, under the \"Advanced\" column Select \"Environmental Variables\".\n",
    "\n",
    "Now you enter the setting page of environment variables, select `Path` in the `System variables`, and click `Edit`. In the page that appears, check if there are two addresses `C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\bin` and `C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\libnvvp`  (the prefix `C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2` should be your CUDA installation location), if not, please add them manually.\n",
    "\n",
    "### Verify that the installation is successful\n",
    "\n",
    "Open the command line and enter `cd C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\extras\\demo_suite` to enter the CUDA installation path (this should also be your CUDA installation location). Then execute `.\\bandwidthTest.exe` and `.\\deviceQuery.exe` respectively. If both `Result = PASS` appear, the installation is successful.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Install PaddlePaddle on CUDA environment\n",
    "\n",
    "According to the instructions in [PaddlePaddle Installation Steps](https://www.paddlepaddle.org.cn/install/quick), we first need to make sure our python environment is correct and use `python --version` to check the python version. Ensure that the **python version is 3.5.1+/3.6+/3.7/3.8**, and use `python -m ensurepip` and `python -m pip --version` to check the pip version, **confirm it is 20.2.2+**. Then, use `python -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple` to install the GPU version of PaddlePaddle.\n",
    "\n",
    "## Install Paddle Quantum\n",
    "\n",
    "Download the Paddle Quantum installation package, modify `setup.py` and `requirements.txt`, change `paddlepaddle` to `paddlepaddle-gpu`, and then execute `pip install -e .` according to the installation guide of Paddle Quantum from source code.\n",
    "\n",
    "> If you have installed paddlepaddle-gpu and paddle_quantum in a new python environment, please also install jupyter in the new python environment, and reopen this tutorial under the new jupyter notebook and run it.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Check if the installation is successful\n",
    "\n",
    "Open the new environment where we installed  the GPU version of PaddlePaddle and execute the following command. If the output is `True`, it means that the current PaddlePaddle framework can run on the GPU.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n"
     ]
    }
   ],
   "source": [
    "import paddle \n",
    "print(paddle.is_compiled_with_cuda())"
   ]
  },
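  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Depending on your PaddlePaddle version, you can also run a slightly more detailed check. The sketch below is optional; it assumes `paddle.utils.run_check()` and `paddle.device.cuda.device_count()` are available (they are in recent PaddlePaddle 2.x releases):\n",
    "\n",
    "```python\n",
    "import paddle\n",
    "\n",
    "# Built-in installation check: reports whether PaddlePaddle is installed correctly\n",
    "# and whether it can use the GPU\n",
    "paddle.utils.run_check()\n",
    "\n",
    "# Number of CUDA devices visible to PaddlePaddle\n",
    "print(\"Visible GPUs:\", paddle.device.cuda.device_count())\n",
    "```"
   ]
  },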
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Use tutorials and examples\n",
    "\n",
    "In Paddle Quantum, we use the dynamic graph mode to define and train our parameterized quantum circuits. Here, we still use the dynamic graph mode and only need to define the GPU core where we run the dynamic graph mode."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```python\n",
    "# 0 means to use GPU number 0\n",
    "paddle.set_device('gpu:0')\n",
    "# build and train your quantum circuit model\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we want to run on CPU,  pretty much the same,  define the running device as CPU:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```python\n",
    "paddle.set_device('cpu')\n",
    "# build and train your quantum circuit model\n",
    "```"
   ]
  },
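  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Whichever device you choose, you can confirm which one is active before training. This is a minimal sketch; it assumes `paddle.get_device()` is available in your PaddlePaddle version:\n",
    "\n",
    "```python\n",
    "import paddle\n",
    "\n",
    "# Use GPU 0 if PaddlePaddle was built with CUDA support, otherwise fall back to the CPU\n",
    "paddle.set_device('gpu:0' if paddle.is_compiled_with_cuda() else 'cpu')\n",
    "\n",
    "# Print the device on which subsequent tensors and layers will be placed\n",
    "print(paddle.get_device())\n",
    "```"
   ]
  },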
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can enter `nvidia-smi` in the command line to view the usage of the GPU, including which programs are running on which GPUs, and its memory usage.\n",
    "\n",
    "Here, we take Variational Quantum Eigensolver ([VQE](/tutorials/quantum-simulation/variational-quantum-eigensolver.html)) as an example to illustrate how we should use GPU. \n",
    "For simplicity, VQA use a parameterized quantum circuit to search the vast Hilbert space, and uses the gradient descent method to find the optimal parameters, to get close to the ground state of a Hamiltonian (the smallest eigenvalue of the Hermitian matrix). The Hamiltonian in our example is given by the following H2_generator() function, and the quantum neural network has the following structure:\n",
    "\n",
    "![circuit](./figures/gpu-fig-circuit.jpg)\n",
    "\n",
    "First, import the related packages and define some variables and functions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy\n",
    "from numpy import concatenate\n",
    "from numpy import pi as PI\n",
    "from numpy import savez, zeros\n",
    "from paddle import matmul, transpose\n",
    "from paddle_quantum.circuit import UAnsatz\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy\n",
    "\n",
    "\n",
    "def H2_generator():\n",
    "    \n",
    "    H = [\n",
    "        [-0.04207897647782277, 'i0'],\n",
    "        [0.17771287465139946, 'z0'],\n",
    "        [0.1777128746513994, 'z1'],\n",
    "        [-0.2427428051314046, 'z2'],\n",
    "        [-0.24274280513140462, 'z3'],\n",
    "        [0.17059738328801055, 'z0,z1'],\n",
    "        [0.04475014401535163, 'y0,x1,x2,y3'],\n",
    "        [-0.04475014401535163, 'y0,y1,x2,x3'],\n",
    "        [-0.04475014401535163, 'x0,x1,y2,y3'],\n",
    "        [0.04475014401535163, 'x0,y1,y2,x3'],\n",
    "        [0.12293305056183797, 'z0,z2'],\n",
    "        [0.1676831945771896, 'z0,z3'],\n",
    "        [0.1676831945771896, 'z1,z2'],\n",
    "        [0.12293305056183797, 'z1,z3'],\n",
    "        [0.1762764080431959, 'z2,z3']\n",
    "        ]\n",
    "    N = 4\n",
    "    \n",
    "    return H, N\n",
    "\n",
    "\n",
    "\n",
    "Hamiltonian, N = H2_generator()\n",
    "\n",
    "\n",
    "def U_theta(theta, Hamiltonian, N, D):\n",
    "    \"\"\"\n",
    "    Quantum Neural Network\n",
    "    \"\"\"\n",
    "    \n",
    "    # Initialize the quantum neural network according to the number of qubits/network width\n",
    "    cir = UAnsatz(N)\n",
    "    \n",
    "    # Built-in {R_y + CNOT} circuit template\n",
    "    cir.real_entangled_layer(theta[:D], D)\n",
    "\n",
    "    # Add in the last row a layer of R_y rotation gates\n",
    "    for i in range(N):\n",
    "        cir.ry(theta=theta[D][i][0], which_qubit=i)\n",
    "\n",
    "    # The quantum neural network acts on the default initial state |0000>\n",
    "    cir.run_state_vector()\n",
    "\n",
    "    # Calculate the expected value of a given Hamiltonian\n",
    "    expectation_val = cir.expecval(Hamiltonian)\n",
    "\n",
    "    return expectation_val\n",
    "\n",
    "\n",
    "class StateNet(paddle.nn.Layer):\n",
    "    \"\"\"\n",
    "    Construct the model net\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, shape, dtype=\"float64\"):\n",
    "        super(StateNet, self).__init__()\n",
    "\n",
    "        # Initialize the theta parameter list and fill the initial value with the uniform distribution of [0, 2*pi]\n",
    "        self.theta = self.create_parameter(\n",
    "            shape=shape, \n",
    "            default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * PI), \n",
    "            dtype=dtype, \n",
    "            is_bias=False)\n",
    "\n",
    "    # Define loss function and forward propagation mechanism\n",
    "    def forward(self, Hamiltonian, N, D):\n",
    "        # Calculate loss function/expected value\n",
    "        loss = U_theta(self.theta, Hamiltonian, N, D)\n",
    "\n",
    "        return loss\n",
    "\n",
    "ITR = 80 # Set the total number of training iterations\n",
    "LR = 0.2 # Set the learning rate\n",
    "D = 2 # Set the depth of the repeated calculation module in the neural network"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to use GPU to train, run the following program:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "iter: 20 loss: -1.0305\n",
      "iter: 20 Ground state energy: -1.0305 Ha\n",
      "iter: 40 loss: -1.1111\n",
      "iter: 40 Ground state energy: -1.1111 Ha\n",
      "iter: 60 loss: -1.1342\n",
      "iter: 60 Ground state energy: -1.1342 Ha\n",
      "iter: 80 loss: -1.1359\n",
      "iter: 80 Ground state energy: -1.1359 Ha\n"
     ]
    }
   ],
   "source": [
    "# 0 means to use GPU number 0\n",
    "paddle.set_device('gpu:0')\n",
    "  \n",
    "# Determine the parameter dimension of the network\n",
    "net = StateNet(shape=[D + 1, N, 1])\n",
    "\n",
    "# Generally speaking, we use Adam optimizer to get relatively good convergence\n",
    "# Of course, you can change to SGD or RMSProp\n",
    "opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\n",
    "\n",
    "# Record optimization results\n",
    "summary_iter, summary_loss = [], []\n",
    "\n",
    "# Optimization cycle\n",
    "for itr in range(1, ITR + 1):\n",
    "\n",
    "    # Forward propagation to calculate loss function\n",
    "    loss = net(Hamiltonian, N, D)\n",
    "\n",
    "    # Under the dynamic graph mechanism, back propagation minimizes the loss function\n",
    "    loss.backward()\n",
    "    opt.minimize(loss)\n",
    "    opt.clear_grad()\n",
    "\n",
    "    # Update optimization results\n",
    "    summary_loss.append(loss.numpy())\n",
    "    summary_iter.append(itr)\n",
    "\n",
    "    # Print results\n",
    "    if itr% 20 == 0:\n",
    "        print(\"iter:\", itr, \"loss:\", \"%.4f\"% loss.numpy())\n",
    "        print(\"iter:\", itr, \"Ground state energy:\",\n",
    "              \"%.4f Ha\"% loss.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to use CPU to train, run the following program:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "iter: 20 loss: -1.0639\n",
      "iter: 20 Ground state energy: -1.0639 Ha\n",
      "iter: 40 loss: -1.1209\n",
      "iter: 40 Ground state energy: -1.1209 Ha\n",
      "iter: 60 loss: -1.1345\n",
      "iter: 60 Ground state energy: -1.1345 Ha\n",
      "iter: 80 loss: -1.1360\n",
      "iter: 80 Ground state energy: -1.1360 Ha\n"
     ]
    }
   ],
   "source": [
    "# Use CPU\n",
    "paddle.set_device(\"cpu\")\n",
    "\n",
    "# Determine the parameter dimension of the network\n",
    "net = StateNet(shape=[D + 1, N, 1])\n",
    "\n",
    "# Generally speaking, we use Adam optimizer to get relatively good convergence\n",
    "# Of course you can change to SGD or RMSProp\n",
    "opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\n",
    "\n",
    "# Record optimization results\n",
    "summary_iter, summary_loss = [], []\n",
    "\n",
    "# Optimization cycle\n",
    "for itr in range(1, ITR + 1):\n",
    "\n",
    "    # Forward propagation to calculate loss function\n",
    "    loss = net(Hamiltonian, N, D)\n",
    "\n",
    "    # Under the dynamic graph mechanism, back propagation minimizes the loss function\n",
    "    loss.backward()\n",
    "    opt.minimize(loss)\n",
    "    opt.clear_grad()\n",
    "\n",
    "    # Update optimization results\n",
    "    summary_loss.append(loss.numpy())\n",
    "    summary_iter.append(itr)\n",
    "\n",
    "    # Print results\n",
    "    if itr% 20 == 0:\n",
    "        print(\"iter:\", itr, \"loss:\", \"%.4f\"% loss.numpy())\n",
    "        print(\"iter:\", itr, \"Ground state energy:\",\n",
    "              \"%.4f Ha\"% loss.numpy())"
   ]
  },
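  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how much the GPU actually helps on your machine, you can time the two runs above. The sketch below simply wraps the same optimization loop with Python's standard `time` module; it reuses the `net`, `opt`, `Hamiltonian`, `N`, `D`, and `ITR` defined earlier (and, as before, assumes `paddle.get_device()` is available), so it is only meant as a rough comparison:\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "start = time.time()\n",
    "\n",
    "# Re-run the optimization loop defined above and measure the wall-clock time\n",
    "for itr in range(1, ITR + 1):\n",
    "    loss = net(Hamiltonian, N, D)\n",
    "    loss.backward()\n",
    "    opt.minimize(loss)\n",
    "    opt.clear_grad()\n",
    "\n",
    "print(\"Training time: %.2f s on %s\" % (time.time() - start, paddle.get_device()))\n",
    "```"
   ]
  },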
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "According to our test, the current version of paddle_quantum can run under GPU, but it needs better GPU resources to show sufficient acceleration. In future versions, we will continue to optimize the performance of Paddle Quantum under GPU. \n",
    "\n",
    "_______\n",
    "\n",
    "## Reference\n",
    "\n",
    "[1] [Installation Guide Windows :: CUDA Toolkit Documentation](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)\n",
    "\n",
    "[2] [Installation Guide :: NVIDIA Deep Learning cuDNN Documentation](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installwindows)\n",
    "\n",
    "[3] [Getting Started PaddlePaddle](https://www.paddlepaddle.org.cn/install/quick)\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}