"[5] Klanšek, Uroš. \"Using the TSP solution for optimal route scheduling in construction management.\" [Organization, technology & management in construction: an international journal 3.1 (2011): 243-249.](https://www.semanticscholar.org/paper/Using-the-TSP-Solution-for-Optimal-Route-Scheduling-Klansek/3d809f185c03a8e776ac07473c76e9d77654c389)\n",
"[5] Klanšek, Uroš. \"Using the TSP solution for optimal route scheduling in construction management.\" [Organization, technology & management in construction: an international journal 3.1 (2011): 243-249.](https://www.semanticscholar.org/paper/Using-the-TSP-Solution-for-Optimal-Route-Scheduling-Klansek/3d809f185c03a8e776ac07473c76e9d77654c389)\n",
"\n",
"\n",
"[6] Matai, Rajesh, Surya Prakash Singh, and Murari Lal Mittal. \"Traveling salesman problem: an overview of applications, formulations, and solution approaches.\" [Traveling salesman problem, theory and applications 1 (2010).](https://www.sciencedirect.com/topics/computer-science/traveling-salesman-problem)"
"[6] Matai, Rajesh, Surya Prakash Singh, and Murari Lal Mittal. \"Traveling salesman problem: an overview of applications, formulations, and solution approaches.\" [Traveling salesman problem, theory and applications 1 (2010).](https://www.sciencedirect.com/topics/computer-science/traveling-salesman-problem)"
"<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>"
"<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>"
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"## Overview\n",
"## Overview\n",
"\n",
"\n",
"One of the most famous NP-hard problems in combinatorial optimization, the travelling salesman problem (TSP) considers the following question: \"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?\" \n",
"One of the most famous NP-hard problems in combinatorial optimization, the travelling salesman problem (TSP) considers the following question: \"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?\" \n",
"\n",
"\n",
"This question can also be formulated in the language of graph theory. Given a weighted undirected complete graph $G = (V,E)$, where each vertex $i \\in V$ corresponds to city $i$ and the weight $w_{i,j}$ of each edge $(i,j,w_{i,j}) \\in E$ represents the distance between cities $i$ and $j$, the TSP is to find the shortest Hamiltonian cycle in $G$, where a Hamiltonian cycle is a closed loop on a graph in which every vertex is visited exactly once. Note that because $G$ is an undirected graph, weights are symmetric, i.e., $w_{i,j} = w_{j,i}$. "
"This question can also be formulated in the language of graph theory. Given a weighted undirected complete graph $G = (V,E)$, where each vertex $i \\in V$ corresponds to city $i$ and the weight $w_{i,j}$ of each edge $(i,j,w_{i,j}) \\in E$ represents the distance between cities $i$ and $j$, the TSP is to find the shortest Hamiltonian cycle in $G$, where a Hamiltonian cycle is a closed loop on a graph in which every vertex is visited exactly once. Note that because $G$ is an undirected graph, weights are symmetric, i.e., $w_{i,j} = w_{j,i}$. "
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [
"## Use QNN to solve TSP\n",
"\n",
"To use QNN to solve travelling salesman problem, we need to first encode the classical problem to quantum. \n",
"The encoding consists of two parts:\n",
"\n",
"1. The route how the salesman visits each city is encoded in quantum states -- ${\\rm qubit}_{i,t} = |1\\rangle$ corresponds to salesman visiting city $i$ at time $t$. \n",
" 1. As an example, if there are two cities $\\{A,B\\}$, visiting $A$ then $B$ will be in state $|1001\\rangle$, as the salesman visits the city $A$ at time $1$ and the city $B$ at time $2$.\n",
" 2. Similary, $|0110\\rangle$ means visiting $B$ then $A$.\n",
" 3. Note: $|0101\\rangle$ means visiting $A$ and $B$ both at time $2$, so it is infeasible. To aviod such states, a penalty function will be used (see the next section for details.)\n",
"\n",
"2. The total distance is encoded in a loss function: \n",
"where $|\\psi(\\vec{\\theta})\\rangle$ is the output state from a parameterized quantum circuit. \n",
"\n",
"The details about how to encode the classical problem to quantum is given in detail in the next section. \n",
"After optimizing the loss function, we will obtain the optimal quantum state. Then a decoding process will be performed to get the final route."
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"## Encoding the TSP\n",
"### Encoding the TSP\n",
"\n",
"\n",
"To transform the TSP into a problem applicable for parameterized quantum circuits, we need to encode the TSP into a Hamiltonian. We realize the encoding by first constructing an integer programming problem. Suppose there are $n=|V|$ vertices in graph $G$. Then for each vertex $i \\in V$, we define $n$ binary variables $x_{i,t}$, where $t \\in [0,n-1]$, such that\n",
"To transform the TSP into a problem applicable for parameterized quantum circuits, we need to encode the TSP into a Hamiltonian. \n",
"\n",
"We realize the encoding by first constructing an integer programming problem. Suppose there are $n=|V|$ vertices in graph $G$. Then for each vertex $i \\in V$, we define $n$ binary variables $x_{i,t}$, where $t \\in [0,n-1]$, such that\n",
"\n",
"\n",
"$$\n",
"$$\n",
"x_{i, t}=\n",
"x_{i, t}=\n",
...
@@ -34,28 +62,28 @@
...
@@ -34,28 +62,28 @@
"1, & \\text {if in the resulting Hamiltonian cycle, vertex } i \\text { is visited at time } t\\\\\n",
"1, & \\text {if in the resulting Hamiltonian cycle, vertex } i \\text { is visited at time } t\\\\\n",
"0, & \\text{otherwise}\n",
"0, & \\text{otherwise}\n",
"\\end{cases}.\n",
"\\end{cases}.\n",
"\\tag{1}\n",
"\\tag{2}\n",
"$$\n",
"$$\n",
"\n",
"\n",
"As there are $n$ vertices, we have $n^2$ variables in total, whose value we denote by a bit string $x=x_{1,1}x_{1,2}\\dots x_{n,n}$. Assume for now that the bit string $x$ represents a Hamiltonian cycle. Then for each edge $(i,j,w_{i,j}) \\in E$, we will have $x_{i,t} = x_{j,t+1}=1$, i.e., $x_{i,t}\\cdot x_{j,t+1}=1$, if and only if the Hamiltonian cycle visits vertex $i$ at time $t$ and vertex $j$ at time $t+1$; otherwise, $x_{i,t}\\cdot x_{j,t+1}$ will be $0$. Therefore the length of a Hamiltonian cycle is\n",
"As there are $n$ vertices, we have $n^2$ variables in total, whose value we denote by a bit string $x=x_{1,1}x_{1,2}\\dots x_{n,n}$. Assume for now that the bit string $x$ represents a Hamiltonian cycle. Then for each edge $(i,j,w_{i,j}) \\in E$, we will have $x_{i,t} = x_{j,t+1}=1$, i.e., $x_{i,t}\\cdot x_{j,t+1}=1$, if and only if the Hamiltonian cycle visits vertex $i$ at time $t$ and vertex $j$ at time $t+1$; otherwise, $x_{i,t}\\cdot x_{j,t+1}$ will be $0$. Therefore the length of a Hamiltonian cycle is\n",
"For $x$ to represent a valid Hamiltonian cycle, the following constraint needs to be met:\n",
"For $x$ to represent a valid Hamiltonian cycle, the following constraint needs to be met:\n",
"\n",
"\n",
"$$\n",
"$$\n",
"\\sum_t x_{i,t} = 1 \\quad \\forall i \\in [0,n-1] \\quad \\text{ and } \\quad \\sum_i x_{i,t} = 1 \\quad \\forall t \\in [0,n-1],\n",
"\\sum_t x_{i,t} = 1 \\quad \\forall i \\in [0,n-1] \\quad \\text{ and } \\quad \\sum_i x_{i,t} = 1 \\quad \\forall t \\in [0,n-1],\n",
"\\tag{3}\n",
"\\tag{4}\n",
"$$\n",
"$$\n",
"\n",
"\n",
"where the first equation guarantees that each vertex is only visited once and the second guarantees that only one vertex is visited at each time $t$. Then the cost function under the constraint can be formulated below, with $A$ being the penalty parameter set to ensure that the constraint is satisfied:\n",
"where the first equation guarantees that each vertex is only visited once and the second guarantees that only one vertex is visited at each time $t$. Then the cost function under the constraint can be formulated below, with $A$ being the penalty parameter set to ensure that the constraint is satisfied:\n",
"Note that as we would like to minimize the length $D(x)$ while ensuring $x$ represents a valid Hamiltonian cycle, we had better set $A$ large, at least larger than the largest weight of edges.\n",
"Note that as we would like to minimize the length $D(x)$ while ensuring $x$ represents a valid Hamiltonian cycle, we had better set $A$ large, at least larger than the largest weight of edges.\n",
"where $Z_{i,t} = I \\otimes I \\otimes \\ldots \\otimes Z \\otimes \\ldots \\otimes I$ with $Z$ operates on the qubit at position $(i,t)$. Under this mapping, if a qubit $(i,t)$ is in state $|1\\rangle$, then $x_{i,t}|1\\rangle = \\frac{I-Z_{i,t}}{2} |1\\rangle = 1 |1\\rangle$, which means vertex $i$ is visited at time $t$. Also, for a qubit $(i,t)$ in state $|0\\rangle$, $x_{i,t} |0\\rangle= \\frac{I-Z_{i,t}}{2} |0\\rangle = 0|0\\rangle$.\n",
"where $Z_{i,t} = I \\otimes I \\otimes \\ldots \\otimes Z \\otimes \\ldots \\otimes I$ with $Z$ operates on the qubit at position $(i,t)$. Under this mapping, if a qubit $(i,t)$ is in state $|1\\rangle$, then $x_{i,t}|1\\rangle = \\frac{I-Z_{i,t}}{2} |1\\rangle = 1 |1\\rangle$, which means vertex $i$ is visited at time $t$. Also, for a qubit $(i,t)$ in state $|0\\rangle$, $x_{i,t} |0\\rangle= \\frac{I-Z_{i,t}}{2} |0\\rangle = 0|0\\rangle$.\n",
"\n",
"\n",
"Thus using the above mapping, we can transform the cost function $C(x)$ into a Hamiltonian $H_C$ for the system of $n^2$ qubits and realize the quantumization of the TSP. Then the ground state of $H_C$ is the optimal solution to the TSP. In the following section, we will show how to use a parametrized quantum circuit to find the ground state, i.e., the eigenvector with the smallest eigenvalue.\n",
"Thus using the above mapping, we can transform the cost function $C(x)$ into a Hamiltonian $H_C$ for the system of $n^2$ qubits and realize the quantumization of the TSP. Then the ground state of $H_C$ is the optimal solution to the TSP. In the following section, we will show how to use a parametrized quantum circuit to find the ground state, i.e., the eigenvector with the smallest eigenvalue.\n",
"\n"
"\n"
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"## Paddle Quantum Implementation\n",
"## Paddle Quantum Implementation\n",
"\n",
"\n",
"To investigate the TSP using Paddle Quantum, there are some required packages to import, which are shown below. The ``networkx`` package is the tool to handle graphs."
"To investigate the TSP using Paddle Quantum, there are some required packages to import, which are shown below. The ``networkx`` package is the tool to handle graphs."
],
"metadata": {}
},
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 1,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2021-05-17T08:24:17.197426Z",
"start_time": "2021-05-17T08:24:12.896488Z"
}
},
"outputs": [],
"source": [
"source": [
"# Import related modules from Paddle Quantum and PaddlePaddle\n",
"# Import related modules from Paddle Quantum and PaddlePaddle\n",
"from paddle_quantum.QAOA.tsp import tsp_hamiltonian # Get the Hamiltonian for salesman problem\n",
"from paddle_quantum.QAOA.tsp import solve_tsp_brute_force # Solve the salesman problem by brute force\n",
"\n",
"# Create Graph\n",
"import networkx as nx\n",
"\n",
"\n",
"# Import additional packages needed\n",
"# Import additional packages needed\n",
"from numpy import pi as PI\n",
"from numpy import pi as PI\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.pyplot as plt\n",
"import networkx as nx\n",
"import random\n",
"import random"
"import time"
],
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2021-05-17T08:24:17.197426Z",
"start_time": "2021-05-17T08:24:12.896488Z"
}
}
},
{
"cell_type": "markdown",
"source": [
"### Generate a weighted complete graph"
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"Next, we generate a weighted complete graph $G$ with four vertices. For the convenience of computation, the vertices here are labeled starting from $0$."
"Next, we generate a weighted complete graph $G$ with four vertices. For the convenience of computation, the vertices here are labeled starting from $0$."
"In Paddle Quantum, a Hamiltonian can be input in the form of ``list``. Here we construct the Hamiltonian $H_C$ of Eq. (4) with the replacement in Eq. (5). \n",
"In Paddle Quantum, a Hamiltonian can be input in the form of ``list``. Here we construct the Hamiltonian $H_C$ of Eq. (4) with the replacement in Eq. (5). It can be realized with a build-in function \"tsp_hamiltonian(G, A, n)\".\n",
"\n",
"\n",
"To save the number of qubits needed, we observe the following fact: it is clear that vertex $n-1$ must always be included in the Hamiltonian cycle, and without loss of generality, we can set $x_{n-1,t} = \\delta_{n-1,t}$ for all $t$ and $x_{i,n-1} = \\delta_{i,n-1}$ for all $i$. **This just means that the overall ordering of the cycle is chosen so that vertex $n-1$ comes last.** This reduces the number of qubits to $(n-1)^2$. We adopt this slight modification of the TSP Hamiltonian in our implementation."
"**Note:** For the salesman problem, the number of qubits can be reduced to $(n-1)^2$ since we can always select city $0$ to be the first city."
],
"metadata": {}
},
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 3,
"execution_count": 3,
"source": [
"# Construct the Hamiltonian H_C in the form of list -- with build-in function tsp_hamiltonian(G, A, n)\n",
"A = 20 # Penalty parameter\n",
"H_C_list = tsp_hamiltonian(G, A, n)"
],
"outputs": [],
"metadata": {
"metadata": {
"ExecuteTime": {
"ExecuteTime": {
"end_time": "2021-05-17T08:24:25.956145Z",
"end_time": "2021-05-17T08:24:25.956145Z",
"start_time": "2021-05-17T08:24:25.950463Z"
"start_time": "2021-05-17T08:24:25.950463Z"
}
}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"### Calculating the loss function \n",
"### Calculating the loss function \n",
"\n",
"\n",
"In the [Max-Cut tutorial](./MAXCUT_EN.ipynb), we use a circuit given by QAOA to find the ground state, but we can also use other circuits to solve combinatorial optimization problems. For the TSP, we adopt a parametrized quantum circuit constructed by $U_3(\\vec{\\theta})$ and $\\text{CNOT}$ gates, which we call the [`complex entangled layer`](https://qml.baidu.com/api/paddle_quantum.circuit.uansatz.html).\n",
"In the [Max-Cut tutorial](./MAXCUT_EN.ipynb), we use a circuit given by QAOA to find the ground state, but we can also use other circuits to solve combinatorial optimization problems. For the TSP, we adopt a parametrized quantum circuit constructed by $U_3(\\vec{\\theta})$ and $\\text{CNOT}$ gates, which we call the [`complex entangled layer`](https://qml.baidu.com/api/paddle_quantum.circuit.uansatz.html).\n",
"\n",
"\n",
"After running the quantum circuit, we ontain the output circuit $|\\vec{\\theta}\\rangle$. From the output state of the circuit we can calculate the objective function, and also the loss function of the TSP:\n",
"<center> Figure 1: Parametrized Quantum Circuit used for TSM Problem </center>\n",
"\n",
"After running the quantum circuit, we obtain the output state $|\\psi(\\vec{\\theta})\\rangle$. From the output state of the circuit we can calculate the objective function, and also the loss function of the TSP:\n",
"We then use a classical optimization algorithm to minimize this function and find the optimal parameters $\\vec{\\theta}^*$. The following code shows a complete network built with Paddle Quantum and PaddlePaddle."
"We then use a classical optimization algorithm to minimize this function and find the optimal parameters $\\vec{\\theta}^*$. The following code shows a complete network built with Paddle Quantum and PaddlePaddle."
]
],
"metadata": {}
},
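{
"cell_type": "markdown",
"source": [
"The cell below gives a minimal sketch of how such a network could be built and trained. It assumes the Paddle Quantum ``UAnsatz`` interface (``complex_entangled_layer``, ``run_state_vector``, ``expecval``) together with illustrative hyperparameters ``DEPTH``, ``ITR`` and ``LR``, so it should be read as a template rather than the exact implementation.\n"
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"import paddle\n",
"from paddle_quantum.circuit import UAnsatz\n",
"\n",
"# Illustrative hyperparameters (assumptions, not fixed by the tutorial)\n",
"N = (n - 1) ** 2   # number of qubits after fixing vertex n-1\n",
"DEPTH = 2          # depth of the complex entangled layer\n",
"ITR = 120          # number of optimization iterations\n",
"LR = 0.1           # learning rate\n",
"\n",
"class Net(paddle.nn.Layer):\n",
"    def __init__(self, shape, dtype='float64'):\n",
"        super(Net, self).__init__()\n",
"        # Trainable angles of the U3 + CNOT ansatz, initialized uniformly in [0, 2*pi]\n",
"        self.theta = self.create_parameter(\n",
"            shape=shape, dtype=dtype,\n",
"            default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * PI))\n",
"\n",
"    def forward(self):\n",
"        # Build the parameterized circuit and run it on the all-zero state\n",
"        cir = UAnsatz(N)\n",
"        cir.complex_entangled_layer(self.theta, DEPTH)\n",
"        cir.run_state_vector()\n",
"        # Loss = <psi(theta)| H_C |psi(theta)>\n",
"        loss = cir.expecval(H_C_list)\n",
"        return loss, cir\n",
"\n",
"net = Net(shape=[DEPTH, N, 3])\n",
"opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\n",
"\n",
"start = time.time()\n",
"for itr in range(1, ITR + 1):\n",
"    loss, cir = net()\n",
"    loss.backward()\n",
"    opt.minimize(loss)\n",
"    opt.clear_grad()\n",
"    if itr % 10 == 0:\n",
"        print('iter:', itr, ' loss:', '%.4f' % float(loss.numpy()), ' run time:', time.time() - start)"
],
"outputs": [],
"metadata": {}
},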
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 4,
"execution_count": 4,
"metadata": {
"source": [
"ExecuteTime": {
"# In this tutorial we use build-in PQC: complex_entangled_layer()\n",
"print('The final minimum distance from QNN:', loss.numpy())"
],
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"iter: 10 loss: 46.0232 run time: 7.641107082366943\n",
"iter: 20 loss: 22.6648 run time: 15.020977258682251\n",
"iter: 30 loss: 16.6194 run time: 22.464542627334595\n",
"iter: 40 loss: 14.3719 run time: 30.163496732711792\n",
"iter: 50 loss: 13.5547 run time: 38.4432737827301\n",
"iter: 60 loss: 13.1736 run time: 46.77324390411377\n",
"iter: 70 loss: 13.0661 run time: 55.22942876815796\n",
"iter: 80 loss: 13.0219 run time: 63.490843057632446\n",
"iter: 90 loss: 13.0035 run time: 72.72753691673279\n",
"iter: 100 loss: 13.0032 run time: 82.62676620483398\n",
"iter: 110 loss: 13.0008 run time: 91.19076180458069\n",
"iter: 120 loss: 13.0004 run time: 99.36567878723145\n",
"The final minimum distance from QNN: [13.00038342]\n"
]
}
],
"metadata": {
"ExecuteTime": {
"end_time": "2021-05-17T08:26:08.098742Z",
"start_time": "2021-05-17T08:24:28.741155Z"
}
}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"Note that ideally the training network will find the shortest Hamiltonian cycle, and the final loss above would correspond to the total weights of the optimal cycle, i.e. the distance of the optimal path for the salesman. If not, then one should adjust parameters of the parameterized quantum circuits above for better training performance."
"Note that ideally the training network will find the shortest Hamiltonian cycle, and the final loss above would correspond to the total weights of the optimal cycle, i.e. the distance of the optimal path for the salesman. If not, then one should adjust parameters of the parameterized quantum circuits above for better training performance."
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"### Decoding the quantum solution\n",
"### Decoding the quantum solution\n",
"\n",
"\n",
"After obtaining the minimum value of the loss function and the corresponding set of parameters $\\vec{\\theta}^*$, our task has not been completed. In order to obtain an approximate solution to the TSP, it is necessary to decode the solution to the classical optimization problem from the quantum state $|\\vec{\\theta}^*\\rangle$ output by the circuit. Physically, to decode a quantum state, we need to measure it and then calculate the probability distribution of the measurement results, where a measurement result is a bit string that represents an answer for the TSP: \n",
"After obtaining the minimum value of the loss function and the corresponding set of parameters $\\vec{\\theta}^*$, our task has not been completed. In order to obtain an approximate solution to the TSP, it is necessary to decode the solution to the classical optimization problem from the quantum state $|\\psi(\\vec{\\theta})^*\\rangle$ output by the circuit. Physically, to decode a quantum state, we need to measure it and then calculate the probability distribution of the measurement results, where a measurement result is a bit string that represents an answer for the TSP: \n",
"print(\"The reduced bit string form of the walk found:\", reduced_salesman_walk)"
}
]
}
},
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"As we have slightly modified the TSP Hamiltonian to reduce the number of qubits used, the bit string found above has lost the information for our fixed vertex $n-1$ and the status of other vertices at time $n-1$. So we need to extend the found bit string to include these information.\n",
"As we have slightly modified the TSP Hamiltonian to reduce the number of qubits used, the bit string found above has lost the information for our fixed vertex $n-1$ and the status of other vertices at time $n-1$. So we need to extend the found bit string to include these information.\n",
"\n",
"\n",
...
@@ -379,27 +441,12 @@
...
@@ -379,27 +441,12 @@
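{
"cell_type": "code",
"execution_count": null,
"source": [
"# Sketch: extend the reduced (n-1)^2-bit string to the full n^2-bit string,\n",
"# assuming the reduced string lists, vertex by vertex, the n-1 time slots of\n",
"# each of the first n-1 vertices. Each such vertex gets a 0 at time n-1, and\n",
"# the fixed vertex n-1 is visited exactly at time n-1.\n",
"salesman_walk = ''\n",
"for i in range(n - 1):\n",
"    salesman_walk += reduced_salesman_walk[i * (n - 1):(i + 1) * (n - 1)] + '0'\n",
"salesman_walk += '0' * (n - 1) + '1'\n",
"print(\"The bit string form of the full walk:\", salesman_walk)"
],
"outputs": [],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [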
"After measurement, we have found the bit string with the highest probability of occurrence, the optimal walk in the form of the bit string. Each qubit contains the information of $x_{i,t}$ defined in Eq. (1). The following code maps the bit string back to the classic solution in the form of `dictionary`, where the `key` represents the vertex labeling and the `value` represents its order, i.e. when it is visited. \n",
"After measurement, we have found the bit string with the highest probability of occurrence, the optimal walk in the form of the bit string. Each qubit contains the information of $x_{i,t}$ defined in Eq. (1). The following code maps the bit string back to the classic solution in the form of `dictionary`, where the `key` represents the vertex labeling and the `value` represents its order, i.e. when it is visited. \n",
"\n",
"\n",
"Also, we have compared it with the solution found by the brute-force algorithm, to verify the correctness of the quantum algorithm."
"Also, we have compared it with the solution found by the brute-force algorithm, to verify the correctness of the quantum algorithm."
],
"metadata": {}
},
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2021-05-17T08:26:08.169372Z",
"start_time": "2021-05-17T08:26:08.156656Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The walk found by parameterized quantum circuit: {0: 1, 1: 2, 2: 0, 3: 3} with distance 13\n",
"The walk found by the brute-force algorithm: {0: 0, 1: 1, 2: 3, 3: 2} with distance 13\n"
]
}
],
"source": [
"source": [
"# Optimal walk found by parameterized quantum circuit\n",
"# Optimal walk found by parameterized quantum circuit\n",
"str_by_vertex = [reduced_salesman_walk[i:i + n - 1] for i in range(0, len(reduced_salesman_walk) + 1, n - 1)]\n",
"str_by_vertex = [reduced_salesman_walk[i:i + n - 1] for i in range(0, len(reduced_salesman_walk) + 1, n - 1)]\n",
"The left graph given above shows a solution found by the parameterized quantum circuit, while the right graph given above shows a solution found by the brute-force algorithm. It can be seen that even if the order of the vertices are different, the routes are essentially the same, which verifies the correctness of using parameterized quantum circuit to solve the TSP."
"The left graph given above shows a solution found by the parameterized quantum circuit, while the right graph given above shows a solution found by the brute-force algorithm. It can be seen that even if the order of the vertices are different, the routes are essentially the same, which verifies the correctness of using parameterized quantum circuit to solve the TSP."
]
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"## Applications\n",
"## Applications\n",
"\n",
"\n",
"The TSP, as one of the most famous optimization problems, also provides a platform for the study of general methods in solving combinatorial problem. This is usually the first several problems that researchers give a try for experiments of new algorithms.\n",
"The TSP, as one of the most famous optimization problems, also provides a platform for the study of general methods in solving combinatorial problem. This is usually the first several problems that researchers give a try for experiments of new algorithms.\n",
"\n",
"\n",
"More applications, formulations and solution approaches can be found in [6]."
"More applications, formulations and solution approaches can be found in [6]."
],
"metadata": {}
},
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"_______\n",
"_______\n",
"\n",
"\n",
"[5] Klanšek, Uroš. \"Using the TSP solution for optimal route scheduling in construction management.\" [Organization, technology & management in construction: an international journal 3.1 (2011): 243-249.](https://www.semanticscholar.org/paper/Using-the-TSP-Solution-for-Optimal-Route-Scheduling-Klansek/3d809f185c03a8e776ac07473c76e9d77654c389)\n",
"[5] Klanšek, Uroš. \"Using the TSP solution for optimal route scheduling in construction management.\" [Organization, technology & management in construction: an international journal 3.1 (2011): 243-249.](https://www.semanticscholar.org/paper/Using-the-TSP-Solution-for-Optimal-Route-Scheduling-Klansek/3d809f185c03a8e776ac07473c76e9d77654c389)\n",
"\n",
"\n",
"[6] Matai, Rajesh, Surya Prakash Singh, and Murari Lal Mittal. \"Traveling salesman problem: an overview of applications, formulations, and solution approaches.\" [Traveling salesman problem, theory and applications 1 (2010).](https://www.sciencedirect.com/topics/computer-science/traveling-salesman-problem)"
"[6] Matai, Rajesh, Surya Prakash Singh, and Murari Lal Mittal. \"Traveling salesman problem: an overview of applications, formulations, and solution approaches.\" [Traveling salesman problem, theory and applications 1 (2010).](https://www.sciencedirect.com/topics/computer-science/traveling-salesman-problem)"
"[2] Farhi, Edward, and Hartmut Neven. Classification with quantum neural networks on near term processors. [arXiv preprint arXiv:1802.06002 (2018).](https://arxiv.org/abs/1802.06002)\n",
"[2] Farhi, Edward, and Hartmut Neven. Classification with quantum neural networks on near term processors. [arXiv preprint arXiv:1802.06002 (2018).](https://arxiv.org/abs/1802.06002)\n",
"\n",
"\n",
"[3] [Schuld, Maria, et al. Circuit-centric quantum classifiers. [Physical Review A 101.3 (2020): 032308.](https://arxiv.org/abs/1804.00633)"
"[3] Schuld, Maria, et al. Circuit-centric quantum classifiers. [Physical Review A 101.3 (2020): 032308.](https://arxiv.org/abs/1804.00633)"
"In the language of supervised learning, we need to enter a data set composed of $N$ groups of labeled data points $D = \\{(x^k,y^k)\\}_{k=1}^{N}$ , Where $x^k\\in \\mathbb{R}^{m}$ is the data point, and $y^k \\in\\{0,1\\}$ is the label associated with the data point $x^k$. **The classification process is essentially a decision-making process, which determines the label attribution of a given data point**. For the quantum classifier framework, the realization of the classifier $\\mathcal{F}$ is a combination of a quantum neural network (or parameterized quantum circuit) with parameters $\\theta$, measurement, and data processing. An excellent classifier $\\mathcal{F}_\\theta$ should correctly map the data points in each data set to the corresponding labels as accurate as possible $\\mathcal{F}_\\theta(x^k ) \\rightarrow y^k$. Therefore, we use the cumulative distance between the predicted label $\\tilde{y}^{k} = \\mathcal{F}_\\theta(x^k)$ and the actual label $y^k$ as the loss function $\\mathcal {L}(\\theta)$ to be optimized. For binary classification tasks, we can choose the following loss function,\n",
"In the language of supervised learning, we need to enter a data set composed of $N$ pairs of labeled data points $D = \\{(x^k,y^k)\\}_{k=1}^{N}$ , Where $x^k\\in \\mathbb{R}^{m}$ is the data point, and $y^k \\in\\{0,1\\}$ is the label associated with the data point $x^k$. **The classification process is essentially a decision-making process, which determines the label attribution of a given data point**. For the quantum classifier framework, the realization of the classifier $\\mathcal{F}$ is a combination of a quantum neural network (or parameterized quantum circuit) with parameters $\\theta$, measurement, and data processing. An excellent classifier $\\mathcal{F}_\\theta$ should correctly map the data points in each data set to the corresponding labels as accurate as possible $\\mathcal{F}_\\theta(x^k ) \\rightarrow y^k$. Therefore, we use the cumulative distance between the predicted label $\\tilde{y}^{k} = \\mathcal{F}_\\theta(x^k)$ and the actual label $y^k$ as the loss function $\\mathcal {L}(\\theta)$ to be optimized. For binary classification tasks, we can choose the following loss function,\n",
"Here we give the whole pipeline to implement a quantum classifier under the framework of quantum circuit learning (QCL).\n",
"Here we give the whole pipeline to implement a quantum classifier under the framework of quantum circuit learning (QCL).\n",
"\n",
"\n",
"1. Apply the parameterized quantum circuit $U$ on the initialized qubit $\\lvert 0 \\rangle$ to encode the original classical data point $x^k$ into quantum data that can be processed on a quantum computer $\\lvert \\psi_{in}\\rangle^k$.\n",
"1. Encode the classical data $x^k$ to quantum data $\\lvert \\psi_{\\rm in}\\rangle^k$. In this tutorial, we use Angle Encoding, see [encoding methods](./DataEncoding_EN.ipynb) for details. Readers can also try other encoding methods, e.g., Amplitude Encoding, and see the performance.\n",
"2. Apply the parameterized circuit $U(\\theta)$ with the parameter $\\theta$ on input states $\\lvert \\psi_{in} \\rangle^k$, thereby obtaining the output state $\\lvert \\psi_{out} \\rangle^k = U(\\theta)\\lvert \\psi_{in} \\rangle^k$.\n",
"2. Construct the parameterized quantum circuit (PQC), corresponds to the unitary gate $U(\\theta)$.\n",
"3. Measure the quantum state $\\lvert \\psi_{out}\\rangle^k$ processed by the quantum neural network to get the estimated label $\\tilde{y}^{k}$.\n",
"3. Apply the parameterized circuit $U(\\theta)$ with the parameter $\\theta$ on input states $\\lvert \\psi_{\\rm in} \\rangle^k$, thereby obtaining the output state $\\lvert \\psi_{\\rm out} \\rangle^k = U(\\theta)\\lvert \\psi_{\\rm in} \\rangle^k$.\n",
"4. Repeat steps 2-3 until all data points in the data set have been processed. Then calculate the loss function $\\mathcal{L}(\\theta)$.\n",
"4. Measure the quantum state $\\lvert \\psi_{\\rm out}\\rangle^k$ processed by the quantum neural network to get the estimated label $\\tilde{y}^{k}$.\n",
"5. Continuously adjust the parameter $\\theta$ through optimization methods such as gradient descent to minimize the loss function. Record the optimal parameters after optimization $\\theta^* $, and then we obtain the optimal classifier $\\mathcal{F}_{\\theta^*}$.\n",
"5. Repeat steps 3-4 until all data points in the data set have been processed. Then calculate the loss function $\\mathcal{L}(\\theta)$.\n",
"6. Continuously adjust the parameter $\\theta$ through optimization methods such as gradient descent to minimize the loss function. Record the optimal parameters after optimization $\\theta^* $, and then we obtain the optimal classifier $\\mathcal{F}_{\\theta^*}$.\n",
"\n",
"\n",
"\n",
"from paddle_quantum.utils import pauli_str_to_matrix,dagger # N qubits Pauli matrix, complex conjugate\n",
"\n",
"# Plot figures, calculate the run time\n",
"from matplotlib import pyplot as plt\n",
"import time"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parameters used for classification"
]
},
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 2,
"execution_count": 2,
"metadata": {
"metadata": {},
"ExecuteTime": {
"end_time": "2021-03-09T04:03:35.682126Z",
"start_time": "2021-03-09T04:03:35.668825Z"
}
},
"outputs": [],
"outputs": [],
"source": [
"source": [
"# These are the main functions that will be used in the tutorial\n",
"Ntrain = 200 # Specify the training set size\n",
"__all__ = [\n",
"Ntest = 100 # Specify the test set size\n",
" \"circle_data_point_generator\",\n",
"gap = 0.5 # Set the width of the decision boundary\n",
" \"data_point_plot\",\n",
"N = 4 # Number of qubits required\n",
" \"heatmap_plot\",\n",
"DEPTH = 1 # Circuit depth\n",
" \"Ry\",\n",
"BATCH = 20 # Batch size during training\n",
" \"Rz\",\n",
"EPOCH = int(200 * BATCH / Ntrain)\n",
" \"Observable\",\n",
" # Number of training epochs, the total iteration number \"EPOCH * (Ntrain / BATCH)\" is chosen to be about 200\n",
" \"U_theta\",\n",
"LR = 0.01 # Set the learning rate\n",
" \"Net\",\n",
"seed_paras = 19 # Set random seed to initialize various parameters\n",
" \"QC\",\n",
"seed_data = 2 # Fixed random seed required to generate the data set\n"
" \"main\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data set generation\n",
"### Data set generation\n",
"\n",
"\n",
"One of the key parts in supervised learning is what data set to use? In this tutorial, we follow the exact approach introduced in QCL paper to generate a simple binary data set $\\{(x^{(i)}, y^{(i)})\\}$ with circular decision boundary, where the data point $x^{(i)}\\in \\mathbb{R}^{2}$, and the label $y^{(i)} \\in \\{0,1\\}$. The figure below provides us a concrete example.\n",
"One of the key parts in supervised learning is what data set to use? In this tutorial, we follow the exact approach introduced in QCL paper to generate a simple binary data set $\\{(x^{k}, y^{k})\\}$ with circular decision boundary, where the data point $x^{k}\\in \\mathbb{R}^{2}$, and the label $y^{k} \\in \\{0,1\\}$. The figure below provides us a concrete example.\n",
"\n",
"\n",
"\n",
"print(\"Visualization of {} data points in the training set: \".format(Ntrain))\n",
"print(\"Visualization of {} data points in the training set: \".format(Ntrain))\n",
"data_point_plot(train_x, train_y)\n",
"data_point_plot(train_x, train_y)\n",
"print(\"Visualization of {} data points in the test set: \".format(Ntest))\n",
"print(\"Visualization of {} data points in the test set: \".format(Ntest))\n",
"data_point_plot(test_x, test_y)"
],
"outputs": [],
"metadata": {}
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"### Data preprocessing\n",
"### Data preprocessing\n",
"Different from classical machine learning, quantum classifiers need to consider data preprocessing heavily. We need one more step to convert classical data into quantum information before running on a quantum computer. Now let's take a look at how it can be done. First, we determine the number of qubits that need to be used. Because our data $\\{x^{(i)} = (x^{(i)}_0, x^{(i)}_1)\\}$ is two-dimensional, according to the paper by Mitarai (2018) we need at least 2 qubits for encoding. Then prepare a group of initial quantum states $|00\\rangle$. Encode the classical information $\\{x^{(i)}\\}$ into a group of quantum gates $U(x^{(i)})$ and act them on the initial quantum states. Finally we get a group of quantum states $|\\psi^{(i)}\\rangle = U(x^{(i)})|00\\rangle$. In this way, we have completed the encoding from classical information into quantum information! Given $m$ qubits to encode a two-dimensional classical data point, the quantum gate is:\n",
"Different from classical machine learning, quantum classifiers need to consider data preprocessing heavily. We need one more step to convert classical data into quantum information before running on a quantum computer. In this tutorial we use \"Angle Encoding\" to get quantum data.\n",
"\n",
"First, we determine the number of qubits that need to be used. Because our data $\\{x^{k} = (x^{k}_0, x^{k}_1)\\}$ is two-dimensional, according to the paper by Mitarai (2018) we need at least 2 qubits for encoding. Then prepare a group of initial quantum states $|00\\rangle$. Encode the classical information $\\{x^{k}\\}$ into a group of quantum gates $U(x^{k})$ and act them on the initial quantum states. Finally we get a group of quantum states $|\\psi_{\\rm in}\\rangle^k = U(x^{k})|00\\rangle$. In this way, we have completed the encoding from classical information into quantum information! Given $m$ qubits to encode a two-dimensional classical data point, the quantum gate is:\n",
"After simplification, we can get the encoded quantum state $|\\psi\\rangle$ by acting the quantum gate on the initialized quantum state $|00\\rangle$,\n",
"After simplification, we can get the encoded quantum state $|\\psi_{\\rm in}\\rangle$ by acting the quantum gate on the initialized quantum state $|00\\rangle$,\n",
"\n",
"\n",
"$$\n",
"$$\n",
"|\\psi\\rangle =\n",
"|\\psi_{\\rm in}\\rangle =\n",
"U(x)|00\\rangle = \\frac{1}{2}\n",
"U(x)|00\\rangle = \\frac{1}{2}\n",
"\\begin{bmatrix}\n",
"\\begin{bmatrix}\n",
"1-i &0 &-1+i &0 \\\\\n",
"1-i &0 &-1+i &0 \\\\\n",
...
@@ -378,27 +410,17 @@
...
@@ -378,27 +410,17 @@
},
{
"cell_type": "code",
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {
"metadata": {
"ExecuteTime": {
"ExecuteTime": {
"end_time": "2021-03-09T04:03:37.354267Z",
"end_time": "2021-03-09T04:03:37.354267Z",
"start_time": "2021-03-09T04:03:37.258314Z"
"start_time": "2021-03-09T04:03:37.258314Z"
}
},
"outputs": [
"outputs": [],
{
"name": "stdout",
"output_type": "stream",
"text": [
"As a test, we enter the classical information:\n",
"(x_0, x_1) = (1, 0)\n",
"The 2-qubit quantum state output after encoding is:\n",
"[[[0.5-0.5j 0. +0.j 0.5-0.5j 0. +0.j ]]]\n"
]
}
],
"source": [
"source": [
"def myRy(theta):\n",
"# Gate: rotate around Y-axis, Z-axis with angle theta\n",
"As a test, we enter the classical information:\n",
"(x_0, x_1) = (1, 0)\n",
"The 2-qubit quantum state output after encoding is:\n",
"[[[0.5-0.5j 0. +0.j 0.5-0.5j 0. +0.j ]]]\n"
]
}
],
"source": [
"print(\"As a test, we enter the classical information:\")\n",
"print(\"As a test, we enter the classical information:\")\n",
"print(\"(x_0, x_1) = (1, 0)\")\n",
"print(\"(x_0, x_1) = (1, 0)\")\n",
"print(\"The 2-qubit quantum state output after encoding is:\")\n",
"print(\"The 2-qubit quantum state output after encoding is:\")\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Building Quantum Neural Network \n",
"### Building Quantum Neural Network \n",
"After completing the encoding from classical data to quantum data, we can now input these quantum states into the quantum computer. Before that, we also need to design the quantum neural network.\n",
"After completing the encoding from classical data to quantum data, we can now input these quantum states into the quantum computer. Before that, we also need to design the quantum neural network.\n",
"For convenience, we call the parameterized quantum neural network as $U(\\boldsymbol{\\theta})$. $U(\\boldsymbol{\\theta})$ is a key component of our classifier, and it needs a certain complex structure to fit our decision boundary. Similar to traditional neural networks, the structure of a quantum neural network is not unique. The structure shown above is just one case. You could design your own structure. Let’s take the previously mentioned data point $x = (x_0, x_1)= (1,0)$ as an example. After encoding, we have obtained a quantum state $|\\psi\\rangle$,\n",
"\n",
"For convenience, we call the parameterized quantum neural network as $U(\\boldsymbol{\\theta})$. $U(\\boldsymbol{\\theta})$ is a key component of our classifier, and it needs a certain complex structure to fit our decision boundary. Similar to traditional neural networks, the structure of a quantum neural network is not unique. The structure shown above is just one case. You could design your own structure. Let’s take the previously mentioned data point $x = (x_0, x_1)= (1,0)$ as an example. After encoding, we have obtained a quantum state $|\\psi_{\\rm in}\\rangle$,\n",
"\n",
"\n",
"$$\n",
"$$\n",
"|\\psi\\rangle =\n",
"|\\psi_{\\rm in}\\rangle =\n",
"\\frac{1}{2}\n",
"\\frac{1}{2}\n",
"\\begin{bmatrix}\n",
"\\begin{bmatrix}\n",
"1-i \\\\\n",
"1-i \\\\\n",
...
@@ -473,15 +526,15 @@
...
@@ -473,15 +526,15 @@
"Then we input this quantum state into our quantum neural network (QNN). That is, multiply a unitary matrix by a vector to get the processed quantum state $|\\varphi\\rangle$\n",
"Then we input this quantum state into our quantum neural network (QNN). That is, multiply a unitary matrix by a vector to get the processed quantum state $|\\varphi\\rangle$\n",
"# Simulation of building a quantum neural network\n",
"# Simulation of building a quantum neural network\n",
"def U_theta(theta, n, depth):\n",
"def cir_Classifier(theta, n, depth): \n",
" \"\"\"\n",
" \"\"\"\n",
" :param theta: dim: [n, depth + 3]\n",
" :param theta: dim: [n, depth + 3], \"+3\" because we add an initial generalized rotation gate to each qubit\n",
" :param n: number of qubits\n",
" :param n: number of qubits\n",
" :param depth: circuit depth\n",
" :param depth: circuit depth\n",
" :return: U_theta\n",
" :return: U_theta\n",
...
@@ -529,7 +582,7 @@
...
@@ -529,7 +582,7 @@
" # Initialize the network\n",
" # Initialize the network\n",
" cir = UAnsatz(n)\n",
" cir = UAnsatz(n)\n",
" \n",
" \n",
" # Build a rotation layer\n",
" # Build a generalized rotation layer\n",
" for i in range(n):\n",
" for i in range(n):\n",
" cir.rz(theta[i][0], i)\n",
" cir.rz(theta[i][0], i)\n",
" cir.ry(theta[i][1], i)\n",
" cir.ry(theta[i][1], i)\n",
"        cir.rz(theta[i][2], i)\n",
"\n",
" # The default depth is depth = 1\n",
" # The default depth is depth = 1\n",
" # Build the entangleed layer and Ry rotation layer\n",
" # Build the entangleed layer and Ry rotation layer\n",
" for d in range(3, depth + 3):\n",
" for d in range(3, depth + 3):\n",
" for i in range(n - 1):\n",
" # The entanglement layer\n",
" for i in range(n-1):\n",
" cir.cnot([i, i + 1])\n",
" cir.cnot([i, i + 1])\n",
" cir.cnot([n - 1, 0])\n",
" cir.cnot([n-1, 0])\n",
" # Add Ry to each qubit\n",
" for i in range(n):\n",
" for i in range(n):\n",
" cir.ry(theta[i][d], i)\n",
" cir.ry(theta[i][d], i)\n",
"\n",
"\n",
" return cir"
" return cir\n"
]
},
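{
"cell_type": "markdown",
"source": [
"As a quick sanity check, the circuit can be instantiated and run once, as sketched below; the random initialization is an assumption for illustration.\n"
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"\n",
"# Randomly initialize the [N, DEPTH + 3] angle table and build the circuit once\n",
"theta_test = paddle.to_tensor(np.random.uniform(0.0, 2 * np.pi, size=(N, DEPTH + 3)))\n",
"cir_test = cir_Classifier(theta_test, N, DEPTH)\n",
"\n",
"# Run it on the default all-zero input state and inspect the output dimension\n",
"state = cir_test.run_state_vector()\n",
"print(state.numpy().shape)"
]
},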
{
"cell_type": "markdown",
"cell_type": "markdown",
"metadata": {},
"metadata": {},
"source": [
"source": [
"### Measurement and loss function\n",
"### Measurement\n",
"After the initial quantum state, $|\\psi\\rangle$ has been processed with QNN on the quantum computer (QPU), we need to measure this new quantum state $|\\varphi\\rangle$ to obtain the classical information. These processed classical information can be used to calculate the loss function $\\mathcal{L}(\\boldsymbol{\\theta})$. Finally, we use the classical computer (CPU) to continuously update the QNN parameters $\\boldsymbol{\\theta}$ and optimize the loss function. Here we measure the expected value of the Pauli $Z$ operator on the first qubit. Specifically,\n",
"After passing through the PQC $U(\\theta)$, the quantum data becomes $\\lvert \\psi_{\\rm out}\\rangle^k = U(\\theta)\\lvert \\psi_{\\rm in} \\rangle^k$. To get its label, we need to measure this new quantum state to obtain the classical information. These processed classical information will then be used to calculate the loss function $\\mathcal{L}(\\boldsymbol{\\theta})$. Finally, based on the gradient descent algorithm, we continuously update the PQC parameters $\\boldsymbol{\\theta}$ and optimize the loss function. \n",
"\n",
"Here we measure the expected value of the Pauli $Z$ operator on the first qubit. Specifically,\n",
"This measurement result seems to be our original label 1. Does this mean that we have successfully classified this data point? This is not the case because the range of $\\langle Z \\rangle$ is usually between $[-1,1]$. To match it to our label range $y^{(i)} \\in \\{0,1\\}$, we need to map the upper and lower limits. The simplest mapping is \n",
"This measurement result seems to be our original label 1. Does this mean that we have successfully classified this data point? This is not the case because the range of $\\langle Z \\rangle$ is usually between $[-1,1]$. \n",
"To match it to our label range $y^{k} \\in \\{0,1\\}$, we need to map the upper and lower limits. The simplest mapping is \n",
"Using bias is a trick in machine learning. The purpose is to make the decision boundary not restricted by the origin or some hyperplane. Generally, the default bias is initialized to be 0, and the optimizer will continuously update it like all the other parameters $\\theta$ in the iterative process to ensure $\\tilde{y}^{k} \\in [0, 1]$. Of course, you can also choose other complex mappings (activation functions), such as the sigmoid function. After mapping, we can regard $\\tilde{y}^{k}$ as the label we estimated. $\\tilde{y}^{k}< 0.5$ corresponds to label 0, and $\\tilde{y}^{k}> 0.5$ corresponds to label 1. It's time to quickly review the whole process before we finish discussion,\n",
"Using bias is a trick in machine learning. The purpose is to make the decision boundary not restricted by the origin or some hyperplane. Generally, the default bias is initialized to be 0, and the optimizer will continuously update it like all the other parameters $\\theta$ in the iterative process to ensure $\\tilde{y}^{k} \\in [0, 1]$. Of course, you can also choose other complex mappings (activation functions), such as the sigmoid function. After mapping, we can regard $\\tilde{y}^{k}$ as the label we estimated. $\\tilde{y}^{k}< 0.5$ corresponds to label 0, and $\\tilde{y}^{k}> 0.5$ corresponds to label 1. It's time to quickly review the whole process before we finish discussion,\n",
"\\rightarrow \\langle Z \\rangle \\rightarrow \\tilde{y}^{(i)}. \\tag{16}\n",
"\\rightarrow \\langle Z \\rangle \\rightarrow \\tilde{y}^{k}.\\tag{16}\n",
"$$\n",
"$$\n",
"\n",
"\n",
"Finally, we can define the loss function as a square loss function:\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loss function\n",
"To calculate the loss function in Eq. (1), we need to measure all training data in each iteration. In real practice, we devide the training data into \"Ntrain/BATCH\" groups, where each group contains \"BATCH\" data pairs.\n",
"The main program finished running in 23.123697519302368 seconds.\n"
"The main program finished running in 7.169757127761841 seconds.\n"
]
]
}
}
],
],
...
@@ -901,10 +985,11 @@
...
@@ -901,10 +985,11 @@
" Ntest = 100, # Specify the test set size\n",
" Ntest = 100, # Specify the test set size\n",
" gap = 0.5, # Set the width of the decision boundary\n",
" gap = 0.5, # Set the width of the decision boundary\n",
" N = 4, # Number of qubits required\n",
" N = 4, # Number of qubits required\n",
" D = 1, # Circuit depth\n",
" DEPTH = 1, # Circuit depth\n",
" EPOCH = 4, # Number of training epochs\n",
" BATCH = 20, # Batch size during training\n",
" EPOCH = int(200 * BATCH / Ntrain),\n",
" # Number of training epochs, the total iteration number \"EPOCH * (Ntrain / BATCH)\" is chosen to be about 200\n",
" LR = 0.01, # Set the learning rate\n",
" LR = 0.01, # Set the learning rate\n",
" BATCH = 1, # Batch size during training\n",
" seed_paras = 19, # Set random seed to initialize various parameters\n",
" seed_paras = 19, # Set random seed to initialize various parameters\n",
" seed_data = 2, # Fixed random seed required to generate the data set\n",
" seed_data = 2, # Fixed random seed required to generate the data set\n",
" )\n",
" )\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_______\n",
"\n",
"[1] Mitarai, Kosuke, et al. Quantum circuit learning. [Physical Review A 98.3 (2018): 032309.](https://arxiv.org/abs/1803.00745)\n",
"\n",
"[2] Farhi, Edward, and Hartmut Neven. Classification with quantum neural networks on near term processors. [arXiv preprint arXiv:1802.06002 (2018).](https://arxiv.org/abs/1802.06002)\n",
"[2] Farhi, Edward, and Hartmut Neven. Classification with quantum neural networks on near term processors. [arXiv preprint arXiv:1802.06002 (2018).](https://arxiv.org/abs/1802.06002)\n",
"\n",
"\n",
"[3] [Schuld, Maria, et al. Circuit-centric quantum classifiers. [Physical Review A 101.3 (2020): 032308.](https://arxiv.org/abs/1804.00633)\n"
"[3] Schuld, Maria, et al. Circuit-centric quantum classifiers. [Physical Review A 101.3 (2020): 032308.](https://arxiv.org/abs/1804.00633)\n"