Commit fbcf1296 authored by Q Quleaf

update to v2.1.0

Parent af346570
......@@ -18,7 +18,7 @@ English | [简体中文](README_CN.md)
- [Copyright and License](#copyright-and-license)
- [References](#references)
[Paddle Quantum (量桨)](https://qml.baidu.com/) is a quantum machine learning (QML) toolkit developed based on Baidu PaddlePaddle. It provides a platform to construct and train quantum neural networks (QNNs) with easy-to-use QML development kits suporting combinatorial optimization, quantum chemistry and other cutting-edge quantum applications, making PaddlePaddle the first and only deep learning framework in China that supports quantum machine learning.
[Paddle Quantum (量桨)](https://qml.baidu.com/) is a quantum machine learning (QML) toolkit developed based on Baidu PaddlePaddle. It provides a platform to construct and train quantum neural networks (QNNs) with easy-to-use QML development kits supporting combinatorial optimization, quantum chemistry and other cutting-edge quantum applications, making PaddlePaddle the first and only deep learning framework in China that supports quantum machine learning.
<p align="center">
<a href="https://qml.baidu.com/">
......@@ -33,7 +33,7 @@ English | [简体中文](README_CN.md)
</a>
<!-- PyPI -->
<a href="https://pypi.org/project/paddle-quantum/">
<img src="https://img.shields.io/badge/pypi-v2.0.1-orange.svg?style=flat-square&logo=pypi"/>
<img src="https://img.shields.io/badge/pypi-v2.1.0-orange.svg?style=flat-square&logo=pypi"/>
</a>
<!-- Python -->
<a href="https://www.python.org/">
......@@ -55,7 +55,7 @@ Paddle Quantum aims at establishing a bridge between artificial intelligence (AI
## Features
- Easy-to-use
- Many online learning resources (17+ tutorials)
- Many online learning resources (23+ tutorials)
- High efficiency in building QNN with various QNN templates
- Automatic differentiation
- Versatile
......@@ -66,7 +66,6 @@ Paddle Quantum aims at establishing a bridge between artificial intelligence (AI
- Toolboxes for Chemistry & Optimization
- LOCCNet for distributed quantum information processing
- Self-developed QML algorithms
## Install
......@@ -110,7 +109,7 @@ cd paddle_quantum/QAOA/example
python main.py
```
> For the introduction of QAOA, please refer to our [QAOA tutorial](https://github.com/PaddlePaddle/Quantum/tree/master/tutorial/QAOA).
> For the introduction of QAOA, please refer to our [QAOA tutorial](https://github.com/PaddlePaddle/Quantum/tree/master/tutorial/combinatorial_optimization/QAOA_EN.ipynb).
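As a quick sketch in Python (a minimal example assuming the `find_cut` helper in `paddle_quantum.QAOA.maxcut`; the 4-vertex ring graph below is only illustrative), the same Max-Cut workflow can be driven programmatically:

```python
import networkx as nx
from paddle_quantum.QAOA.maxcut import find_cut

# Approximate Max-Cut on a 4-vertex ring with a depth-4 QAOA circuit,
# optimized for 120 Adam iterations at learning rate 0.1.
G = nx.cycle_graph(4)
cut_bitstring, prob_measure = find_cut(G, p=4, ITR=120, LR=0.1, print_loss=True)
print("The bit string form of the cut found:", cut_bitstring)
```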
## Introduction and developments
......@@ -126,27 +125,42 @@ python main.py
### Tutorials
We provide tutorials covering combinatorial optimization, quantum chemistry, quantum classification, quantum entanglement manipulation and other popular QML research topics. Each tutorial currently supports reading on our website and running Jupyter Notebooks locally. For interested developers, we recommend them to download Jupyter Notebooks and play around with it. Here is the tutorial list,
1. [Quantum Approximation Optimization Algorithm (QAOA)](./tutorial/QAOA)
2. [Variational Quantum Eigensolver (VQE)](./tutorial/VQE)
3. [Quantum Classifier](./tutorial/Q-Classifier)
4. [The Barren Plateaus Phenomenon on Quantum Neural Networks (Barren Plateaus)](./tutorial/Barren)
5. [Quantum Autoencoder](./tutorial/Q-Autoencoder)
6. [Quantum GAN](./tutorial/Q-GAN)
7. [Subspace Search-Quantum Variational Quantum Eigensolver (SSVQE)](./tutorial/SSVQE)
8. [Variational Quantum State Diagonalization (VQSD)](./tutorial/VQSD)
9. [Gibbs State Preparation](./tutorial/Gibbs)
10. [Variational Quantum Singular Value Decomposition (VQSVD)](./tutorial/VQSVD)
11. [Local Operations and Classical Communication in QNN (LOCCNet)](./tutorial/LOCC/LOCCNET_Tutorial_EN.ipynb)
12. [Entanglement Distillation -- the BBPSSW protocol](./tutorial/LOCC)
13. [Entanglement Distillation -- the DEJMPS protocol](./tutorial/LOCC)
14. [Entanglement Distillation -- Protocol design with LOCCNet](./tutorial/LOCC)
15. [Quantum Teleportation](./tutorial/LOCC)
16. [Quantum State Discrimination](./tutorial/LOCC)
17. [Noise Model and Quantum Channel](./tutorial/Noise)
With the latest LOCCNet module, Paddle Quantum can efficiently simulate distributed quantum information processing tasks. Interested readers can start with this [tutorial on LOCCNet](./tutorial/LOCC/LOCCNET_Tutorial_EN.ipynb). In addition, Paddle Quantum supports QNN training on GPU. For users who want to get into more details, please check out the tutorial [Use Paddle Quantum on GPU](./introduction/PaddleQuantum_GPU_EN.ipynb). Moreover, Paddle Quantum could design robust quantum algorithms under noise. For more information, please see [Noise tutorial](./tutorial/Noise/Noise_EN.ipynb).
We provide tutorials covering quantum simulation, machine learning, combinatorial optimization, local operations and classical communication (LOCC), and other popular QML research topics. Each tutorial currently supports reading on our website and running Jupyter Notebooks locally. For interested developers, we recommend downloading the Jupyter Notebooks and experimenting with them. Here is the tutorial list:
- [Quantum Simulation](./tutorial/quantum_simulation)
1. [Variational Quantum Eigensolver (VQE)](./tutorial/quantum_simulation/VQE_EN.ipynb)
2. [Subspace-Search Variational Quantum Eigensolver (SSVQE)](./tutorial/quantum_simulation/SSVQE_EN.ipynb)
3. [Variational Quantum State Diagonalization (VQSD)](./tutorial/quantum_simulation/VQSD_EN.ipynb)
4. [Gibbs State Preparation](./tutorial/quantum_simulation/GibbsState_EN.ipynb)
- [Machine Learning](./tutorial/machine_learning)
1. [Encoding Classical Data into Quantum States](./tutorial/machine_learning/DataEncoding_EN.ipynb)
2. [Quantum Classifier](./tutorial/machine_learning/QClassifier_EN.ipynb)
3. [Variational Shadow Quantum Learning (VSQL)](./tutorial/machine_learning/VSQL_EN.ipynb)
4. [Quantum Kernel Methods](./tutorial/machine_learning/QKernel_EN.ipynb)
5. [Quantum Autoencoder](./tutorial/machine_learning/QAutoencoder_EN.ipynb)
6. [Quantum GAN](./tutorial/machine_learning/QGAN_EN.ipynb)
7. [Variational Quantum Singular Value Decomposition (VQSVD)](./tutorial/machine_learning/VQSVD_EN.ipynb)
- [Combinatorial Optimization](./tutorial/combinatorial_optimization)
1. [Quantum Approximate Optimization Algorithm (QAOA)](./tutorial/combinatorial_optimization/QAOA_EN.ipynb)
2. [Solving Max-Cut Problem with QAOA](./tutorial/combinatorial_optimization/MAXCUT_EN.ipynb)
3. [Large-scale QAOA via Divide-and-Conquer](./tutorial/combinatorial_optimization/DC-QAOA_EN.ipynb)
4. [Travelling Salesman Problem](./tutorial/combinatorial_optimization/TSP_EN.ipynb)
- [LOCC with QNN (LOCCNet)](./tutorial/locc)
1. [Local Operations and Classical Communication in QNN (LOCCNet)](./tutorial/locc/LOCCNET_Tutorial_EN.ipynb)
2. [Entanglement Distillation -- the BBPSSW protocol](./tutorial/locc/EntanglementDistillation_BBPSSW_EN.ipynb)
3. [Entanglement Distillation -- the DEJMPS protocol](./tutorial/locc/EntanglementDistillation_DEJMPS_EN.ipynb)
4. [Entanglement Distillation -- Protocol design with LOCCNet](./tutorial/locc/EntanglementDistillation_LOCCNET_EN.ipynb)
5. [Quantum Teleportation](./tutorial/locc/QuantumTeleportation_EN.ipynb)
6. [Quantum State Discrimination](./tutorial/locc/StateDiscrimination_EN.ipynb)
- [QNN Research](./tutorial/qnn_research)
1. [The Barren Plateaus Phenomenon on Quantum Neural Networks (Barren Plateaus)](./tutorial/qnn_research/BarrenPlateaus_EN.ipynb)
2. [Noise Model and Quantum Channel](./tutorial/qnn_research/Noise_EN.ipynb)
With the latest LOCCNet module, Paddle Quantum can efficiently simulate distributed quantum information processing tasks. Interested readers can start with this [tutorial on LOCCNet](./tutorial/locc/LOCCNET_Tutorial_EN.ipynb). In addition, Paddle Quantum supports QNN training on GPU. For users who want more details, please check out the tutorial [Use Paddle Quantum on GPU](./introduction/PaddleQuantum_GPU_EN.ipynb). Moreover, Paddle Quantum can be used to design robust quantum algorithms under noise. For more information, please see the [Noise tutorial](./tutorial/qnn_research/Noise_EN.ipynb).
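As a minimal sketch (assuming a CUDA-enabled PaddlePaddle build; `paddle.set_device` and `paddle.is_compiled_with_cuda` are standard PaddlePaddle APIs), switching a Paddle Quantum training script to GPU usually only requires selecting the device before building the circuit and optimizer:

```python
import paddle

# Use the first GPU when available, otherwise fall back to CPU;
# tensors and circuits created afterwards run on the selected device.
paddle.set_device('gpu:0' if paddle.is_compiled_with_cuda() else 'cpu')
```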
### API documentation
......@@ -169,17 +183,19 @@ We also highly encourage developers to use Paddle Quantum as a research tool to
So far, we have done several projects with the help of Paddle Quantum as a powerful QML development platform.
[1] Wang, Y., Li, G. & Wang, X. Variational quantum Gibbs state preparation with a truncated Taylor series. arXiv:2005.08797 (2020). [[pdf](https://arxiv.org/pdf/2005.08797.pdf)]
[1] Wang, Youle, Guangxi Li, and Xin Wang. "Variational quantum Gibbs state preparation with a truncated Taylor series." arXiv preprint arXiv:2005.08797 (2020). [[pdf](https://arxiv.org/pdf/2005.08797.pdf)]
[2] Wang, X., Song, Z. & Wang, Y. Variational Quantum Singular Value Decomposition. arXiv:2006.02336 (2020). [[pdf](https://arxiv.org/pdf/2006.02336.pdf)]
[2] Wang, Xin, Zhixin Song, and Youle Wang. "Variational Quantum Singular Value Decomposition." arXiv preprint arXiv:2006.02336 (2020). [[pdf](https://arxiv.org/pdf/2006.02336.pdf)]
[3] Li, G., Song, Z. & Wang, X. VSQL: Variational Shadow Quantum Learning for Classification. arXiv:2012.08288 (2020). [[pdf]](https://arxiv.org/pdf/2012.08288.pdf), to appear at **AAAI 2021** conference.
[3] Li, Guangxi, Zhixin Song, and Xin Wang. "VSQL: Variational Shadow Quantum Learning for Classification." arXiv preprint arXiv:2012.08288 (2020). [[pdf]](https://arxiv.org/pdf/2012.08288.pdf), to appear at the **AAAI 2021** conference.
[4] Chen, R., Song, Z., Zhao, X. & Wang, X. Variational Quantum Algorithms for Trace Distance and Fidelity Estimation. arXiv:2012.05768 (2020). [[pdf]](https://arxiv.org/pdf/2012.05768.pdf)
[4] Chen, Ranyiliu, et al. "Variational Quantum Algorithms for Trace Distance and Fidelity Estimation." arXiv preprint arXiv:2012.05768 (2020). [[pdf]](https://arxiv.org/pdf/2012.05768.pdf)
[5] Wang, K., Song, Z., Zhao, X., Wang Z. & Wang, X. Detecting and quantifying entanglement on near-term quantum devices. arXiv:2012.14311 (2020). [[pdf]](https://arxiv.org/pdf/2012.14311.pdf)
[5] Wang, Kun, et al. "Detecting and quantifying entanglement on near-term quantum devices." arXiv preprint arXiv:2012.14311 (2020). [[pdf]](https://arxiv.org/pdf/2012.14311.pdf)
[6] Zhao, X., Zhao, B., Wang, Z., Song, Z., & Wang, X. LOCCNet: a machine learning framework for distributed quantum information processing. arXiv:2101.12190 (2021). [[pdf]](https://arxiv.org/pdf/2101.12190.pdf)
[6] Zhao, Xuanqiang, et al. "LOCCNet: a machine learning framework for distributed quantum information processing." arXiv preprint arXiv:2101.12190 (2021). [[pdf]](https://arxiv.org/pdf/2101.12190.pdf)
[7] Cao, Chenfeng, and Xin Wang. "Noise-Assisted Quantum Autoencoder." Physical Review Applied 15.5 (2021): 054012. [[pdf]](https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.15.054012)
## Frequently Asked Questions
......
......@@ -34,7 +34,7 @@
</a>
<!-- PyPI -->
<a href="https://pypi.org/project/paddle-quantum/">
<img src="https://img.shields.io/badge/pypi-v2.0.1-orange.svg?style=flat-square&logo=pypi"/>
<img src="https://img.shields.io/badge/pypi-v2.1.0-orange.svg?style=flat-square&logo=pypi"/>
</a>
<!-- Python -->
<a href="https://www.python.org/">
......@@ -55,7 +55,7 @@
## 特色
- 轻松上手
- 丰富的在线学习资源(17+ 教程案例)
- 丰富的在线学习资源(23+ 教程案例)
- 通过模板高效搭建量子神经网络
- 自动微分框架
- 功能丰富
......@@ -109,7 +109,7 @@ cd paddle_quantum/QAOA/example
python main.py
```
> 关于 QAOA 的介绍可以参考我们的 [QAOA 教程](./tutorial/QAOA)。
> 关于 QAOA 的介绍可以参考我们的 [QAOA 教程](./tutorial/combinatorial_optimization/QAOA_CN.ipynb)。
## 入门与开发
......@@ -131,27 +131,42 @@ python main.py
Paddle Quantum(量桨)建立起了人工智能与量子计算的桥梁,为量子机器学习领域的研发提供强有力的支撑,也提供了丰富的案例供开发者学习。
在这里,我们提供了涵盖量子优化、量子化学、量子机器学习、量子纠缠处理等多个领域的案例供大家学习。每个教程目前支持网页阅览和运行 Jupyter Notebook 两种方式。我们推荐用户下载 Notebook 后,本地运行进行实践。
- [量子近似优化算法 (QAOA)](./tutorial/QAOA)
- [变分量子特征求解器 (VQE)](./tutorial/VQE)
- [量子分类器 (Quantum Classifier)](./tutorial/Q-Classifier)
- [量子神经网络的贫瘠高原效应 (Barren Plateaus)](./tutorial/Barren)
- [量子变分自编码器 (Quantum Autoencoder)](./tutorial/Q-Autoencoder)
- [量子生成对抗网络 (Quantum GAN)](./tutorial/Q-GAN)
- [子空间搜索 - 量子变分特征求解器 (SSVQE)](./tutorial/SSVQE)
- [变分量子态对角化算法 (VQSD)](./tutorial/VQSD)
- [吉布斯态的制备 (Gibbs State Preparation)](./tutorial/Gibbs)
- [变分量子奇异值分解 (VQSVD)](./tutorial/VQSVD)
- [LOCC 量子神经网络](./tutorial/LOCC/LOCCNET_Tutorial_CN.ipynb)
- [纠缠蒸馏 -- BBPSSW 协议](./tutorial/LOCC)
- [纠缠蒸馏 -- DEJMPS 协议](./tutorial/LOCC)
- [纠缠蒸馏 -- LOCCNet 设计协议](./tutorial/LOCC)
- [量子隐态传输](./tutorial/LOCC)
- [量子态分辨](./tutorial/LOCC)
- [噪声模型与量子信道](./tutorial/Noise)
随着 LOCCNet 模组的推出,量桨现已支持分布式量子信息处理任务的高效模拟和开发。感兴趣的读者请参见[教程](./tutorial/LOCC/LOCCNET_Tutorial_CN.ipynb)。Paddle Quantum 也支持在 GPU 上进行量子机器学习的训练,具体的方法请参考案例:[在 GPU 上使用 Paddle Quantum](./introduction/PaddleQuantum_GPU_CN.ipynb)。此外,量桨可以基于噪声模块进行含噪算法的开发以及研究,详情请见[噪声模块教程](./tutorial/Noise/Noise_CN.ipynb)
在这里,我们提供了涵盖量子模拟、机器学习、组合优化、本地操作与经典通讯(local operations and classical communication, LOCC)、量子神经网络等多个领域的案例供大家学习。每个教程目前支持网页阅览和运行 Jupyter Notebook 两种方式。我们推荐用户下载 Notebook 后,本地运行进行实践。
- [量子模拟](./tutorial/quantum_simulation)
1. [变分量子特征求解器(VQE)](./tutorial/quantum_simulation/VQE_CN.ipynb)
2. [子空间搜索 - 量子变分特征求解器(SSVQE)](./tutorial/quantum_simulation/SSVQE_CN.ipynb)
3. [变分量子态对角化算法(VQSD)](./tutorial/quantum_simulation/VQSD_CN.ipynb)
4. [吉布斯态的制备(Gibbs State Preparation)](./tutorial/quantum_simulation/GibbsState_CN.ipynb)
- [机器学习](./tutorial/machine_learning)
1. [量子态编码经典数据](./tutorial/machine_learning/DataEncoding_CN.ipynb)
2. [量子分类器(Quantum Classifier)](./tutorial/machine_learning/QClassifier_CN.ipynb)
3. [变分影子量子学习(VSQL)](./tutorial/machine_learning/VSQL_CN.ipynb)
4. [量子核方法(Quantum Kernel)](./tutorial/machine_learning/QKernel_CN.ipynb)
5. [量子变分自编码器(Quantum Autoencoder)](./tutorial/machine_learning/QAutoencoder_CN.ipynb)
6. [量子生成对抗网络(Quantum GAN)](./tutorial/machine_learning/QGAN_CN.ipynb)
7. [变分量子奇异值分解(VQSVD)](./tutorial/machine_learning/VQSVD_CN.ipynb)
- [组合优化](./tutorial/combinatorial_optimization)
1. [量子近似优化算法(QAOA)](./tutorial/combinatorial_optimization/QAOA_CN.ipynb)
2. [QAOA 求解最大割问题](./tutorial/combinatorial_optimization/MAXCUT_CN.ipynb)
3. [大规模量子近似优化分治算法(DC-QAOA)](./tutorial/combinatorial_optimization/DC-QAOA_CN.ipynb)
4. [旅行商问题](./tutorial/combinatorial_optimization/TSP_CN.ipynb)
- [LOCC 量子神经网络(LOCCNet)](./tutorial/locc)
1. [LOCC 量子神经网络](./tutorial/locc/LOCCNET_Tutorial_CN.ipynb)
2. [纠缠蒸馏 -- BBPSSW 协议](./tutorial/locc/EntanglementDistillation_BBPSSW_CN.ipynb)
3. [纠缠蒸馏 -- DEJMPS 协议](./tutorial/locc/EntanglementDistillation_DEJMPS_CN.ipynb)
4. [纠缠蒸馏 -- LOCCNet 设计协议](./tutorial/locc/EntanglementDistillation_LOCCNET_CN.ipynb)
5. [量子隐态传输](./tutorial/locc/QuantumTeleportation_CN.ipynb)
6. [量子态分辨](./tutorial/locc/StateDiscrimination_CN.ipynb)
- [量子神经网络研究](./tutorial/qnn_research)
1. [量子神经网络的贫瘠高原效应(Barren Plateaus)](./tutorial/qnn_research/BarrenPlateaus_CN.ipynb)
2. [噪声模型与量子信道](./tutorial/qnn_research/Noise_CN.ipynb)
随着 LOCCNet 模组的推出,量桨现已支持分布式量子信息处理任务的高效模拟和开发。感兴趣的读者请参见[教程](./tutorial/locc/LOCCNET_Tutorial_CN.ipynb)。Paddle Quantum 也支持在 GPU 上进行量子机器学习的训练,具体的方法请参考案例:[在 GPU 上使用 Paddle Quantum](./introduction/PaddleQuantum_GPU_CN.ipynb)。此外,量桨可以基于噪声模块进行含噪算法的开发以及研究,详情请见[噪声模块教程](./tutorial/qnn_research/Noise_CN.ipynb)
### API 文档
......@@ -169,19 +184,28 @@ Paddle Quantum 使用 setuptools 的 develop 模式进行安装,相关代码
## 使用 Paddle Quantum 的工作
我们非常欢迎开发者使用 Paddle Quantum 进行量子机器学习的研发,如果您的工作有使用 Paddle Quantum,也非常欢迎联系我们。目前使用 Paddle Quantum 的代表性工作包括了吉布斯态的制备和变分量子奇异值分解
我们非常欢迎开发者使用 Paddle Quantum 进行量子机器学习的研发,如果您的工作有使用 Paddle Quantum,也非常欢迎联系我们。以下为 BibTeX 的引用方式
[1] Wang, Y., Li, G. & Wang, X. Variational quantum Gibbs state preparation with a truncated Taylor series. arXiv:2005.08797 (2020). [[pdf](https://arxiv.org/pdf/2005.08797.pdf)]
> @misc{Paddlequantum,
> title = {{Paddle Quantum}},
> year = {2020},
> url = {https://github.com/PaddlePaddle/Quantum},
> }
[2] Wang, X., Song, Z. & Wang, Y. Variational Quantum Singular Value Decomposition. arXiv:2006.02336 (2020). [[pdf](https://arxiv.org/pdf/2006.02336.pdf)]
目前使用 Paddle Quantum 的代表性工作包括了吉布斯态的制备和变分量子奇异值分解:
[3] Li, G., Song, Z. & Wang, X. VSQL: Variational Shadow Quantum Learning for Classification. arXiv:2012.08288 (2020). [[pdf]](https://arxiv.org/pdf/2012.08288.pdf), to appear at **AAAI 2021** conference.
[1] Wang, Youle, Guangxi Li, and Xin Wang. "Variational quantum Gibbs state preparation with a truncated Taylor series." arXiv preprint arXiv:2005.08797 (2020). [[pdf](https://arxiv.org/pdf/2005.08797.pdf)]
[4] Chen, R., et al. Variational Quantum Algorithms for Trace Distance and Fidelity Estimation. arXiv:2012.05768 (2020). [[pdf]](https://arxiv.org/pdf/2012.05768.pdf)
[2] Wang, Xin, Zhixin Song, and Youle Wang. "Variational Quantum Singular Value Decomposition." arXiv preprint arXiv:2006.02336 (2020). [[pdf](https://arxiv.org/pdf/2006.02336.pdf)]
[5] Wang, K., et al. Detecting and quantifying entanglement on near-term quantum devices. arXiv:2012.14311 (2020). [[pdf]](https://arxiv.org/pdf/2012.14311.pdf)
[3] Li, Guangxi, Zhixin Song, and Xin Wang. "VSQL: Variational Shadow Quantum Learning for Classification." arXiv preprint arXiv:2012.08288 (2020). [[pdf]](https://arxiv.org/pdf/2012.08288.pdf), to appear at the **AAAI 2021** conference.
[6] Zhao, X., Zhao, B., Wang, Z., Song, Z., & Wang, X. LOCCNet: a machine learning framework for distributed quantum information processing. arXiv:2101.12190 (2021). [[pdf]](https://arxiv.org/pdf/2101.12190.pdf)
[4] Chen, Ranyiliu, et al. "Variational Quantum Algorithms for Trace Distance and Fidelity Estimation." arXiv preprint arXiv:2012.05768 (2020). [[pdf]](https://arxiv.org/pdf/2012.05768.pdf)
[5] Wang, Kun, et al. "Detecting and quantifying entanglement on near-term quantum devices." arXiv preprint arXiv:2012.14311 (2020). [[pdf]](https://arxiv.org/pdf/2012.14311.pdf)
[6] Zhao, Xuanqiang, et al. "LOCCNet: a machine learning framework for distributed quantum information processing." arXiv preprint arXiv:2101.12190 (2021). [[pdf]](https://arxiv.org/pdf/2101.12190.pdf)
[7] Cao, Chenfeng, and Xin Wang. "Noise-Assisted Quantum Autoencoder." Physical Review Applied 15.5 (2021): 054012. [[pdf]](https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.15.054012)
## FAQ
......
......@@ -6,7 +6,6 @@
"source": [
"# 在 GPU 上使用量桨\n",
"\n",
"\n",
"<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>"
]
},
......@@ -138,7 +137,7 @@
"source": [
"我们可以在命令行中输入`nvidia-smi`来查看 GPU 的使用情况,包括有哪些程序在哪些 GPU 上运行,以及其显存占用情况。\n",
"\n",
"这里,我们以 [VQE](https://github.com/PaddlePaddle/Quantum/blob/master/tutorial/VQE) 为例来说明我们该如何使用 GPU。首先,导入相关的包并定义相关的变量和函数。"
"这里,我们以 [VQE](../tutorial/quantum_simulation/VQE_CN.ipynb) 为例来说明我们该如何使用 GPU。首先,导入相关的包并定义相关的变量和函数。"
]
},
{
......@@ -160,7 +159,7 @@
"\n",
"\n",
"def H2_generator():\n",
" \n",
"\n",
" H = [\n",
" [-0.04207897647782277, 'i0'],\n",
" [0.17771287465139946, 'z0'],\n",
......@@ -399,7 +398,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
"version": "3.8.3"
},
"toc": {
"base_numbering": 1,
......
......@@ -17,7 +17,7 @@
"\n",
"> Note that this tutorial is time-sensitive. And different computers will have individual differences. This tutorial does not guarantee that all computers can install it successfully.\n",
"\n",
"In deep learning, people usually use GPU for neural network model training because GPU has significant advantages in floating-point operations compared with CPU. Therefore, using GPU to train neural network models has gradually become a common choice. In Paddle Quantum, our quantum states and quantum gates are also represented by complex numbers based on floating-point numbers. If our model can be deployed on GPU for training, it will also significantly increase the training speed.\n"
"In deep learning, people usually use GPU for neural network model training because GPU has significant advantages in floating-point operations compared with CPU. Therefore, using GPU to train neural network models has gradually become a common choice. In Paddle Quantum, our quantum states and quantum gates are also represented by complex numbers based on floating-point numbers. If our model can be deployed on GPU for training, it will also significantly increase the training speed."
]
},
{
......@@ -142,7 +142,7 @@
"source": [
"We can enter `nvidia-smi` in the command line to view the usage of the GPU, including which programs are running on which GPUs, and its memory usage.\n",
"\n",
"Here, we take [VQE](https://github.com/PaddlePaddle/Quantum/blob/master/tutorial/VQE) as an example to illustrate how we should use GPU. First, import the related packages and define some variables and functions."
"Here, we take [VQE](../tutorial/quantum_simulation/VQE_EN.ipynb) as an example to illustrate how we should use GPU. First, import the related packages and define some variables and functions."
]
},
{
......@@ -405,7 +405,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
"version": "3.8.3"
},
"toc": {
"base_numbering": 1,
......
......@@ -114,8 +114,7 @@
"\n",
"The row vector (bra) $\\langle0| = |0\\rangle^\\dagger$ is the conjugate transpose of the column vector (ket) $|0\\rangle$.\n",
"\n",
"**Note:** For more information, please refer to [Wikipedia](https://en.wikipedia.org/wiki/Qubit).\n",
"\n"
"**Note:** For more information, please refer to [Wikipedia](https://en.wikipedia.org/wiki/Qubit)."
]
},
{
......@@ -198,7 +197,7 @@
"\n",
"We can expand the idea of single-qubit gates to multi-qubit. There are two ways to realize this expansion. The first is to apply single-qubit gates on selected qubits, while the other qubits are not operated. The figure below gives a concrete example:\n",
"\n",
"![intro-fig-hadamard](./figures/intro-fig-hadamard.png \"**Figure 2.** Circuit representation and interpretation of two-qubit logic operations. [[Picture source]](https://en.wikipedia.org/wiki/Quantum_logic_gate)\")\n",
"![intro-fig-hadamard](./figures/intro-fig-hadamard.png \"**Figure 2.** Circuit representation and interpretation of two-qubit logic operations. [[Image source]](https://en.wikipedia.org/wiki/Quantum_logic_gate)\")\n",
"\n",
"The quantum gate acting on two-qubit system can be expressed as a $4\\times4$ unitary matrix\n",
"\n",
......@@ -286,9 +285,7 @@
"\\tag{16}\n",
"$$\n",
"\n",
"Therefore, it is not difficult to see that the $X$ gate can be expressed as $R_x(\\pi)$. The following code will generate the $X$ gate:\n",
"\n",
"\n"
"Therefore, it is not difficult to see that the $X$ gate can be expressed as $R_x(\\pi)$. The following code will generate the $X$ gate:"
]
},
{
......@@ -296,8 +293,8 @@
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2021-03-09T03:55:14.848783Z",
"start_time": "2021-03-09T03:55:10.779109Z"
"end_time": "2021-04-30T09:20:08.888161Z",
"start_time": "2021-04-30T09:20:05.709738Z"
}
},
"outputs": [
......@@ -374,8 +371,7 @@
"ExecuteTime": {
"end_time": "2021-03-09T03:55:00.653853Z",
"start_time": "2021-03-09T03:55:00.404797Z"
},
"scrolled": true
}
},
"outputs": [],
"source": [
......@@ -427,11 +423,11 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2021-03-09T03:55:17.952285Z",
"start_time": "2021-03-09T03:55:17.912517Z"
"end_time": "2021-04-30T09:20:13.355621Z",
"start_time": "2021-04-30T09:20:13.331839Z"
}
},
"outputs": [
......@@ -486,8 +482,7 @@
"0 &0 &0 &1 \n",
"\\end{bmatrix}.\n",
"\\tag{19}\n",
"$$\n",
"\n"
"$$"
]
},
{
......@@ -501,11 +496,11 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2021-03-09T03:55:18.945226Z",
"start_time": "2021-03-09T03:55:18.935212Z"
"end_time": "2021-04-30T09:20:15.796777Z",
"start_time": "2021-04-30T09:20:15.786887Z"
}
},
"outputs": [],
......@@ -535,8 +530,41 @@
"source": [
"Answer:\n",
"\n",
"![intro-fig-gate2](./figures/intro-fig-gate2.png)\n",
"\n"
"![intro-fig-gate2](./figures/intro-fig-gate2.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can print your circuit using Paddle Quantum as follows:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2021-04-30T09:20:18.642677Z",
"start_time": "2021-04-30T09:20:18.636895Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--Ry(3.142)----*----Ry(3.142)---------------\n",
" | \n",
"--Ry(3.142)----X--------*--------Ry(3.142)--\n",
" | \n",
"--Ry(3.142)-------------X--------Ry(3.142)--\n",
" \n"
]
}
],
"source": [
"print(cir)"
]
},
{
......@@ -667,8 +695,7 @@
"\n",
"![intro-fig-complex_entangled_layer2](./figures/intro-fig-complex_entangled_layer2.png)\n",
"\n",
"When our task does not involve imaginary numbers, it is more efficient to use the circuit template `real_entangled_layer(theta, DEPTH)` ($R_y$ instead of $U_3$).\n",
"\n"
"When our task does not involve imaginary numbers, it is more efficient to use the circuit template `real_entangled_layer(theta, DEPTH)` ($R_y$ instead of $U_3$)."
]
},
{
......@@ -795,8 +822,7 @@
"\\tag{23}\n",
"$$\n",
"\n",
"Function `cir.run_density_matrix()` will be used in the following code. Here is an example:\n",
"\n"
"Function `cir.run_density_matrix()` will be used in the following code. Here is an example:"
]
},
{
......@@ -1460,7 +1486,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
"version": "3.7.10"
},
"toc": {
"base_numbering": 1,
......@@ -1478,7 +1504,7 @@
"width": "580px"
},
"toc_section_display": true,
"toc_window_display": true
"toc_window_display": false
},
"varInspector": {
"cols": {
......
......@@ -31,7 +31,7 @@ def H_generator():
# Generate Pauli string representing a specific Hamiltonian
H = [[-1.0, 'z0,z1'], [-1.0, 'z1,z2'], [-1.0, 'z0,z2']]
# Generate the marix form of the Hamiltonian
# Generate the matrix form of the Hamiltonian
N_SYS_B = 3 # Number of qubits in subsystem B used to generate Gibbs state
hamiltonian = pauli_str_to_matrix(H, N_SYS_B)
......
......@@ -17,7 +17,6 @@ Paddle_GIBBS
"""
from numpy import pi as PI
import paddle
from paddle import matmul, trace
from paddle_quantum.circuit import UAnsatz
......
......@@ -17,7 +17,6 @@ main
"""
import scipy
import paddle
from numpy import trace as np_trace
from paddle_quantum.utils import pauli_str_to_matrix
from paddle_quantum.GIBBS.Paddle_GIBBS import Paddle_GIBBS
......@@ -27,7 +26,7 @@ def main():
# Generate Pauli string representing a specific Hamiltonian
H = [[-1.0, 'z0,z1'], [-1.0, 'z1,z2'], [-1.0, 'z0,z2']]
# Generate the marix form of the Hamiltonian
# Generate the matrix form of the Hamiltonian
N_SYS_B = 3 # Number of qubits in subsystem B used to generate Gibbs state
hamiltonian = pauli_str_to_matrix(H, N_SYS_B)
......
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Paddle_QAOA: To learn more about the functions and properties of this application,
you could check the corresponding Jupyter notebook under the Tutorial folder.
"""
import paddle
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import pauli_str_to_matrix
from paddle_quantum.QAOA.QAOA_Prefunc import Generate_H_D, Generate_default_graph
from paddle_quantum.QAOA.QAOA_Prefunc import Draw_benchmark
from paddle_quantum.QAOA.QAOA_Prefunc import Draw_cut_graph, Draw_original_graph
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
# Random seed for optimizer
SEED = 1024
__all__ = [
"circuit_QAOA",
"Net",
"Paddle_QAOA",
]
def circuit_QAOA(E, V, n, p, gamma, beta):
"""
This function constructs the parameterized QAOA circuit which is composed of P layers of two blocks:
one block is based on the problem Hamiltonian H which encodes the classical problem,
and the other is constructed from the driving Hamiltonian describing the rotation around Pauli X
acting on each qubit. It outputs the final state of the QAOA circuit.
Args:
E: edges of the graph
V: vertices of the graph
n: number of qubits in the QAOA circuit
p: number of layers of two blocks in the QAOA circuit
gamma: parameter to be optimized in the QAOA circuit, parameter for the first block
beta: parameter to be optimized in the QAOA circuit, parameter for the second block
Returns:
the QAOA circuit
"""
cir = UAnsatz(n)
cir.superposition_layer()
for layer in range(p):
for (u, v) in E:
cir.cnot([u, v])
cir.rz(gamma[layer], v)
cir.cnot([u, v])
for v in V:
cir.rx(beta[layer], v)
return cir
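# Illustrative usage sketch (the values below are hypothetical and not part of
# the training flow): build a depth-1 QAOA circuit for a 4-vertex ring graph.
#   gamma = paddle.uniform([1], dtype='float64', min=0.0, max=2 * np.pi)
#   beta = paddle.uniform([1], dtype='float64', min=0.0, max=2 * np.pi)
#   ring_E = [(0, 1), (1, 2), (2, 3), (0, 3)]
#   demo_cir = circuit_QAOA(ring_E, range(4), n=4, p=1, gamma=gamma, beta=beta)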
class Net(paddle.nn.Layer):
"""
It constructs the net for QAOA which combines the QAOA circuit with the classical optimizer that sets rules
to update parameters described by theta introduced in the QAOA circuit.
"""
def __init__(
self,
p,
dtype="float64",
):
super(Net, self).__init__()
self.p = p
self.gamma = self.create_parameter(shape = [self.p],
default_initializer = paddle.nn.initializer.Uniform(
low = 0.0,
high = 2 * np.pi
),
dtype = dtype,
is_bias = False)
self.beta = self.create_parameter(shape = [self.p],
default_initializer = paddle.nn.initializer.Uniform(
low = 0.0,
high = 2 * np.pi
),
dtype = dtype, is_bias = False)
def forward(self, n, E, V, H_D_list):
cir = circuit_QAOA(E, V, n, self.p, self.gamma, self.beta)
cir.run_state_vector()
loss = -cir.expecval(H_D_list)
return loss, cir
def Paddle_QAOA(n, p, E, V, H_D_list, ITR, LR):
"""
This is the core function to run QAOA.
Args:
n: number of qubits (default value n=4)
p: number of layers of blocks in the QAOA circuit (default value p=4)
E: edges of the graph
V: vertices of the graph
H_D_list: the problem Hamiltonian in Pauli string (list) form
ITR: number of iteration steps for QAOA (default value ITR=120)
LR: learning rate for the gradient-based optimization method (default value LR=0.1)
Returns:
the optimized QAOA circuit
summary_iter
summary_loss
"""
net = Net(p)
opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())
summary_iter, summary_loss = [], []
for itr in range(1, ITR + 1):
loss, cir = net(n, E, V, H_D_list)
loss.backward()
opt.minimize(loss)
opt.clear_grad()
if itr % 10 == 0:
print("iter:", itr, " loss:", "%.4f" % loss.numpy())
summary_loss.append(loss[0][0].numpy())
summary_iter.append(itr)
gamma_opt = net.gamma.numpy()
print("优化后的参数 gamma:\n", gamma_opt)
beta_opt = net.beta.numpy()
print("优化后的参数 beta:\n", beta_opt)
return cir, summary_iter, summary_loss
def main(n = 4, E = None):
paddle.seed(SEED)
p = 4  # number of layers in the circuit
ITR = 120  # number of iterations
LR = 0.1  # learning rate
if E is None:
G, V, E = Generate_default_graph(n)
else:
G = nx.Graph()
V = range(n)
G.add_nodes_from(V)
G.add_edges_from(E)
Draw_original_graph(G)
# construct the Hamiltonian
H_D_list, H_D_matrix = Generate_H_D(E, n)
H_D_diag = np.diag(H_D_matrix).real
H_max = np.max(H_D_diag)
H_min = -H_max
print(H_D_diag)
print('H_max:', H_max, ' H_min:', H_min)
cir, summary_iter, summary_loss = Paddle_QAOA(n, p, E, V, H_D_list, ITR, LR)
H_min = np.ones([len(summary_iter)]) * H_min
Draw_benchmark(summary_iter, summary_loss, H_min)
prob_measure = cir.measure(plot=True)
cut_bitstring = max(prob_measure, key=prob_measure.get)
print("找到的割的比特串形式:", cut_bitstring)
Draw_cut_graph(V, E, G, cut_bitstring)
if __name__ == "__main__":
n = int(input("Please input the number of vertices: "))
user_input_edge_flag = int(input("Please choose if you want to input edges yourself (0 for yes, 1 for no): "))
if user_input_edge_flag == 1:
main(n)
else:
E = []
prompt = "Please input tuples indicating edges (e.g., (0, 1)), input 'z' if finished: "
while True:
edge = input(prompt)
if edge == 'z':
main(n, E)
break
else:
edge = eval(edge)
E.append(edge)
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Auxiliary functions for the QAOA example
"""
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
from paddle_quantum.utils import pauli_str_to_matrix
def Draw_original_graph(G):
"""
This is to draw the original graph
Args:
G: the constructed graph
Returns:
Null
"""
pos = nx.circular_layout(G)
options = {
"with_labels": True,
"font_size": 20,
"font_weight": "bold",
"font_color": "white",
"node_size": 2000,
"width": 2
}
nx.draw_networkx(G, pos, **options)
ax = plt.gca()
ax.margins(0.20)
plt.axis("off")
plt.show()
return
def Draw_cut_graph(V, E, G, cut_bitstring):
"""
This is to draw the graph after cutting
Args:
V: vertices in the graph
E: edges in the graph
cut_bitstring: bit string indicating whether each vertex belongs to the first group or the second group
Returns:
Null
"""
node_cut = ["blue" if cut_bitstring[v] == "1" else "red" for v in V]
edge_cut = [
"solid" if cut_bitstring[u] == cut_bitstring[v] else "dashed"
for (u, v) in G.edges()
]
pos = nx.circular_layout(G)
options = {
"with_labels": True,
"font_size": 20,
"font_weight": "bold",
"font_color": "white",
"node_size": 2000,
"width": 2
}
nx.draw(
G,
pos,
node_color = node_cut,
style = edge_cut,
**options
)
ax = plt.gca()
ax.margins(0.20)
plt.axis("off")
plt.show()
return
def Generate_default_graph(n):
"""
This is to generate a default ring (cycle) graph if no graph is given as input
Args:
n: number of vertices
Returns:
G: the graph
V: vertices list
E: edges list
"""
G = nx.Graph()
V = range(n)
G.add_nodes_from(V)
E = []
for i in range(n - 1):
E.append((i, i + 1))
E.append((0, n - 1))
G.add_edges_from(E)
return G, V, E
def Generate_H_D(E, n):
"""
This is to construct the Hamiltonian H_D
Args:
E: edges of the graph
n: number of qubits
Returns:
Hamiltonian list (Pauli string form)
Hamiltonian H_D (matrix form)
"""
H_D_list = []
for (u, v) in E:
H_D_list.append([-1.0, 'z' + str(u) + ',z' + str(v)])
print(H_D_list)
H_D_matrix = pauli_str_to_matrix(H_D_list, n)
return H_D_list, H_D_matrix
def Draw_benchmark(summary_iter, summary_loss, H_min):
"""
This is to draw the learning curve and its difference from the benchmark
Args:
summary_iter: list of iteration indices
summary_loss: loss (energy) value at each iteration
H_min: benchmark value H_min
Returns:
NULL
"""
plt.figure(1)
loss_QAOA, = plt.plot(
summary_iter,
summary_loss,
alpha=0.7,
marker='',
linestyle="--",
linewidth=2,
color='m')
benchmark, = plt.plot(
summary_iter,
H_min,
alpha=0.7,
marker='',
linestyle=":",
linewidth=2,
color='b')
plt.xlabel('Number of iterations')
plt.ylabel('Performance of the loss function for QAOA')
plt.legend(
handles=[loss_QAOA, benchmark],
labels=[
r'Loss function $\left\langle {\psi \left( {\bf{\theta }} \right)} '
r'\right|H\left| {\psi \left( {\bf{\theta }} \right)} \right\rangle $',
'The benchmark result',
],
loc='best')
# Show the picture
plt.show()
return
def main():
# number of qubits or number of nodes in the graph
n = 4
G, V, E = Generate_default_graph(n)
Draw_original_graph(G)
if __name__ == "__main__":
main()
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Benchmark
"""
from matplotlib import pyplot
from numpy import diag, max, min, load, ones
from paddle_quantum.utils import pauli_str_to_matrix
from paddle_quantum.QAOA.QAOA_Prefunc import generate_graph, H_generator
def benchmark_QAOA(classical_graph_adjacency=None, N=None):
"""
This function benchmarks the performance of QAOA. Indeed, it compares its approximate solution obtained
from QAOA with predetermined parameters, such as iteration step = 120 and learning rate = 0.1, to the exact solution
to the classical problem.
"""
# Generate the graph and its adjacency matrix from the classical problem, such as the Max-Cut problem
if all(var is None for var in (classical_graph_adjacency, N)):
N = 4
_, classical_graph_adjacency = generate_graph(N, 1)
# Convert the Hamiltonian's list form to matrix form
H_matrix = pauli_str_to_matrix(H_generator(N, classical_graph_adjacency), N)
H_diag = diag(H_matrix).real
# Compute the exact solution of the original problem to benchmark the performance of QAOA
H_max = max(H_diag)
H_min = min(H_diag)
print('H_max:', H_max, ' H_min:', H_min)
# Load the data of QAOA
x1 = load('./output/summary_data.npz')
H_min = ones([len(x1['iter'])]) * H_min
# Plot it
pyplot.figure(1)
loss_QAOA, = pyplot.plot(x1['iter'], x1['energy'],
alpha=0.7, marker='', linestyle="--", linewidth=2, color='m')
benchmark, = pyplot.plot(
x1['iter'],
H_min,
alpha=0.7,
marker='',
linestyle=":",
linewidth=2,
color='b')
pyplot.xlabel('Number of iterations')
pyplot.ylabel('Performance of the loss function for QAOA')
pyplot.legend(
handles=[loss_QAOA, benchmark],
labels=[
r'Loss function $\left\langle {\psi \left( {\bf{\theta }} \right)} '
r'\right|H\left| {\psi \left( {\bf{\theta }} \right)} \right\rangle $',
'The benchmark result',
],
loc='best')
# Show the picture
pyplot.show()
def main():
"""
main
"""
benchmark_QAOA()
if __name__ == '__main__':
main()
......@@ -17,39 +17,60 @@ main
"""
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import paddle
from paddle_quantum.QAOA.QAOA_Prefunc import Generate_H_D, Draw_cut_graph, Draw_original_graph, Generate_default_graph
from paddle_quantum.QAOA.Paddle_QAOA import Paddle_QAOA
from paddle_quantum.QAOA.maxcut import maxcut_hamiltonian, find_cut
from paddle_quantum.utils import pauli_str_to_matrix
SEED = 1024
options = {
"with_labels": True,
"font_size": 20,
"font_weight": "bold",
"font_color": "white",
"node_size": 2000,
"width": 2
}
def main(n = 4):
def main(n=4):
paddle.seed(SEED)
p = 4 # number of layers in the circuit
ITR = 120 #number of iterations
LR = 0.1 #learning rate
p = 4 # number of layers in the circuit
ITR = 120 # number of iterations
LR = 0.1 # learning rate
G, V, E = Generate_default_graph(n)
G.add_nodes_from(V)
G.add_edges_from(E)
Draw_original_graph(G)
G = nx.cycle_graph(4)
V = list(G.nodes())
E = list(G.edges())
# Draw the original graph
pos = nx.circular_layout(G)
nx.draw_networkx(G, pos, **options)
ax = plt.gca()
ax.margins(0.20)
plt.axis("off")
plt.show()
#construct the Hamiltonia
H_D_list, H_D_matrix = Generate_H_D(E, n)
# construct the Hamiltonian
H_D_list = maxcut_hamiltonian(E)
H_D_matrix = pauli_str_to_matrix(H_D_list, n)
H_D_diag = np.diag(H_D_matrix).real
H_max = np.max(H_D_diag)
print(H_D_diag)
print('H_max:', H_max)
cir, _, _ = Paddle_QAOA(n, p, E, V, H_D_list, ITR, LR)
prob_measure = cir.measure(plot=True)
cut_bitstring = max(prob_measure, key=prob_measure.get)
print("找到的割的比特串形式:", cut_bitstring)
cut_bitstring, _ = find_cut(G, p, ITR, LR, print_loss=True, plot=True)
print("The bit string form of the cut found:", cut_bitstring)
node_cut = ["blue" if cut_bitstring[v] == "1" else "red" for v in V]
edge_cut = ["solid" if cut_bitstring[u] == cut_bitstring[v] else "dashed" for (u, v) in G.edges()]
Draw_cut_graph(V, E, G, cut_bitstring)
nx.draw(G, pos, node_color = node_cut, style=edge_cut, **options)
ax = plt.gca()
ax.margins(0.20)
plt.axis("off")
plt.show()
if __name__ == "__main__":
......
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
To learn more about the functions and properties of this application,
you could check the corresponding Jupyter notebook under the Tutorial folder.
"""
import paddle
from paddle_quantum.circuit import UAnsatz
import numpy as np
import networkx as nx
__all__ = [
"maxcut_hamiltonian",
"circuit_maxcut",
"find_cut",
]
def maxcut_hamiltonian(E):
r"""生成最大割问题对应的哈密顿量。
Args:
E (list): 图的边
Returns:
list: 生成的哈密顿量的列表形式
"""
H_D_list = []
for (u, v) in E:
H_D_list.append([-1.0, 'z' + str(u) + ',z' + str(v)])
return H_D_list
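# In Pauli-string form this is H_D = - sum_{(u, v) in E} Z_u Z_v; for example,
# E = [(0, 1), (1, 2)] yields [[-1.0, 'z0,z1'], [-1.0, 'z1,z2']].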
def circuit_maxcut(E, V, p, gamma, beta):
r"""构建用于最大割问题的 QAOA 参数化电路。
Args:
E: 图的边
V: 图的顶点
p: QAOA 电路的层数
gamma: 与最大割问题哈密顿量相关的电路参数
beta: 与混合哈密顿量相关的电路参数
Returns:
UAnsatz: 构建好的 QAOA 电路
"""
# Number of qubits needed
n = len(V)
cir = UAnsatz(n)
cir.superposition_layer()
for layer in range(p):
for (u, v) in E:
cir.cnot([u, v])
cir.rz(gamma[layer], v)
cir.cnot([u, v])
for i in V:
cir.rx(beta[layer], i)
return cir
class _MaxcutNet(paddle.nn.Layer):
"""
It constructs the net for maxcut which combines the QAOA circuit with the classical optimizer that sets rules
to update parameters described by theta introduced in the QAOA circuit.
"""
def __init__(
self,
p,
dtype="float64",
):
super(_MaxcutNet, self).__init__()
self.p = p
self.gamma = self.create_parameter(shape=[self.p],
default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * np.pi),
dtype=dtype, is_bias=False)
self.beta = self.create_parameter(shape=[self.p],
default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * np.pi),
dtype=dtype, is_bias=False)
def forward(self, E, V, H_D_list):
"""
Forward propagation
"""
cir = circuit_maxcut(E, V, self.p, self.gamma, self.beta)
cir.run_state_vector()
loss = -cir.expecval(H_D_list)
return loss, cir
def find_cut(G, p, ITR, LR, print_loss=False, shots=0, plot=False):
r"""运行 QAOA 寻找最大割问题的近似解。
Args:
G (NetworkX graph): 图
p (int): QAOA 电路的层数
ITR (int): 梯度下降优化参数的迭代次数
LR (float): Adam 优化器的学习率
print_loss (bool, optional): 优化过程中是否输出损失函数的值,默认为 ``False``,即不输出
shots (int, optional): QAOA 电路最终输出的量子态的测量次数,默认 0,则返回测量结果的精确概率分布
plot (bool, optional): 是否绘制测量结果图,默认为 ``False`` ,即不绘制
Returns:
tuple: tuple containing:
string: 寻找到的近似解
dict: 所有测量结果和其对应的出现次数
"""
V = list(G.nodes())
# Map nodes' labels to integers from 0 to |V|-1
# node_mapping = {V[i]:i for i in range(len(V))}
# G_mapped = nx.relabel_nodes(G, node_mapping)
G_mapped = nx.convert_node_labels_to_integers(G)
V = list(G_mapped.nodes())
E = list(G_mapped.edges())
n = len(V)
H_D_list = maxcut_hamiltonian(E)
net = _MaxcutNet(p)
opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())
for itr in range(1, ITR + 1):
loss, cir = net(E, V, H_D_list)
loss.backward()
opt.minimize(loss)
opt.clear_grad()
if print_loss and itr % 10 == 0:
print("iter:", itr, " loss:", "%.4f" % loss.numpy())
prob_measure = cir.measure(shots=shots, plot=plot)
cut_bitstring = max(prob_measure, key=prob_measure.get)
return cut_bitstring, prob_measure
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Travelling Salesman Problem (TSP): To learn more about the functions and properties of this application,
you could check the corresponding Jupyter notebook under the Tutorial folder.
"""
from itertools import permutations
import numpy as np
import networkx as nx
import paddle
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import pauli_str_to_matrix
__all__ = [
"tsp_hamiltonian",
"solve_tsp",
"solve_tsp_brute_force"
]
def tsp_hamiltonian(g, A, n):
"""
This is to construct the Hamiltonian H_C
Args:
g: the graph to solve
A: the penalty parameter
n: number of vertices in the graph
Returns:
Hamiltonian H_C in list (Pauli string) form
"""
H_C_list1 = []
for i in range(n - 1):
for j in range(n - 1):
if i != j:
w_ij = g[i][j]['weight']
for t in range(n - 2):
H_C_list1.append([w_ij / 4, 'i1'])
H_C_list1.append([w_ij / 4, 'z' + str(i * (n - 1) + t) + ',z' + str(j * (n - 1) + t + 1)])
H_C_list1.append([-w_ij / 4, 'z' + str(i * (n - 1) + t)])
H_C_list1.append([-w_ij / 4, 'z' + str(j * (n - 1) + t + 1)])
H_C_list1.append([g[n - 1][i]['weight'] / 2, 'i1'])
H_C_list1.append([-g[n - 1][i]['weight'] / 2, 'z' + str(i * (n - 1) + (n - 2))])
H_C_list1.append([g[i][n - 1]['weight'] / 2, 'i1'])
H_C_list1.append([-g[i][n - 1]['weight'] / 2, 'z' + str(i * (n - 1))])
H_C_list2 = []
for i in range(n - 1):
H_C_list2.append([1, 'i1'])
for t in range(n-1):
H_C_list2.append([-2 * 1/2, 'i1'])
H_C_list2.append([2 * 1/2, 'z' + str(i * (n - 1) + t)])
H_C_list2.append([2/4, 'i1'])
H_C_list2.append([-2/4, 'z' + str(i * (n - 1) + t)])
for tt in range(t):
H_C_list2.append([2/4, 'i1'])
H_C_list2.append([2/4, 'z' + str(i * (n - 1) + t) + ',z' + str(i * (n - 1) + tt)])
H_C_list2.append([-2/4, 'z' + str(i * (n - 1) + t)])
H_C_list2.append([-2/4, 'z' + str(i * (n - 1) + tt)])
H_C_list2 = [[A * c, s] for (c, s) in H_C_list2]
H_C_list3 = []
for t in range(n - 1):
H_C_list3.append([1, 'i1'])
for i in range(n-1):
H_C_list3.append([-2 * 1/2, 'i1'])
H_C_list3.append([2 * 1/2, 'z' + str(i * (n - 1) + t)])
H_C_list3.append([2/4, 'i1'])
H_C_list3.append([-2/4, 'z' + str(i * (n - 1) + t)])
for ii in range(i):
H_C_list3.append([2/4, 'i1'])
H_C_list3.append([2/4, 'z' + str(i * (n - 1) + t) + ',z' + str(ii * (n - 1) + t)])
H_C_list3.append([-2/4,'z'+str(i * (n - 1) + t)])
H_C_list3.append([-2/4,'z'+str(ii * (n - 1) + t)])
H_C_list3 = [[A * c, s] for (c, s) in H_C_list3]
H_C_list = H_C_list1 + H_C_list2 + H_C_list3
return H_C_list
class _TSPNet(paddle.nn.Layer):
"""
It constructs the net for TSP which combines the complex entangled circuit with the classical optimizer that sets rules
to update parameters described by theta introduced in the circuit.
"""
def __init__(self, n, p, dtype="float64"):
super(_TSPNet, self).__init__()
self.p = p
self.num_qubits = (n - 1) ** 2
self.theta = self.create_parameter(shape=[self.p, self.num_qubits, 3],
default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * np.pi),
dtype=dtype, is_bias=False)
def forward(self, H_C_ls):
"""
Forward propagation
"""
cir = UAnsatz(self.num_qubits)
cir.complex_entangled_layer(self.theta, self.p)
cir.run_state_vector()
loss = cir.expecval(H_C_ls)
return loss, cir
def solve_tsp(g, A, p=2, ITR=120, LR=0.4, print_loss=False, shots=0):
"""
This is the core function to solve the TSP.
Args:
g: the graph to solve
A: the penalty parameter
p: number of layers of blocks in the complex entangled circuit (default value p=2)
ITR: number of iteration steps for the complex entangled circuit (default value ITR=120)
LR: learning rate for the gradient-based optimization method (default value LR=0.4)
Returns:
string representation for the optimized walk for the salesman
"""
e = list(g.edges(data = True))
v = list(g.nodes)
n = len(v)
H_C_list = tsp_hamiltonian(g, A, n)
net = _TSPNet(n, p)
opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())
for itr in range(1, ITR + 1):
loss, cir = net(H_C_list)
loss.backward()
opt.minimize(loss)
opt.clear_grad()
if print_loss and itr % 10 == 0:
print("iter:", itr, "loss:", "%.4f"% loss.numpy())
prob_measure = cir.measure(shots=shots)
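# The (n-1)^2 measured bits form a one-hot assignment x[i][t] (vertex i visited
# at time step t) for vertices 0..n-2 and steps 0..n-2, with vertex n-1 fixed to
# the last time step. The lines below rebuild the full n x n walk string by
# appending a '0' (time step n-1) to each vertex's row and adding the row
# '0' * (n - 1) + '1' for vertex n-1.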
reduced_salesman_walk = max(prob_measure, key=prob_measure.get)
str_by_vertex = [reduced_salesman_walk[i:i + n - 1] for i in range(0, len(reduced_salesman_walk) + 1, n - 1)]
salesman_walk = '0'.join(str_by_vertex) + '0' * (n - 1) + '1'
return salesman_walk
def solve_tsp_brute_force(g):
"""
This is the brute-force algorithm to solve the TSP.
Args:
g: the graph to solve
Returns:
the list of the optimized walk in the visiting order and the optimal distance
"""
n = len(g.nodes)
all_routes = list(permutations(range(n)))
best_distance = 1e10
best_route = (0, 0, 0, 0)
for route in all_routes:
distance = 0
for i in range(n):
u = route[i]
v = route[(i + 1) % n]
distance += g[u][v]['weight']
if distance < best_distance:
best_distance = distance
best_route = route
return list(best_route), best_distance
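# Illustrative usage sketch (the weights are hypothetical): compare the
# QAOA-based solver against brute force on a small weighted complete graph.
#   g = nx.complete_graph(3)
#   for (u, v) in g.edges():
#       g[u][v]['weight'] = u + v + 1
#   qaoa_walk = solve_tsp(g, A=10, p=2, ITR=120, LR=0.4)
#   best_route, best_distance = solve_tsp_brute_force(g)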
......@@ -17,8 +17,6 @@ main
"""
import numpy
import paddle
from paddle_quantum.SSVQE.HGenerator import H_generator
from paddle_quantum.SSVQE.Paddle_SSVQE import Paddle_SSVQE
......
......@@ -18,7 +18,6 @@ benchmark the result
import platform
import matplotlib.pyplot as plt
import numpy
from paddle_quantum.utils import pauli_str_to_matrix
......
......@@ -28,7 +28,6 @@ __all__ = [
]
# todo
def Hamiltonian_str_convert(qubit_op):
'''
Convert provided Hamiltonian information to Pauli string
......
......@@ -17,8 +17,6 @@ main
"""
import platform
import paddle
from paddle_quantum.VQE.Paddle_VQE import Paddle_VQE
from paddle_quantum.VQE.benchmark import benchmark_result
from paddle_quantum.VQE.chemistrysub import H2_generator
......
......@@ -16,7 +16,7 @@
HGenerator
"""
from numpy import diag
import numpy
import scipy
import scipy.stats
......@@ -26,13 +26,13 @@ __all__ = ["generate_rho_sigma", ]
def generate_rho_sigma():
scipy.random.seed(SEED)
V = scipy.stats.unitary_group.rvs(4) # Generate a random unitary martrix
D = diag([0.5, 0.3, 0.1, 0.1]) # Input the spectrum of the target state rho
numpy.random.seed(SEED)
V = scipy.stats.unitary_group.rvs(4) # Generate a random unitary matrix
D = numpy.diag([0.5, 0.3, 0.1, 0.1]) # Input the spectrum of the target state rho
V_H = V.conj().T
rho = V @ D @ V_H # Generate rho
# print(rho) # Print quantum state rho
# Input the quantum state sigma
sigma = diag([0.1, 0.2, 0.3, 0.4]).astype('complex128')
sigma = numpy.diag([0.1, 0.2, 0.3, 0.4]).astype('complex128')
return rho, sigma
......@@ -17,8 +17,6 @@ Main
"""
import numpy
import paddle
from paddle_quantum.VQSD.HGenerator import generate_rho_sigma
from paddle_quantum.VQSD.Paddle_VQSD import Paddle_VQSD
......
......@@ -17,4 +17,4 @@ Paddle Quantum Library
"""
name = "paddle_quantum"
__version__ = "2.0.1"
__version__ = "2.1.0"
This diff has been collapsed.
......@@ -121,14 +121,19 @@ def vec_expecval(H, vec):
def transfer_by_history(state, history):
r"""
It transforms the input state according to the history give.
It transforms the input state according to the history given.
Note:
This is an internal function; you do not need to call it directly.
"""
for history_ele in history:
if history_ele[0] != 'channel':
state = StateTransfer(state, history_ele[0], history_ele[1], params=history_ele[2])
if history_ele[0] in {'s', 't', 'ry', 'rz', 'rx'}:
state = StateTransfer(state, 'u', history_ele[1], params=history_ele[2])
elif history_ele[0] == 'MS_gate':
state = StateTransfer(state, 'RXX_gate', history_ele[1], params=history_ele[2])
else:
state = StateTransfer(state, history_ele[0], history_ele[1], params=history_ele[2])
return state
......
......@@ -18,10 +18,8 @@ import paddle
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import NKron, partial_trace, dagger
from paddle import matmul, trace, divide, kron, add, multiply
from paddle import sin, cos, concat, zeros, ones, real
from paddle_quantum.state import isotropic_state, bell_state
from paddle import sin, cos, real
from math import log2, sqrt
from numpy import pi as PI
class LoccStatus(object):
......@@ -30,8 +28,8 @@ class LoccStatus(object):
由于我们在 LOCC 中不仅关心量子态的解析形式,同时还关心得到它的概率,以及是经过怎样的测量而得到。因此该类包含三个成员变量:量子态 ``state`` 、得到这个态的概率 ``prob`` 和得到这个态的测量的测量结果是什么,即 ``measured_result`` 。
Attributes:
state (numpy.ndarray): 表示量子态的矩阵形式
prob (numpy.ndarray): 表示得到这个量子态的概率
state (paddle.Tensor): 表示量子态的矩阵形式
prob (paddle.Tensor): 表示得到这个量子态的概率
measured_result (str): 表示得到这个态的测量函数的测量结果
"""
......@@ -39,8 +37,8 @@ class LoccStatus(object):
r"""构造函数,用于实例化一个 ``LoccStatus`` 对象。
Args:
state (numpy.ndarray): 默认为 ``None`` ,该 ``LoccStatus`` 的量子态的矩阵形式
prob (numpy.ndarray): 默认为 ``None`` ,得到该量子态的概率
state (paddle.Tensor): 默认为 ``None`` ,该 ``LoccStatus`` 的量子态的矩阵形式
prob (paddle.Tensor): 默认为 ``None`` ,得到该量子态的概率
measured_result (str): 默认为 ``None`` ,表示得到这个态的测量函数的测量结果
"""
super(LoccStatus, self).__init__()
......@@ -499,7 +497,9 @@ class LoccAnsatz(UAnsatz):
def __add_complex_layer(self, theta, position):
r"""
Add a complex layer on the circuit. theta is a two dimensional tensor. position is the qubit range the layer needs to cover
Add a complex layer on the circuit.
Theta is a two dimensional tensor.
Position is the qubit range the layer needs to cover
Note:
This is an internal function; you do not need to call it directly.
......@@ -658,6 +658,7 @@ class LoccAnsatz(UAnsatz):
which_qubit = self.party[which_qubit]
super(LoccAnsatz, self).customized_channel(ops, which_qubit)
class LoccNet(paddle.nn.Layer):
r"""用于设计我们的 LOCC 下的 protocol,并进行验证或者训练。
"""
......@@ -734,18 +735,24 @@ class LoccNet(paddle.nn.Layer):
raise ValueError("can't recognize the input status")
assert max(qubits_list) < n, "qubit index out of range"
qubit2idx = list(range(0, n))
idx2qubit = list(range(0, n))
origin_seq = list(range(0, n))
target_seq = [idx for idx in origin_seq if idx not in qubits_list]
target_seq = qubits_list + target_seq
swaped = [False] * n
swap_list = []
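# The loop below decomposes the permutation taking origin_seq to target_seq into
# its cycles and records one SWAP per step along each cycle, so that the qubits
# in qubits_list end up at the front of the register.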
for idx in range(0, n):
if not swaped[idx]:
next_idx = idx
swaped[next_idx] = True
while not swaped[target_seq[next_idx]]:
swaped[target_seq[next_idx]] = True
swap_list.append((next_idx, target_seq[next_idx]))
next_idx = target_seq[next_idx]
cir = UAnsatz(n)
for i, ele in enumerate(qubits_list):
if qubit2idx[ele] != i:
# if qubit2idx[ele] is i, then swap([ele, i]) is identity
cir.swap([i, qubit2idx[ele]])
qubit2idx[idx2qubit[i]] = qubit2idx[ele]
idx2qubit[qubit2idx[ele]] = idx2qubit[i]
idx2qubit[i] = ele
qubit2idx[ele] = i
for a, b in swap_list:
cir.swap([a, b])
if isinstance(status, LoccStatus):
state = cir.run_density_matrix(status.state)
......@@ -798,25 +805,29 @@ class LoccNet(paddle.nn.Layer):
raise ValueError("can't recognize the input status")
assert max(qubits_list) < n, "qubit index out of range"
qubit2idx = list(range(0, n))
idx2qubit = list(range(0, n))
swap_history = list()
origin_seq = list(range(0, n))
target_seq = [idx for idx in origin_seq if idx not in qubits_list]
target_seq = qubits_list + target_seq
swaped = [False] * n
swap_list = []
for idx in range(0, n):
if not swaped[idx]:
next_idx = idx
swaped[next_idx] = True
while not swaped[target_seq[next_idx]]:
swaped[target_seq[next_idx]] = True
swap_list.append((next_idx, target_seq[next_idx]))
next_idx = target_seq[next_idx]
cir0 = UAnsatz(n)
for i, ele in enumerate(qubits_list):
if qubit2idx[ele] != i: # if qubit2idx[ele] is i, then swap([ele, i]) is identity
swap_history.append((i, qubit2idx[ele]))
cir0.swap([i, qubit2idx[ele]])
qubit2idx[idx2qubit[i]] = qubit2idx[ele]
idx2qubit[qubit2idx[ele]] = idx2qubit[i]
idx2qubit[i] = ele
qubit2idx[ele] = i
for a, b in swap_list:
cir0.swap([a, b])
cir1 = UAnsatz(n)
swap_cnt = len(swap_history)
for idx in range(0, swap_cnt):
a, b = swap_history[swap_cnt - 1 - idx]
cir1.swap([b, a])
swap_list.reverse()
for a, b in swap_list:
cir1.swap([a, b])
if isinstance(status, LoccStatus):
_state = cir0.run_density_matrix(status.state)
......
......@@ -200,7 +200,7 @@ def cx_gate_matrix():
def swap_gate_matrix():
"""
Control Not
Swap gate
:return:
"""
return np.array([[1, 0, 0, 0],
......@@ -209,6 +209,71 @@ def swap_gate_matrix():
[0, 0, 0, 1]], dtype=complex).reshape(2, 2, 2, 2)
def rxx_gate_matrix(params):
"""
RXX gate
:return:
"""
theta = params
re_a = paddle.cos(theta / 2)
re_b = paddle.zeros([1], 'float64')
im_a = paddle.sin(theta / 2)
im_b = paddle.zeros([1], 'float64')
re = paddle.reshape(paddle.concat([re_a, re_b, re_b, re_b,
re_b, re_a, re_b, re_b,
re_b, re_b, re_a, re_b,
re_b, re_b, re_b, re_a]), [4, 4])
im = paddle.reshape(paddle.concat([im_b, im_b, im_b, im_a,
im_b, im_b, im_a, im_b,
im_b, im_a, im_b, im_b,
im_a, im_b, im_b, im_b]), [4, 4])
return re - im * paddle.to_tensor([1j], 'complex128')
def ryy_gate_matrix(params):
"""
RYY gate
:return:
"""
theta = params
re_a = paddle.cos(theta / 2)
re_b = paddle.zeros([1], 'float64')
im_a = paddle.sin(theta / 2)
im_b = paddle.zeros([1], 'float64')
re = paddle.reshape(paddle.concat([re_a, re_b, re_b, re_b,
re_b, re_a, re_b, re_b,
re_b, re_b, re_a, re_b,
re_b, re_b, re_b, re_a]), [4, 4])
im = paddle.reshape(paddle.concat([im_b, im_b, im_b, im_a,
im_b, im_b, -im_a, im_b,
im_b, -im_a, im_b, im_b,
im_a, im_b, im_b, im_b]), [4, 4])
return re + im * paddle.to_tensor([1j], 'complex128')
def rzz_gate_matrix(params):
"""
RZZ gate
:param params: rotation angle theta, as a Paddle tensor
:return: the 4 x 4 complex matrix of RZZ(theta)
"""
theta = params
re_a = paddle.cos(theta / 2)
re_b = paddle.zeros([1], 'float64')
im_a = paddle.sin(theta / 2)
im_b = paddle.zeros([1], 'float64')
re = paddle.reshape(paddle.concat([re_a, re_b, re_b, re_b,
re_b, re_a, re_b, re_b,
re_b, re_b, re_a, re_b,
re_b, re_b, re_b, re_a]), [4, 4])
im = paddle.reshape(paddle.concat([-im_a, im_b, im_b, im_b,
im_b, im_a, im_b, im_b,
im_b, im_b, im_a, im_b,
im_b, im_b, im_b, -im_a]), [4, 4])
return re + im * paddle.to_tensor([1j], 'complex128')
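As a sanity check on the three matrices above, the short NumPy sketch below (an illustration only; the library builds the same matrices from Paddle tensors) compares them against the textbook forms RXX(θ) = cos(θ/2) I - i sin(θ/2) X⊗X, and likewise for RYY and RZZ.

```python
import numpy as np

# Textbook two-qubit rotations, for comparison with the Paddle implementations above.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I4 = np.eye(4, dtype=complex)

def rxx(theta): return np.cos(theta / 2) * I4 - 1j * np.sin(theta / 2) * np.kron(X, X)
def ryy(theta): return np.cos(theta / 2) * I4 - 1j * np.sin(theta / 2) * np.kron(Y, Y)
def rzz(theta): return np.cos(theta / 2) * I4 - 1j * np.sin(theta / 2) * np.kron(Z, Z)

theta = 0.37
# RZZ is diagonal: diag(e^{-i theta/2}, e^{i theta/2}, e^{i theta/2}, e^{-i theta/2}).
assert np.allclose(np.diag(rzz(theta)),
                   np.exp(1j * theta / 2 * np.array([-1, 1, 1, -1])))
# All three reduce to the identity at theta = 0.
assert np.allclose(rxx(0), I4) and np.allclose(ryy(0), I4)
```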
# PaddleE
def normalize_axis(axis, ndim):
if axis < 0:
......@@ -410,6 +475,15 @@ def StateTransfer(state, gate_name, bits, params=None):
elif gate_name == 'u':
# print('----------', gate_name, bits, '----------')
gate_matrix = u_gate_matrix(params)
elif gate_name == 'RXX_gate':
# print('----------', gate_name, bits, '----------')
gate_matrix = rxx_gate_matrix(params)
elif gate_name == 'RYY_gate':
# print('----------', gate_name, bits, '----------')
gate_matrix = ryy_gate_matrix(params)
elif gate_name == 'RZZ_gate':
# print('----------', gate_name, bits, '----------')
gate_matrix = rzz_gate_matrix(params)
else:
raise Exception("Gate name error")
......
......@@ -13,31 +13,17 @@
# limitations under the License.
from functools import reduce
from math import log2
import numpy as np
from numpy import absolute, log
from numpy import diag, dot, identity
from numpy import kron as np_kron
from numpy import trace as np_trace
from numpy import matmul as np_matmul
from numpy import random as np_random
from numpy import linalg, sqrt
from numpy import sum as np_sum
from numpy import transpose as np_transpose
from numpy import zeros as np_zeros
import paddle
from paddle import add, to_tensor
from paddle import kron as pp_kron
from paddle import kron as kron
from paddle import matmul
from paddle import transpose as pp_transpose
from paddle import concat, cos, ones, reshape, sin
from paddle import zeros as pp_zeros
from paddle import transpose
from paddle import concat, ones
from paddle import zeros
from scipy.linalg import logm, sqrtm
import paddle
__all__ = [
"partial_trace",
"state_fidelity",
......@@ -73,32 +59,76 @@ def partial_trace(rho_AB, dim1, dim2, A_or_B):
if A_or_B == 2:
dim1, dim2 = dim2, dim1
idty_np = identity(dim2).astype("complex128")
idty_np = np.identity(dim2).astype("complex128")
idty_B = to_tensor(idty_np)
zero_np = np_zeros([dim2, dim2], "complex128")
zero_np = np.zeros([dim2, dim2], "complex128")
res = to_tensor(zero_np)
for dim_j in range(dim1):
row_top = pp_zeros([1, dim_j], dtype="float64")
row_top = zeros([1, dim_j], dtype="float64")
row_mid = ones([1, 1], dtype="float64")
row_bot = pp_zeros([1, dim1 - dim_j - 1], dtype="float64")
row_bot = zeros([1, dim1 - dim_j - 1], dtype="float64")
bra_j = concat([row_top, row_mid, row_bot], axis=1)
bra_j = paddle.cast(bra_j, 'complex128')
if A_or_B == 1:
row_tmp = pp_kron(bra_j, idty_B)
row_tmp = kron(bra_j, idty_B)
row_tmp_conj = paddle.conj(row_tmp)
res = add(res, matmul(matmul(row_tmp, rho_AB), pp_transpose(row_tmp_conj, perm=[1, 0]), ), )
res = add(res, matmul(matmul(row_tmp, rho_AB), transpose(row_tmp_conj, perm=[1, 0]), ), )
if A_or_B == 2:
row_tmp = pp_kron(idty_B, bra_j)
row_tmp = kron(idty_B, bra_j)
row_tmp_conj = paddle.conj(row_tmp)
res = add(res, matmul(matmul(row_tmp, rho_AB), pp_transpose(row_tmp_conj, perm=[1, 0]), ), )
res = add(res, matmul(matmul(row_tmp, rho_AB), transpose(row_tmp_conj, perm=[1, 0]), ), )
return res
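A minimal usage sketch of `partial_trace`, assuming the call pattern shown in its signature above. With `A_or_B=1` the loop projects the first, `dim1`-dimensional subsystem with <j|, so the returned matrix is the reduced state of the second subsystem; for the Bell state |Φ+> either reduced state is the maximally mixed state I/2.

```python
import numpy as np
import paddle

# Density matrix of the Bell state |Φ+> = (|00> + |11>) / sqrt(2).
phi_plus = np.zeros((4, 4), dtype="complex128")
phi_plus[0, 0] = phi_plus[0, 3] = phi_plus[3, 0] = phi_plus[3, 3] = 0.5
rho_AB = paddle.to_tensor(phi_plus)

rho_B = partial_trace(rho_AB, 2, 2, 1)   # expected: I/2
print(rho_B.numpy())
```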
def partial_trace_discontiguous(rho, preserve_qubits=None):
r"""计算量子态的偏迹,可选取任意子系统。
Args:
rho (Tensor): 输入的量子态
preserve_qubits (list): 要保留的量子比特,默认为 None,表示全保留
"""
if preserve_qubits is None:
return rho
else:
n = int(log2(rho.size) // 2)
num_preserve = len(preserve_qubits)
shape = paddle.ones((n + 1,))
shape = 2 * shape
shape[n] = 2**n
shape = paddle.cast(shape, "int32")
identity = paddle.eye(2 ** n)
identity = paddle.reshape(identity, shape=shape)
discard = list()
for idx in range(0, n):
if idx not in preserve_qubits:
discard.append(idx)
addition = [n]
preserve_qubits.sort()
preserve_qubits = paddle.to_tensor(preserve_qubits)
discard = paddle.to_tensor(discard)
addition = paddle.to_tensor(addition)
permute = paddle.concat([discard, preserve_qubits, addition])
identity = paddle.transpose(identity, perm=permute)
identity = paddle.reshape(identity, (2**n, 2**n))
result = np.zeros((2 ** num_preserve, 2 ** num_preserve), dtype="complex64")
result = paddle.to_tensor(result)
for i in range(0, 2 ** num_preserve):
bra = identity[i * 2 ** num_preserve:(i + 1) * 2 ** num_preserve, :]
result = result + matmul(matmul(bra, rho), transpose(bra, perm=[1, 0]))
return result
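As an independent cross-check of the new `partial_trace_discontiguous`, the NumPy-only sketch below (illustrative, not library code) computes the same kind of reduced state, keeping an arbitrary subset of qubits of an n-qubit density matrix.

```python
import numpy as np

def partial_trace_keep(rho, keep, n):
    """Trace out every qubit not listed in `keep` from an n-qubit density matrix."""
    labels = list(range(n))
    t = rho.reshape([2] * (2 * n))   # axes: n row indices, then n column indices
    for q in [q for q in range(n) if q not in keep]:
        p = labels.index(q)
        t = np.trace(t, axis1=p, axis2=len(labels) + p)
        labels.pop(p)
    k = len(labels)
    return t.reshape(2 ** k, 2 ** k)

# Keeping qubit 0 of a Bell pair leaves the maximally mixed state I/2.
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(partial_trace_keep(bell, keep=[0], n=2))
```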
def state_fidelity(rho, sigma):
r"""计算两个量子态的保真度。
......@@ -136,7 +166,7 @@ def gate_fidelity(U, V):
"""
assert U.shape == V.shape, 'The shape of two unitary matrices are different'
dim = U.shape[0]
fidelity = absolute(np_trace(np_matmul(U, V.conj().T)))/dim
fidelity = np.absolute(np.trace(np.matmul(U, V.conj().T)))/dim
return fidelity
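A worked instance of the formula above, F(U, V) = |Tr(U V†)| / dim (illustrative NumPy check; Rz is chosen only as a convenient example).

```python
import numpy as np

# For U = I and V = Rz(theta), Tr(U V†) = e^{i theta/2} + e^{-i theta/2} = 2 cos(theta/2),
# so the gate fidelity is |cos(theta/2)|.
theta = 1.2
U = np.eye(2, dtype=complex)
V = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
fid = np.absolute(np.trace(U @ V.conj().T)) / 2
assert np.isclose(fid, abs(np.cos(theta / 2)))
```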
......@@ -154,7 +184,7 @@ def purity(rho):
Returns:
float: the purity of the input quantum state
"""
gamma = np_trace(np_matmul(rho, rho))
gamma = np.trace(np.matmul(rho, rho))
return gamma.real
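A quick numerical check of the definition Tr(ρ²): it equals 1 for a pure state and 1/d for the maximally mixed state on a d-dimensional system (illustrative, NumPy only).

```python
import numpy as np

pure = np.zeros((2, 2), dtype=complex)
pure[0, 0] = 1                            # |0><0|
mixed = np.eye(2, dtype=complex) / 2      # I/2
assert np.isclose(np.trace(pure @ pure).real, 1.0)
assert np.isclose(np.trace(mixed @ mixed).real, 0.5)
```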
......@@ -172,8 +202,8 @@ def von_neumann_entropy(rho):
Returns:
float: the von Neumann entropy of the input quantum state
"""
rho_eigenvalue, _ = linalg.eig(rho)
entropy = -np_sum(rho_eigenvalue*log(rho_eigenvalue))
rho_eigenvalue, _ = np.linalg.eig(rho)
entropy = -np.sum(rho_eigenvalue*np.log(rho_eigenvalue))
return entropy.real
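Note that the implementation above uses the natural logarithm, so the entropy is measured in nats: S(I/2) = ln 2 ≈ 0.693 rather than 1 bit. A small check (illustrative, NumPy only):

```python
import numpy as np

rho = np.eye(2, dtype=complex) / 2
eigenvalues, _ = np.linalg.eig(rho)
entropy = -np.sum(eigenvalues * np.log(eigenvalues)).real
assert np.isclose(entropy, np.log(2))
```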
......@@ -219,7 +249,7 @@ def NKron(matrix_A, matrix_B, *args):
``result`` should be :math:`A \otimes B \otimes C`
"""
return reduce(lambda result, index: np_kron(result, index), args, np_kron(matrix_A, matrix_B), )
return reduce(lambda result, index: np.kron(result, index), args, np.kron(matrix_A, matrix_B), )
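NKron chains numpy.kron over all of its arguments, which is convenient for building Pauli strings such as Z ⊗ Z ⊗ Z (illustrative usage of the function defined above):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
ZZZ = NKron(Z, Z, Z)   # A ⊗ B ⊗ C with A = B = C = Z
assert ZZZ.shape == (8, 8)
assert np.allclose(np.diag(ZZZ), [1, -1, -1, 1, -1, 1, 1, -1])
```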
def dagger(matrix):
......@@ -246,7 +276,7 @@ def dagger(matrix):
[2.-2.j 4.-4.j]]
"""
matrix_conj = paddle.conj(matrix)
matrix_dagger = pp_transpose(matrix_conj, perm=[1, 0])
matrix_dagger = transpose(matrix_conj, perm=[1, 0])
return matrix_dagger
......@@ -266,11 +296,11 @@ def random_pauli_str_generator(n, terms=3):
list: the randomly generated observable, in list form
"""
pauli_str = []
for sublen in np_random.randint(1, high=n+1, size=terms):
for sublen in np.random.randint(1, high=n+1, size=terms):
# Tips: -1 <= coeff < 1
coeff = np_random.rand()*2-1
ops = np_random.choice(['x', 'y', 'z'], size=sublen)
pos = np_random.choice(range(n), size=sublen, replace=False)
coeff = np.random.rand()*2-1
ops = np.random.choice(['x', 'y', 'z'], size=sublen)
pos = np.random.choice(range(n), size=sublen, replace=False)
op_list = [ops[i]+str(pos[i]) for i in range(sublen)]
pauli_str.append([coeff, ','.join(op_list)])
return pauli_str
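The returned pauli_str is a list of [coefficient, 'op+index,...'] pairs, with coefficients drawn from [-1, 1) and distinct qubit positions within each term. A usage sketch (output is random; the seed is only for reproducibility of the illustration):

```python
import numpy as np

np.random.seed(0)
pauli_str = random_pauli_str_generator(4, terms=2)
print(pauli_str)
# Each entry looks like [coeff, 'x0,z2'] with -1 <= coeff < 1.
```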
......
......@@ -23,7 +23,7 @@ with open("README.md", "r", encoding="utf-8") as fh:
setuptools.setup(
name='paddle-quantum',
version='2.0.1',
version='2.1.0',
author='Institute for Quantum Computing, Baidu INC.',
author_email='quantum@baidu.com',
description='Paddle Quantum is a quantum machine learning (QML) toolkit developed based on Baidu PaddlePaddle.',
......@@ -34,7 +34,7 @@ setuptools.setup(
'paddle_quantum.VQE', 'paddle_quantum.VQSD', 'paddle_quantum.GIBBS.example',
'paddle_quantum.QAOA.example', 'paddle_quantum.SSVQE.example', 'paddle_quantum.VQE.example',
'paddle_quantum.VQSD.example'],
install_requires=['paddlepaddle>=2.0.1', 'scipy', 'networkx', 'matplotlib', 'interval', 'tqdm'],
install_requires=['paddlepaddle>=2.0.1', 'scipy', 'networkx>=2.5', 'matplotlib', 'interval', 'tqdm'],
python_requires='>=3.6, <4',
classifiers=[
'Programming Language :: Python :: 3',
......
......@@ -91,7 +91,7 @@
"source": [
"### Entanglement quantification\n",
"\n",
"After having a taste of quantum entanglement qualitatively, we want to promote our understanding to a quantitive level. One should realize the validity of aforementioned quantum communication protocols, including quantum teleportation and superdense coding, depends on **the quality of the entanglement quantification**. Following the convention in quantum information community, the **Negativity** $\\mathcal{N}(\\rho)$ and the **Logarithmic Negativity** $E_{N}(\\rho)$ are widely recognized metrics to quantify the amount of entanglement presented in a bi-partite system [6]. Specifically,\n",
"After having a taste of quantum entanglement qualitatively, we want to promote our understanding to a quantitive level. One should realize the validity of aforementioned quantum communication protocols, including quantum teleportation and superdense coding, depends on **the quality of the entanglement quantification**. Following the convention in quantum information community, the **negativity** $\\mathcal{N}(\\rho)$ and the **logarithmic negativity** $E_{N}(\\rho)$ are widely recognized metrics to quantify the amount of entanglement presented in a bi-partite system [6]. Specifically,\n",
"\n",
"$$\n",
"\\mathcal{N}(\\rho) \\equiv \\frac{||\\rho^{T_B}||_1-1}{2},\n",
......@@ -240,7 +240,7 @@
"source": [
"## BBPSSW protocol\n",
"\n",
"The BBPSSW protocol distills two identical quantum states $\\rho_{in}$ (2-copies) into a single final state $\\rho_{out}$ with a higher state fidelity $F$ (closer to the Bell state $|\\Phi^+\\rangle$). It is worth noting that BBPSSW was mainly designed to purify **Isotropic states** (also known as Werner state), a parametrized family of mixed states consist of $|\\Phi^+\\rangle$ and the completely mixed state (white noise) $I/4$.\n",
"The BBPSSW protocol distills two identical quantum states $\\rho_{in}$ (2-copies) into a single final state $\\rho_{out}$ with a higher state fidelity $F$ (closer to the Bell state $|\\Phi^+\\rangle$). It is worth noting that BBPSSW was mainly designed to purify **isotropic states** (also known as Werner state), a parametrized family of mixed states consist of $|\\Phi^+\\rangle$ and the completely mixed state (white noise) $I/4$.\n",
"\n",
"$$\n",
"\\rho_{\\text{iso}}(p) = p\\lvert\\Phi^+\\rangle \\langle\\Phi^+\\rvert + (1-p)\\frac{I}{4}, \\quad p \\in [0,1]\n",
......@@ -568,7 +568,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
"version": "3.7.10"
},
"toc": {
"base_numbering": 1,
......
......@@ -15,7 +15,7 @@
"source": [
"## 概述\n",
"\n",
"量子隐形传态(Quantum Teleportation)是可以通过本地操作和经典通信(LOCC)协议完成的另一项重要任务,该协议借助提前制备好的纠缠资源在两个空间上分离的通信节点(仅允许经典信道)之间传输量子信息。在本教程中,我们将首先简要回顾一下量子隐形传态协议,并使用量桨进行模拟。然后,我们将介绍如何使用 LOCCNet 学习出一个量子隐形传态协议。"
"量子隐形传态(quantum teleportation)是可以通过本地操作和经典通信(LOCC)协议完成的另一项重要任务,该协议借助提前制备好的纠缠资源在两个空间上分离的通信节点(仅允许经典信道)之间传输量子信息。在本教程中,我们将首先简要回顾一下量子隐形传态协议,并使用量桨进行模拟。然后,我们将介绍如何使用 LOCCNet 学习出一个量子隐形传态协议。"
]
},
{
......@@ -631,7 +631,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
"version": "3.7.0"
},
"toc": {
"base_numbering": 1,
......
......@@ -304,7 +304,6 @@
"M = random_M_generator()\n",
"M_err = np.copy(M)\n",
"\n",
"\n",
"# 打印结果\n",
"print('我们想要分解的矩阵 M 是:')\n",
"print(M)\n",
......@@ -351,9 +350,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### 量子神经网络的构造\n",
"\n",
"我们搭建如下的结构:"
"我们搭建如下的量子神经网络结构:"
]
},
{
......@@ -920,7 +917,7 @@
"source": [
"_______\n",
"\n",
"## 参考文献\n",
"## 参考文献\n",
"\n",
"[1] Wang, X., Song, Z. & Wang, Y. Variational Quantum Singular Value Decomposition. [arXiv:2006.02336 (2020).](https://arxiv.org/abs/2006.02336)"
]
......