Unverified commit a195f887, authored by sys1874, committed by GitHub

Update README.MD

Parent 9913672a
@@ -2,6 +2,15 @@
This experiment is based on the Stanford OGB (1.2.1) benchmark. The description of 《Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification》 is [available here](https://arxiv.org/pdf/2009.03509.pdf). The steps are:
### Note!
We propose **UniMP_large**, which widens our base model by increasing ```head_num``` and deepens it by incorporating [APPNP](https://www.in.tum.de/daml/ppnp/). Moreover, we are the first to propose an **Attention-based APPNP** to further improve the model's performance (a toy sketch of the propagation idea follows the to-do list below).
To-do list:
- [x] UniMP_large in Arxiv
- [ ] UniMP_large in Products
- [ ] UniMP_large in Proteins
- [ ] UniMP_xxlarge
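The attention-based APPNP mentioned above replaces APPNP's fixed normalized adjacency with attention weights computed at each propagation hop, while keeping APPNP's residual connection to the hop-0 features. The following is a minimal NumPy sketch of that idea on a tiny dense graph; the dot-product scoring, dense score matrix, and added self-loops are simplifying assumptions for illustration, not the layer implemented in this repository.

```python
import numpy as np

def attn_appnp(h0, edges, num_hops=10, alpha=0.1):
    """Toy attention-based APPNP propagation (illustrative only).

    h0:    [n, d] initial node features (e.g. the encoder output).
    edges: iterable of (src, dst) pairs; self-loops are added below.
    Each hop aggregates neighbors with softmax-normalized attention scores,
    then mixes the result back with the hop-0 features (APPNP residual).
    """
    n, d = h0.shape
    edges = list(edges) + [(i, i) for i in range(n)]   # self-loops keep every row non-empty
    h = np.array(h0, dtype="float64")
    for _ in range(num_hops):
        scores = np.full((n, n), -1e9)                 # non-edges get ~zero attention weight
        for s, t in edges:
            scores[t, s] = h[t] @ h[s] / np.sqrt(d)    # dst attends to src
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax over neighbors
        h = (1 - alpha) * (attn @ h) + alpha * h0      # APPNP-style residual to initial features
    return h

# Example: 3-node path graph with random 4-dimensional features.
if __name__ == "__main__":
    feats = np.random.default_rng(0).normal(size=(3, 4))
    print(attn_appnp(feats, [(0, 1), (1, 0), (1, 2), (2, 1)]))
```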
### Install environment:
```
git clone https://github.com/PaddlePaddle/PGL.git
```
@@ -13,6 +22,7 @@ This experiment is based on stanford OGB (1.2.1) benchmark. The description of
### Arxiv dataset:
1. ```python main_arxiv.py --place 0 --log_file arxiv_baseline.txt``` to get the baseline result on the arxiv dataset.
2. ```python main_arxiv.py --place 0 --use_label_e --log_file arxiv_unimp.txt``` to get the UniMP result on the arxiv dataset (a sketch of the label-masking idea behind ```--use_label_e``` follows this list).
3. ```python main_arxiv_large.py --place 0 --use_label_e --log_file arxiv_unimp_large.txt``` to get the UniMP_large result on the arxiv dataset.
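The ```--use_label_e``` flag above enables masked label prediction: a random portion of the training labels is embedded as extra input features, while the remaining training labels are masked and used only as prediction targets, so the model learns to propagate observed labels to unlabeled nodes. Below is a minimal NumPy sketch of that masking step; the function name, the one-hot label features, and the ```label_rate``` default are illustrative assumptions rather than the exact code in ```main_arxiv.py```.

```python
import numpy as np

def mask_label_input(y, train_idx, num_classes, label_rate=0.625, rng=None):
    """Illustrative masked-label input construction (not the repo's exact code).

    A random `label_rate` fraction of the training nodes keep their label as a
    one-hot input feature; the remaining training nodes are masked and serve
    as prediction targets for this step.
    """
    rng = np.random.default_rng() if rng is None else rng
    train_idx = np.asarray(train_idx)
    perm = rng.permutation(len(train_idx))
    n_visible = int(label_rate * len(train_idx))
    visible_idx = train_idx[perm[:n_visible]]        # labels fed into the model
    target_idx = train_idx[perm[n_visible:]]         # labels the model must predict
    label_feat = np.zeros((len(y), num_classes), dtype="float32")
    label_feat[visible_idx] = np.eye(num_classes, dtype="float32")[y[visible_idx]]
    return label_feat, target_idx

# Example: 6 nodes, 3 classes, nodes 0-3 are training nodes.
if __name__ == "__main__":
    y = np.array([0, 2, 1, 1, 0, 2])
    feat, targets = mask_label_input(y, train_idx=[0, 1, 2, 3], num_classes=3)
    print(feat, targets)
```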
### Products dataset:
1. ```python main_product.py --place 0 --log_file product_unimp.txt --use_label_e``` to get the UniMP result on the Products dataset (a toy neighbor-sampling sketch follows below).
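The Products graph is too large for full-batch training, so ```main_product.py``` trains with neighbor sampling (the results section below notes that Products uses a NeighborSampler): each mini-batch starts from a set of seed nodes and samples a bounded number of neighbors per hop. The sketch below illustrates that sampling step; the adjacency-list format, function name, and fanouts are assumptions for illustration, not the repository's sampler API.

```python
import numpy as np

def sample_neighbors(adj_list, seed_nodes, fanouts=(15, 10), rng=None):
    """Toy multi-hop neighbor sampler (adjacency-list format and fanouts assumed).

    Returns the node frontier needed at each hop for one mini-batch: hop 0 is
    the seed nodes, hop k+1 holds at most `fanouts[k]` sampled neighbors per
    frontier node.
    """
    rng = np.random.default_rng() if rng is None else rng
    frontiers = [np.asarray(seed_nodes, dtype=int)]
    for fanout in fanouts:
        sampled = []
        for v in frontiers[-1]:
            neigh = np.asarray(adj_list[v], dtype=int)
            if len(neigh) > fanout:
                neigh = rng.choice(neigh, size=fanout, replace=False)
            sampled.extend(neigh.tolist())
        frontiers.append(np.unique(np.asarray(sampled, dtype=int)))
    return frontiers

# Example: star graph where node 0 links to nodes 1..5.
if __name__ == "__main__":
    adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
    print(sample_neighbors(adj, seed_nodes=[0], fanouts=(3, 2)))
```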
@@ -41,6 +51,7 @@ Arxiv_dataset(Full Batch): Products_dataset(NeighborSampler):
| ------------------ |-------------- | --------------- | -------------- |----------|
| Arxiv_baseline | 0.7225 ± 0.0015 | 0.7367 ± 0.0012 | 468,369 | Tesla V100 (32GB) |
| Arxiv_UniMP | 0.7311 ± 0.0021 | 0.7450 ± 0.0005 | 473,489 | Tesla V100 (32GB) |
| Arxiv_UniMP_large | 0.7379 ± 0.0014 | 0.7475 ± 0.0008 | 1,162,515 | Tesla V100 (32GB) |
| Products_baseline | 0.8023 ± 0.0026 | 0.9286 ± 0.0017 | 1,470,905 | Tesla V100 (32GB) |
| Products_UniMP | 0.8256 ± 0.0031 | 0.9308 ± 0.0017 | 1,475,605 | Tesla V100 (32GB) |
| Proteins_baseline | 0.8611 ± 0.0017 | 0.9128 ± 0.0007 | 1,879,664 | Tesla V100 (32GB) |
......