This experiment is based on the Stanford OGB (1.2.1) benchmark. The paper "Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification" is [available here](https://arxiv.org/pdf/2009.03509.pdf). The steps are:
### Note!
We propose **UniMP_large**, which widens our base model by increasing ```head_num``` and deepens it by incorporating [APPNP](https://www.in.tum.de/daml/ppnp/). Moreover, we are the first to propose an **Attention-based APPNP**, which further improves our model's performance.
To-do list:
- [x] UniMP_large in Arxiv
- [ ] UniMP_large in Products
- [ ] UniMP_large in Proteins
- [ ] UniMP_xxlarge
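The APPNP propagation used to deepen the model can be sketched as follows. This is a minimal NumPy illustration of the published APPNP update rule, not the repository's implementation; all names (`appnp_propagate`, `adj_norm`) are illustrative:

```python
import numpy as np

def appnp_propagate(h0, adj_norm, alpha=0.1, k=10):
    """APPNP-style propagation sketch (illustrative, not the repo's API).

    h0:       initial node predictions, shape [num_nodes, dim]
    adj_norm: normalized adjacency matrix (assumed precomputed)
    alpha:    teleport probability back to the initial predictions
    k:        number of propagation steps
    """
    h = h0
    for _ in range(k):
        # H^(t+1) = (1 - alpha) * A_hat @ H^(t) + alpha * H^(0)
        h = (1 - alpha) * adj_norm @ h + alpha * h0
    return h

# toy 2-node graph with a row-normalized adjacency
adj = np.array([[0.5, 0.5], [0.5, 0.5]])
h0 = np.eye(2)
out = appnp_propagate(h0, adj)
```

The teleport term `alpha * h0` keeps each node's final prediction anchored to its own initial prediction, which lets propagation run deep without over-smoothing. The Attention-based variant mentioned above would replace the fixed `adj_norm` with learned attention weights.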
### Install environment:
```
git clone https://github.com/PaddlePaddle/PGL.git
...
...
```
### Arxiv dataset:
1. ```python main_arxiv.py --place 0 --log_file arxiv_baseline.txt``` to get the baseline result on the Arxiv dataset.
2. ```python main_arxiv.py --place 0 --use_label_e --log_file arxiv_unimp.txt``` to get the UniMP result on the Arxiv dataset.
3. ```python main_arxiv_large.py --place 0 --use_label_e --log_file arxiv_unimp_large.txt``` to get the UniMP_large result on the Arxiv dataset.
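The ```--use_label_e``` runs enable masked label prediction: during training, a random portion of the training labels is fed to the model as input embeddings, while the masked remainder supplies the supervision signal. A minimal sketch of such a split, with hypothetical names (`split_label_input`, `mask_rate`) that do not correspond to the repository's API:

```python
import numpy as np

def split_label_input(train_idx, mask_rate=0.5, rng=None):
    """Illustrative UniMP-style label-masking split.

    Randomly partitions training nodes into:
      - label_idx: nodes whose ground-truth labels are fed as input
      - pred_idx:  masked nodes the model must predict (supervision)
    """
    rng = rng or np.random.default_rng(0)
    perm = rng.permutation(train_idx)
    n_mask = int(len(perm) * mask_rate)
    pred_idx, label_idx = perm[:n_mask], perm[n_mask:]
    return label_idx, pred_idx

label_idx, pred_idx = split_label_input(np.arange(100))
```

Resampling this split each epoch prevents the model from trivially copying input labels to outputs, since any given node is sometimes masked and sometimes visible.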
### Products dataset:
1. ```python main_product.py --place 0 --log_file product_unimp.txt --use_label_e``` to get the UniMP result on the Products dataset.