Below are the main dependencies and their version requirements for the project:
torch == 1.10.1+cu113
torch-geometric == 2.1.0.post1
torch-cluster == 1.5.9
torch-scatter == 2.0.9
torch-sparse == 0.6.12
torch-spline-conv == 1.2.1
scikit-learn == 1.3.2
numpy == 1.24.4
pandas == 2.0.3
matplotlib == 3.7.5
scipy == 1.10.1
datasets == 3.1.0
huggingface-hub == 0.27.1
wandb == 0.18.7
tqdm == 4.67.0
filelock == 3.16.1
protobuf == 5.28.3
requests == 2.32.3
typing_extensions == 4.12.2
attrs == 24.3.0
To install all required packages, run the following command from the root directory of the repository:
pip install -r requirements.txt
Download the dataset from: https://huggingface.co/datasets/aboutime233/BRIDGE-data/tree/main
Unzip the downloaded dataset archive into a folder named data in the root directory of the repository.
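As a minimal sketch of the extraction step using only the Python standard library (the archive filename here is an assumption; substitute the actual downloaded file):

```python
import zipfile
from pathlib import Path

def extract_dataset(archive: str, dest: str = "data") -> Path:
    """Extract the downloaded dataset archive into `dest` and return the folder path."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out)
    return out

# Example (archive name is hypothetical):
# extract_dataset("BRIDGE-data.zip")
```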
Navigate to the model-node/scripts directory and execute the following command:
python main.py --dataset Cora --shot_num 1
In the same directory as main.py, execute the following command:
python main.py --dataset Cora --model_path pretrain_model.pkl
Parameter Explanation:
--dataset Select the dataset to use.
--shot_num Specify the few-shot setting for fine-tuning: 1 (1-shot) or 5 (5-shot).
--model_path The path to the pretrained model parameters saved during the pretraining phase.
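The flags above suggest a command-line interface roughly like the following argparse sketch. This is an assumed reconstruction for illustration, not the repository's actual main.py:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of the assumed CLI; defaults and help strings are illustrative.
    parser = argparse.ArgumentParser(description="Run an experiment (sketch)")
    parser.add_argument("--dataset", type=str, default="Cora",
                        help="Dataset to use (e.g. Cora, Citeseer, Pubmed)")
    parser.add_argument("--shot_num", type=int, choices=[1, 5], default=1,
                        help="Few-shot setting for fine-tuning: 1 or 5")
    parser.add_argument("--model_path", type=str, default=None,
                        help="Path to pretrained model parameters saved during pretraining")
    return parser

args = build_parser().parse_args(["--dataset", "Cora", "--shot_num", "1"])
print(args.dataset, args.shot_num)  # Cora 1
```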
You can reproduce our results by executing the following commands in the scripts directory under the corresponding task type.
python main.py --dataset Cora
The dataset parameter can be set to one of the following six datasets: Cora, Citeseer, Computers, Pubmed, Reddit, and Photo.
You also need to add parameters as shown below:
--lr 0.00008094590967608754
--l2_coef 0.00004404197581665391
--hid_units 256
--lambda_entropy 0.20401015296835048
--dropout_rate 0.1913510180577923
--variance_weight 1521434.9368374627
--downstreamlr 0.000962404050084371
--reg_weight 1
--reg_thres 0.4
--shot_num 1
When the dataset is Reddit, the learning rate --lr should be set to 0.00001.
python main.py --dataset Cora
The dataset parameter can be set to one of the following six datasets: Cora, Citeseer, Computers, Pubmed, Reddit, and Photo.
You also need to add parameters as shown below:
--lr 0.00001
--l2_coef 0.00004404197581665391
--hid_units 256
--lambda_entropy 0.20401015296835048
--dropout_rate 0.1913510180577923
--variance_weight 1521434.9368374627
--downstreamlr 0.000962404050084371
--reg_weight 1
--reg_thres 0.4
--shot_num 5
python main.py --dataset Cora
The dataset parameter can be set to one of the following six datasets: Cora, Citeseer, Computers, Pubmed, Reddit, and Photo.
You also need to add parameters as shown below:
--lr 0.00008094590967608754
--l2_coef 0.00004404197581665391
--hid_units 256
--lambda_entropy 0.07845469891338196
--dropout_rate 0.1913510180577923
--variance_weight 1521434.9368374627
--downstreamlr 0.001
--reg_weight 1
--reg_thres 0.4
--shot_num 1
python main.py --dataset Cora
The dataset parameter can be set to one of the following six datasets: Cora, Citeseer, Computers, Pubmed, Reddit, and Photo.
You also need to add parameters as shown below:
--lr 0.00008094590967608754
--l2_coef 0.00004404197581665391
--hid_units 256
--lambda_entropy 0.07845469891338196
--dropout_rate 0.1913510180577923
--variance_weight 1521434.9368374627
--downstreamlr 0.001
--reg_weight 1
--reg_thres 0.4
--shot_num 5
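For convenience, the reproduction commands can also be assembled programmatically. This sketch copies the hyperparameter values from the 5-shot list above; the `--flag value` layout of main.py is an assumption:

```python
# Values copied from the 5-shot parameter list above; the exact
# command-line layout of main.py is an assumption.
PARAMS_5SHOT = {
    "lr": "0.00008094590967608754",
    "l2_coef": "0.00004404197581665391",
    "hid_units": "256",
    "lambda_entropy": "0.07845469891338196",
    "dropout_rate": "0.1913510180577923",
    "variance_weight": "1521434.9368374627",
    "downstreamlr": "0.001",
    "reg_weight": "1",
    "reg_thres": "0.4",
    "shot_num": "5",
}

def build_command(dataset: str, params: dict) -> str:
    """Assemble the `python main.py ...` command line for one run."""
    flags = " ".join(f"--{k} {v}" for k, v in params.items())
    return f"python main.py --dataset {dataset} {flags}"

print(build_command("Cora", PARAMS_5SHOT))
```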