---
license: apache-2.0
---

## Overview

This dataset contains the encoder embeddings and LLM prediction results for the paper "Model Generalization on Text Attribute Graphs: Principles with Large Language Models" by Haoyu Wang, Shikun Liu, Rongzhe Wei, and Pan Li.

## Dataset Description

The dataset is organized as follows:

```plaintext
/dataset/
│── [dataset_name]/
│   │── processed_data.pt   # Contains labels and graph information
│   │── [encoder]_x.pt      # Features extracted by different encoders
│   │── categories.csv      # Raw texts of label names
│   │── raw_texts.pt        # Raw text of each node
```

### File Descriptions

- **`processed_data.pt`**: A PyTorch file storing the processed dataset, including the graph structure and node labels. Note that for the heterophilic datasets this file is named `[Dataset].pt` (where `Dataset` is, e.g., `Cornell`) and should be loaded with DGL.
- **`[encoder]_x.pt`**: Feature matrices extracted by different encoders, where `[encoder]` is the encoder name.
- **`categories.csv`**: Raw label names.
- **`raw_texts.pt`**: Raw node texts. Note that for the heterophilic datasets this file is named `[Dataset].csv` (where `Dataset` is, e.g., `Cornell`).

### Dataset Naming Convention

`[dataset_name]` should be one of the following:

- `cora`
- `citeseer`
- `pubmed`
- `bookhis`
- `bookchild`
- `sportsfit`
- `wikics`
- `cornell`
- `texas`
- `wisconsin`
- `washington`

### Encoder Naming Convention

`[encoder]` can be one of the following:

- `sbert` (the Sentence-BERT encoder)
- `roberta` (the RoBERTa encoder)
- `llmicl_primary` (the vanilla LLM2Vec encoder)
- `llmicl_class_aware` (the task-adaptive encoder)
- `llmgpt_text-embedding-3-large` (the OpenAI text-embedding-3-large embedding API)

## Results Description

The `./results/` folder contains the prediction results of GPT-4o for node text classification and of GPT-4o-mini for homophily ratio prediction.

```plaintext
./results/nc_[DATASET]/4o/llm_baseline     # node text classification predictions
./results/nc_[DATASET]/4o_mini/agenth      # homophily ratio predictions
```
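
## Loading Example

A minimal loading sketch based on the file layout described above. The paths, the chosen encoder (`sbert`), and the contents of each `.pt` file are assumptions inferred from the descriptions here; inspect the loaded objects to see what they actually contain.

```python
import torch

# Homophilic dataset (e.g. cora): plain PyTorch files.
data = torch.load("dataset/cora/processed_data.pt")    # graph structure and node labels
features = torch.load("dataset/cora/sbert_x.pt")       # node features from the sbert encoder
raw_texts = torch.load("dataset/cora/raw_texts.pt")    # raw text of each node

print(type(data), features.shape, len(raw_texts))

# Heterophilic datasets (e.g. cornell) are stored as DGL graph files instead
# and should be opened with DGL (assumed usage):
# import dgl
# graphs, _ = dgl.load_graphs("dataset/cornell/Cornell.pt")
# g = graphs[0]
```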