---
license: afl-3.0
language:
- en
base_model:
- google/flan-t5-xl
pipeline_tag: text-classification
tags:
- personality
---



## Model Details

* **Model Type:** PersonalityClassifier is fine-tuned from `google/flan-t5-xl` on annotated data for personality classification.
* **Model Date:** PersonalityClassifier was trained in Jan 2024.
* **Paper or resources for more information:** [https://arxiv.org/abs/2504.06868](https://arxiv.org/abs/2504.06868)
* **Train data:** [https://huggingface.co/datasets/mirlab/personality_120000](https://huggingface.co/datasets/mirlab/personality_120000)
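The training data linked above can be inspected with the `datasets` library. This is a minimal sketch; printing the returned `DatasetDict` shows the available splits and columns, so check the dataset card for the actual schema.

```python
from datasets import load_dataset

# Load the training data referenced above; printing the DatasetDict
# shows the available splits and column names.
ds = load_dataset("mirlab/personality_120000")
print(ds)
```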

## Requirements

* `torch==2.1.0`
* `transformers==4.29.0`
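If you run into import or generation errors, it can help to confirm that the installed versions match the pins above. A minimal check (assuming both packages are installed; nearby versions may also work):

```python
import torch
import transformers

# The card pins torch 2.1.0 and transformers 4.29.0; print what is actually installed.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
```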

## How to use the model

```python
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer

# Set device to CUDA if available, otherwise use CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load model and tokenizer
model_name = "mirlab/PersonalityClassifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)

# Define model inference function
def modelGenerate(input_text, lm, tokenizer):
    # Tokenize input text and move to device
    input_ids = tokenizer(input_text, truncation=True, padding=True, return_tensors='pt')['input_ids'].to(device)

    # Generate text using the model
    model_output = lm.generate(input_ids)

    # Decode generated tokens into text
    model_answer = tokenizer.batch_decode(model_output, skip_special_tokens=True)

    return model_answer

# Example input text
# Format: "[Valence] Statement: [Your Statement]. Trait: [Target Trait]"
# Target Trait is among ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism", "Machiavellianism", "Narcissism", "Psychopathy"].
# Valence indicates positive (+) or negative (-) alignment with the trait.

input_texts = "[Valence] Statement: I am outgoing. Trait: Extraversion"

# Generate output using the model and print
output_texts = modelGenerate(input_texts, model, tokenizer)
print(output_texts)
```
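
Because the tokenizer inside `modelGenerate` is called with `padding=True`, the helper also accepts a list of prompts. The sketch below scores a single statement against two traits; the decoded strings are printed as-is, since the exact label format the model produces is not documented here.

```python
# Batch usage sketch: score one statement against several traits.
# "[Valence]" is kept as the literal placeholder from the format above;
# outputs are printed verbatim rather than assuming a specific label set.
statement = "I am outgoing."
traits = ["Extraversion", "Neuroticism"]

prompts = [f"[Valence] Statement: {statement} Trait: {trait}" for trait in traits]

for prompt, answer in zip(prompts, modelGenerate(prompts, model, tokenizer)):
    print(f"{prompt} -> {answer}")
```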