Model Description
This is a lightweight model designed for multi-class, multi-label classification of sentences based on the dimensional traits of the Hierarchical Taxonomy of Psychopathology (HiTOP).
HiTOP is an empirically derived framework that organizes psychopathology into continuously distributed dimensions arranged hierarchically, representing a paradigm shift away from traditional categorical diagnostic systems. The framework conceptualizes mental health conditions along spectra including internalizing (e.g., depression, anxiety), thought disorder, detachment, disinhibited externalizing, and antagonistic externalizing.
Architecture
This model adapts sentence transformer architectures for multi-label text classification, enabling the simultaneous prediction of multiple HiTOP dimensional traits from individual sentences. The lightweight design makes it suitable for deployment in resource-constrained environments while maintaining strong classification performance.
Classification Task
Unlike traditional categorical diagnosis, this model predicts the degree to which a sentence reflects various HiTOP dimensions, acknowledging that psychopathological features exist on continua rather than as discrete categories. Each input sentence can be assigned multiple labels corresponding to different HiTOP traits, capturing the co-occurrence and overlap of psychological symptoms that characterize real-world clinical presentations.
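To make the multi-label setup concrete, here is a minimal sketch of how independent per-dimension scores can yield several labels for one sentence. The dimension names and threshold are illustrative assumptions, not this model's actual label set; a sigmoid per dimension (rather than a single softmax) is what allows co-occurring traits.

```python
import math

# Hypothetical HiTOP dimension names; the model's actual label set may differ.
HITOP_DIMENSIONS = [
    "internalizing",
    "thought disorder",
    "detachment",
    "disinhibited externalizing",
    "antagonistic externalizing",
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_decode(logits, threshold=0.5):
    """Turn one logit per HiTOP dimension into a set of labels.

    Unlike softmax (which forces a single winning class), each dimension
    is scored independently, so a sentence can carry several labels at once.
    """
    return [
        name
        for name, logit in zip(HITOP_DIMENSIONS, logits)
        if sigmoid(logit) >= threshold
    ]

# Example: high internalizing and detachment scores, low elsewhere.
print(multilabel_decode([2.1, -1.3, 0.8, -2.0, -0.5]))
# -> ['internalizing', 'detachment']
```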
Applications
- Dimensional assessment: Evaluating multiple psychopathology dimensions simultaneously from text
- Clinical screening: Identifying patterns of mental health concerns in social media or clinical notes
- Research: Studying the hierarchical structure of psychopathology in naturalistic language data
- Symptom tracking: Monitoring changes across multiple psychological dimensions over time
This approach aligns with contemporary evidence-based frameworks for understanding mental health, offering improved reliability and coverage compared to traditional categorical classification systems.
Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


class HiTOPTraitsPredictor:
    def __init__(self, model_name="FritzStack/HiTOP-QWEN-4bit"):
        self.model_name = model_name
        self.tokenizer = None
        self.model = None

    def _load_model(self):
        # Lazy-load so instantiating the class stays cheap.
        if self.tokenizer is None:
            self.tokenizer = AutoTokenizer.from_pretrained(
                self.model_name,
                trust_remote_code=True,
            )
            # Decoder-only models need left padding for batched generation,
            # and Qwen tokenizers ship without a dedicated pad token.
            self.tokenizer.padding_side = "left"
            if self.tokenizer.pad_token is None:
                self.tokenizer.pad_token = self.tokenizer.eos_token
        if self.model is None:
            self.model = AutoModelForCausalLM.from_pretrained(
                self.model_name,
                torch_dtype=torch.float16,
                device_map="auto",
                trust_remote_code=True,
            )

    def batch_predict(self, texts, max_new_tokens=50, do_sample=False, top_k=10):
        self._load_model()
        prompts = [f"{text}. HiTOP Traits: " for text in texts]
        inputs = self.tokenizer(
            prompts, return_tensors="pt", padding=True, truncation=True
        ).to(self.model.device)
        with torch.no_grad():
            outputs = self.model.generate(
                **inputs,
                max_new_tokens=max_new_tokens,
                do_sample=do_sample,
                top_k=top_k,
                pad_token_id=self.tokenizer.pad_token_id,
            )
        results = []
        for output in outputs:
            # Keep only the newly generated tokens; all prompts share the
            # same padded length, so slicing at that length drops the prompt.
            generated_text = self.tokenizer.decode(
                output[inputs.input_ids.shape[1]:],
                skip_special_tokens=True,
            ).strip()
            results.append(generated_text)
        return results


predictor = HiTOPTraitsPredictor()
texts = [
    "I feel sad and unmotivated most days",
    "I get very angry and irritable with people",
    "I have trouble concentrating and staying focused",
]
results = predictor.batch_predict(texts, max_new_tokens=50)
for text, result in zip(texts, results):
    print(f"Input: {text}")
    print(f"HiTOP Traits: {result}\n")
```
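The generated output is free text, so downstream use typically needs a small post-processing step. The sketch below assumes the model emits comma-separated trait names (an assumption about this model's output format; adjust the delimiter if your outputs differ):

```python
def parse_traits(generated):
    """Split a generated string like "Internalizing, Detachment" into a
    clean list of lowercase labels.

    The comma-separated output format is an assumption about this model;
    inspect a few raw generations before relying on it.
    """
    return [t.strip().lower() for t in generated.split(",") if t.strip()]

print(parse_traits("Internalizing, Detachment "))
# -> ['internalizing', 'detachment']
```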
Uploaded model
- Developed by: FritzStack
- License: apache-2.0
- Finetuned from model: unsloth/qwen3-0.6b-bnb-4bit
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
