Qwen-1.5B Causal
A derivative of Qwen/Qwen2.5-1.5B fine-tuned to extract causal links of the form A ->+ B (positive) and A ->- B (negative) from natural-language paragraphs.
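For illustration (a made-up example, not verified model output), a paragraph such as "Drought reduces soil moisture, and low soil moisture lowers crop yield." should yield edges like:

drought ->- soil moisture
soil moisture ->+ crop yield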
License: derivative of Qwen/Qwen2.5-1.5B; see LICENSE. Users must comply with the base model's license.
Quickstart
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

MODEL = "dorito96/qwen2.5-1.5b_causal"

tok = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    # Prefer bf16 on supported GPUs, fall back to fp16, then CPU fp32.
    # torch_dtype is the long-standing kwarg name (recent transformers also accepts dtype=).
    torch_dtype=(torch.bfloat16 if torch.cuda.is_available() and torch.cuda.is_bf16_supported()
                 else (torch.float16 if torch.cuda.is_available() else torch.float32)),
    device_map=("auto" if torch.cuda.is_available() else "cpu"),
    trust_remote_code=True,
)
model.eval()

PROMPT_PREFIX = "### Paragraph:\n"
TARGET_PREFIX = "\n\n### Targets:\n"

paragraph = "More rainfall increases crop yield."
prompt = f"{PROMPT_PREFIX}{paragraph}{TARGET_PREFIX}"
inputs = tok(prompt, return_tensors="pt").to(next(model.parameters()).device)

gen = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=6,
    do_sample=False,
    eos_token_id=tok.eos_token_id,
    # Qwen tokenizers may not define a pad token; fall back to EOS.
    pad_token_id=tok.pad_token_id if tok.pad_token_id is not None else tok.eos_token_id,
    no_repeat_ngram_size=3,
)

# Decode only the newly generated tokens (skip the prompt).
text = tok.decode(gen[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()
print(text)  # rainfall ->+ crop yield
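The generated text encodes each edge as A ->+ B or A ->- B. A minimal parser for downstream use (a sketch: it assumes edges are separated by newlines or semicolons, which is an assumption about the output format, not documented behavior):

import re

# Matches "cause ->+ effect" / "cause ->- effect", tolerant of extra whitespace.
EDGE_RE = re.compile(r"(.+?)\s*->([+-])\s*(.+)")

def parse_edges(text: str):
    """Parse model output into (cause, sign, effect) triples.

    Assumes one edge per line or semicolon-separated edges; the exact
    separator used by the model is an assumption here.
    """
    edges = []
    for chunk in re.split(r"[;\n]", text):
        m = EDGE_RE.match(chunk.strip())
        if m:
            edges.append((m.group(1).strip(), m.group(2), m.group(3).strip()))
    return edges

print(parse_edges("rainfall ->+ crop yield"))  # [('rainfall', '+', 'crop yield')]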
Limitations
- Optimized for causal-relation extraction; this is not a general chat model.
- May hallucinate on out-of-domain inputs. It works best on sentences where the causal relationship is stated explicitly, as in the Quickstart example; a simple post-hoc filter is sketched below. I will keep working to improve this.
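One crude guard against hallucinated edges (a heuristic sketch, not part of the model): keep only edges whose cause and effect phrases appear verbatim in the input paragraph. Note it will also drop valid edges whose phrases the model paraphrased.

def filter_grounded(edges, paragraph: str):
    """Keep (cause, sign, effect) triples whose endpoints occur
    verbatim (case-insensitively) in the source paragraph."""
    low = paragraph.lower()
    return [(c, s, e) for (c, s, e) in edges
            if c.lower() in low and e.lower() in low]

paragraph = "More rainfall increases crop yield."
edges = [("rainfall", "+", "crop yield"), ("fertilizer", "+", "crop yield")]
print(filter_grounded(edges, paragraph))  # [('rainfall', '+', 'crop yield')]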
Acknowledgments
Base model by the Qwen team.
© 2025 Aritra Majumdar (GitHub: https://github.com/bear96). Provided for research and educational use with attribution.