xlm-roberta-hp-ftemb

Model Description

Fine-tuned checkpoint based on XLM-RoBERTa. [Add your model description here]

Usage

from transformers import AutoModel, AutoTokenizer
import torch

# XLM-RoBERTa is an encoder-only model, so it is loaded with AutoModel
# rather than a seq2seq class; generate() does not apply here.
model = AutoModel.from_pretrained("Joshua4a/xlm-roberta-hp-ftemb")
tokenizer = AutoTokenizer.from_pretrained("Joshua4a/xlm-roberta-hp-ftemb")

# Example: encode a single input text
text = "Your input text here"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one vector per input
# (one common way to obtain a sentence embedding; adapt to the model's actual task)
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embedding.shape)  # (batch_size, hidden_size)
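
If the checkpoint is meant to produce sentence embeddings (an assumption based on the "ftemb" suffix, not confirmed by this card), the pooled vectors can be compared with cosine similarity. A minimal sketch that reuses the model and tokenizer loaded above; the encode helper is illustrative, not part of the released code:

import torch
import torch.nn.functional as F

def encode(sentences):
    # Tokenize a batch of sentences and mean-pool the token embeddings,
    # ignoring padding positions via the attention mask.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out.last_hidden_state * mask).sum(1) / mask.sum(1)

# Similarity between two (here, cross-lingual) sentences: higher = more similar
emb = encode(["The weather is nice today.", "Il fait beau aujourd'hui."])
score = F.cosine_similarity(emb[0:1], emb[1:2]).item()
print(f"cosine similarity: {score:.3f}")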

Training Details

[Add training information]

Citation

[Add citation if applicable]

Model Details

Model size: ~0.6B parameters
Tensor type: F32
Format: safetensors