from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
import gradio as gr
import nltk
from nltk.tokenize import sent_tokenize
import torch

nltk.download("punkt")
nltk.download("punkt_tab")

model_name = "MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli"
# Alternative: "MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

labels = ["entailment", "neutral", "contradiction"]


def nli(hypothesis, premise):
    """Score a premise/hypothesis pair; return a label -> probability dict."""
    inputs = tokenizer(
        premise, hypothesis, return_tensors="pt", truncation=True, max_length=512
    ).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    probs = torch.softmax(logits, -1).tolist()
    return dict(zip(labels, probs))


def get_labels(result):
    """Return the most probable label from an nli() result dict."""
    if result["entailment"] > result["neutral"] and result["entailment"] > result["contradiction"]:
        return "entailment"
    elif result["contradiction"] > result["neutral"] and result["contradiction"] > result["entailment"]:
        return "contradiction"
    else:
        return "neutral"
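The `nli()`/`get_labels()` pair reduces to a softmax over the three class logits followed by an argmax over the resulting dict. A minimal sketch of that decision rule with made-up logits (the values here are illustrative, not model output, so no model download is needed):

```python
import math

labels = ["entailment", "neutral", "contradiction"]
logits = [3.2, 0.1, -1.4]  # hypothetical logits; real ones come from the model

# Softmax, as torch.softmax(logits, -1) does inside nli()
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = dict(zip(labels, (e / total for e in exps)))

# Pick the highest-probability label, the decision get_labels() makes
best = max(probs, key=probs.get)
print(best)
```

Because the probabilities sum to 1 and the argmax of the softmax equals the argmax of the raw logits, comparing probabilities (as `get_labels` does) and comparing logits give the same label.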