# Coma Model

This repository contains a custom model for feature extraction, built on the BART architecture. The model and tokenizer are hosted on Hugging Face, and inference is integrated into a Glitch server that processes WhatsApp messages.

## Usage

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sujan07/coma")
model = AutoModelForSeq2SeqLM.from_pretrained("sujan07/coma")
```
## Files and Directories

- `model/`: `config.json`, `generation_config.json`, `model.safetensors`
- `tokenizer/`: `merges.txt`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.json`
- `README.md`: this file.
## Getting Started

### 1. Set Up the Hugging Face Model

Ensure all model and tokenizer files are correctly uploaded to your Hugging Face repository.

### 2. Environment Variables

Create a `.env` file in your Glitch project with the following content:
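The actual `.env` contents are not included in this section. As a rough sketch, a Glitch server calling a Hugging Face model typically needs at least an access token; the variable names below are assumptions for illustration, not values taken from this repository:

```
# Hugging Face access token used by the Glitch server (name is an assumption)
HF_TOKEN=your_huggingface_token
# Model repository ID (assumed variable name)
MODEL_ID=sujan07/coma
```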
## Use a Pipeline

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="sujan07/coma")
```
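The feature-extraction pipeline returns token-level hidden states (one vector per token). A common way to reduce them to a single fixed-size sentence embedding is mean pooling over the token axis. The sketch below uses a dummy NumPy array in place of real pipeline output; the hidden size of 768 is an assumption for illustration:

```python
import numpy as np

# Simulated pipeline output: 1 sentence, 7 tokens, 768-dim hidden states
# (real output of pipe("some text") is a nested list with the same layout)
features = np.random.rand(1, 7, 768)

# Mean-pool across the token axis to get one fixed-size vector per sentence
sentence_embedding = features.mean(axis=1)
print(sentence_embedding.shape)  # (1, 768)
```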