How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="sujan07/coma")

# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sujan07/coma")
model = AutoModelForSeq2SeqLM.from_pretrained("sujan07/coma")
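
As a quick check, the pipeline can be called directly on a piece of text; the sample input below is illustrative and not taken from this project:

from transformers import pipeline

pipe = pipeline("feature-extraction", model="sujan07/coma")

# A single string typically returns a nested list shaped [1 x tokens x hidden size]
features = pipe("hello from a WhatsApp message")
print(len(features[0]), len(features[0][0]))  # number of tokens, embedding width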
Coma Model

This repository contains a custom model for feature extraction built on the BART architecture. The model and tokenizer are hosted on Hugging Face, and inference runs on a Glitch server that processes WhatsApp messages.

Files and Directories

  • model/
    • config.json
    • generation_config.json
    • model.safetensors
  • tokenizer/
    • merges.txt
    • special_tokens_map.json
    • tokenizer_config.json
    • vocab.json
  • README.md: This file.
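
When working from a clone of this repository, the same classes can be pointed at the local directories instead of the Hub; a minimal sketch, assuming it is run from the repository root:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load each component from its own local directory
tokenizer = AutoTokenizer.from_pretrained("tokenizer")
model = AutoModelForSeq2SeqLM.from_pretrained("model")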

Getting Started

1. Set Up Hugging Face Model

Ensure all model and tokenizer files are correctly uploaded to your Hugging Face repository.
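
If the files are not on the Hub yet, one way to push them is with the huggingface_hub client; the sketch below uses this project's directory layout and repository id and assumes you are already logged in (for example with huggingface-cli login):

# One possible way to push the local files (pip install huggingface_hub)
from huggingface_hub import HfApi

api = HfApi()
api.create_repo(repo_id="sujan07/coma", exist_ok=True)

# Upload the weights and tokenizer files into the root of the Hub repository
api.upload_folder(repo_id="sujan07/coma", folder_path="model")
api.upload_folder(repo_id="sujan07/coma", folder_path="tokenizer")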

2. Environment Variables

Create a .env file in your Glitch project with the environment variables the server needs.
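
The exact values depend on how the Glitch server reaches the model and are not specified here; a hypothetical sketch, assuming the server only needs a Hugging Face access token and the model id:

# Hypothetical contents; replace with the values your server actually uses
HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxx
MODEL_ID=sujan07/coma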
