- Text Classification
- Transformers
- Safetensors
- llama
- Generated from Trainer
- trl
- reward-trainer
- text-embeddings-inference
Instructions to use tsessk/content with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use tsessk/content with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="tsessk/content")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tsessk/content")
model = AutoModelForSequenceClassification.from_pretrained("tsessk/content")
```

- Notebooks
- Google Colab
- Kaggle
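When the model is loaded directly with `AutoModelForSequenceClassification`, it returns raw logits rather than probabilities. A minimal sketch of turning logits into a predicted label follows; the logit values here are made up for illustration, not actual output from tsessk/content:

```python
import math

# Hypothetical logits, e.g. what you would get from
# model(**tokenizer("some text", return_tensors="pt")).logits
logits = [1.2, -0.3]

# Softmax converts raw scores into probabilities that sum to 1.
m = max(logits)  # subtract the max for numerical stability
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted class is the index of the highest probability;
# model.config.id2label maps that index to a label name.
label_id = probs.index(max(probs))
print(label_id, probs)
```

The pipeline helper performs this conversion internally and returns label/score dictionaries directly.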
- Xet hash: 7abf407e9f8c5c70bc35de579582b5ba02a7ede091389676b21fe7305042e068
- Size of remote file: 36.5 MB
- SHA256: 1ef64781aa03180f4f5ce504314f058f5d0227277df86060473d973cf43b033e
Xet efficiently stores large files inside Git by splitting them into unique chunks, deduplicating shared content and accelerating uploads and downloads.
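The chunk-splitting idea can be sketched with a toy content-defined chunker: boundaries depend on the bytes themselves, so identical content produces identical chunks wherever it appears in a file. The window size and boundary mask below are illustrative placeholders, not Xet's actual parameters:

```python
def chunk(data: bytes, window: int = 8, mask: int = 0x3F) -> list[bytes]:
    """Split data where a rolling sum of the last `window` bytes
    matches the bit mask, yielding content-defined chunk boundaries."""
    chunks, start, rolling = [], 0, 0
    for i, b in enumerate(data):
        rolling += b
        if i >= window:
            rolling -= data[i - window]  # keep the sum over the last `window` bytes
        # Enforce a minimum chunk size, then cut when the hash matches the mask.
        if i - start + 1 >= window and (rolling & mask) == mask:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # trailing bytes form the final chunk
    return chunks

data = bytes(range(256)) * 4
chunks = chunk(data)
assert b"".join(chunks) == data  # chunking is lossless
```

Because boundaries are content-defined, inserting bytes near the start of a file only changes the chunks around the edit; later chunks keep the same hashes and need not be re-uploaded.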