Model Trained Using AutoTrain

  • Problem type: Text Classification

Validation Metrics

  • loss: 0.03548985347151756
  • f1: 0.9950522264980759
  • precision: 0.9945054945054945
  • recall: 0.9955995599559956
  • auc: 0.9997361672360855
  • accuracy: 0.995049504950495

Purpose

I trained this classifier on top-level Reddit comments. The human class consisted of real responses to self-posts in various subreddits, and the LLM class consisted of responses generated by one of several LLMs to the same posts. I am tired of reading fucking GPT-slop comments on Reddit.
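
As a concrete illustration of the two-class setup, here is a hypothetical sketch of a single training example; the field names and label strings are illustrative and are not the actual AutoTrain column names.

```ts
// Hypothetical shape of one training example. Field names and label strings
// are illustrative; the real dataset schema is not published here.
interface SlopExample {
  text: string;             // a top-level Reddit comment
  label: "human" | "llm";   // "human" = real reply, "llm" = model-generated reply to the same post
}

const examples: SlopExample[] = [
  { text: "A real reply written by a redditor.", label: "human" },
  { text: "Certainly! Here are five key takeaways from your post:", label: "llm" },
];
```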

Notes

A converted ONNX model is available for compatibility with transformers.js. A browser extension and a mini version are coming soon.
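
Below is a minimal sketch of running the classifier with transformers.js, assuming the converted ONNX weights are hosted in the trentmkelly/slop-detector repo and load through the standard text-classification pipeline.

```ts
// Minimal sketch: text classification with transformers.js.
// Assumes the converted ONNX weights are available in the trentmkelly/slop-detector repo.
import { pipeline } from "@huggingface/transformers";

// Downloads and caches the model on first use.
const classifier = await pipeline("text-classification", "trentmkelly/slop-detector");

// Returns an array like [{ label: "...", score: 0.99 }];
// the label names come from the model's config, not from this snippet.
const result = await classifier("Thanks for sharing! Here are three key takeaways from your post:");
console.log(result);
```

The same call works in the browser and in Node; with transformers.js v2 the package is published as @xenova/transformers instead.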


Base model

thenlper/gte-base