# davanstrien/eval-mentions-bootstrap-v2

Bootstrap NER dataset produced by `urchade/gliner_multi-v2.1` over `/input/cleaned-cards-quality.parquet`.

Generated using `uv-scripts/gliner/extract-entities.py`.
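The extraction script itself is not reproduced in this card. As a rough, hedged sketch of what one GLiNER pass with these settings looks like (the `GLiNER.from_pretrained` and `predict_entities` calls are the gliner library's standard API; the batching, I/O, and exact truncation logic of `extract-entities.py` are assumptions):

```python
# Minimal sketch of the bootstrap extraction step (not the actual
# uv-scripts/gliner/extract-entities.py; its batching and I/O details are not shown here).
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")
labels = ["benchmark name", "evaluation metric"]  # entity types from the provenance table below

card_text = "... model card markdown ..."         # stand-in for one value of the `card` column
entities = model.predict_entities(card_text[:8000], labels, threshold=0.6)

for ent in entities:
    # GLiNER returns character offsets, the matched span, the label, and a confidence score,
    # which is exactly the per-entity record stored in this dataset's `entities` column.
    print(ent["start"], ent["end"], ent["text"], ent["label"], round(ent["score"], 3))
```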
## Provenance

| Field | Value |
|---|---|
| Source dataset | `/input/cleaned-cards-quality.parquet` (split `train`) |
| Text column | `card` |
| Bootstrap model | `urchade/gliner_multi-v2.1` |
| Entity types | benchmark name, evaluation metric |
| Confidence threshold | 0.6 |
| Samples processed | 5000 |
| Total entities extracted | 4492 |
| Inference device | cuda |
| Wall clock | 553.9 s (9.03 samples/s) |
## Schema

Original `/input/cleaned-cards-quality.parquet` columns (`modelId`, `author`, `last_modified`, `downloads`, `likes`, `library_name`, `tags`, `pipeline_tag`, `createdAt`, `card`) plus an `entities` column:
```
entities: list of {
    "start": int,    # character offset, inclusive
    "end": int,      # character offset, exclusive
    "text": str,     # the matched span
    "label": str,    # one of ['benchmark name', 'evaluation metric']
    "score": float,  # GLiNER confidence in [0, 1]
}
```
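Because `start` is inclusive and `end` is exclusive, slicing the `card` text with these offsets reproduces the matched span. A small sketch using the standard `datasets` API (the dataset ID and split name are taken from this card):

```python
# Sketch: load the dataset and recover each entity span from its character offsets.
from datasets import load_dataset

ds = load_dataset("davanstrien/eval-mentions-bootstrap-v2", split="train")

row = ds[0]
for ent in row["entities"]:
    span = row["card"][ent["start"]:ent["end"]]  # inclusive start, exclusive end
    print(f'{ent["label"]:>18}  {ent["score"]:.2f}  {span!r}')
    # `span` should match ent["text"] given the offset convention above.
```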
## Caveats

- These are bootstrap labels, not human-reviewed. Treat low-confidence (< 0.7) entities as candidates for review; a filtering sketch follows this list.
- GLiNER is zero-shot: changing `--entity-types` changes what it extracts, but quality varies by entity type.
- Long texts were truncated at 8000 characters before inference.
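One minimal way to act on the first caveat, assuming the same `datasets` loading as above and keeping 0.7 as the (adjustable) review cutoff:

```python
# Sketch: collect low-confidence bootstrap entities as candidates for manual review.
from datasets import load_dataset

ds = load_dataset("davanstrien/eval-mentions-bootstrap-v2", split="train")

REVIEW_CUTOFF = 0.7  # cutoff suggested by the caveat above; adjust to taste

needs_review = [
    (row["modelId"], ent["text"], ent["label"], ent["score"])
    for row in ds
    for ent in row["entities"]
    if ent["score"] < REVIEW_CUTOFF
]
print(f"{len(needs_review)} entities flagged for review")
```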