| Column | Type | Range / Notes |
|---|---|---|
| modelId | string | lengths 9–122 |
| author | string | lengths 2–36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 – 2026-05-05 06:14:24 |
| downloads | int64 | 0 – 4.03M |
| likes | int64 | 0 – 4.32k |
| library_name | string | 189 distinct values |
| tags | list | lengths 1–237 |
| pipeline_tag | string | 53 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2026-05-05 05:54:22 |
| card | string | lengths 500–661k |
| entities | list | lengths 0–12 |
ShrutiSachan/Llama-3.2-1B-Q4_0-GGUF | ShrutiSachan | 2026-02-27T09:28:55Z | 41 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",... | text-generation | 2026-02-27T09:28:47Z | # ShrutiSachan/Llama-3.2-1B-Q4_0-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B`](https://huggingface.co/meta-llama/Llama-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingfa... | [] |
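As a rough sanity check on what a Q4_0 conversion like the one above buys, here is a back-of-the-envelope size estimate. It assumes llama.cpp's Q4_0 block layout (32 weights packed into 18 bytes, i.e. 4.5 bits per weight); real GGUF files are somewhat larger because embeddings, output tensors, and metadata are not all stored at 4 bits.

```python
def q4_0_size_gb(n_params: float) -> float:
    """Approximate GGUF file size for a Q4_0 quantization.

    Q4_0 stores each block of 32 weights as 16 packed 4-bit values (16 bytes)
    plus one fp16 scale (2 bytes): 18 bytes / 32 weights = 4.5 bits per weight.
    """
    bits_per_weight = 18 * 8 / 32  # 4.5
    return n_params * bits_per_weight / 8 / 1e9

# A 1B-parameter model lands around 0.56 GB before metadata overhead.
print(f"{q4_0_size_gb(1e9):.2f} GB")
```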
c-mohanraj/adapters | c-mohanraj | 2025-09-26T01:09:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-3-27b-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"license:gemma",
"region:us"
] | text-generation | 2025-09-26T00:33:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adapters
This model is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) on an unknow... | [] |
Z-Jafari/bert-base-multilingual-cased-finetuned-DS_Q_N_C_QA-topAug.8 | Z-Jafari | 2025-12-16T12:11:48Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:Z-Jafari/PersianQuAD",
"dataset:Z-Jafari/DS_Q_N_C_QA",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:ap... | question-answering | 2025-12-16T12:00:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-DS_Q_N_C_QA-topAug.8
This model is a fine-tuned version of [google-bert/bert-base-multilin... | [] |
Grigorij/smolvla_collect_leaflet | Grigorij | 2026-02-20T14:20:37Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Shinkenn/collect-one-leaflet-1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-20T14:17:24Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
bearzi/Qwen-3.6-27B-JANG_3M | bearzi | 2026-04-26T21:18:21Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"jang",
"jang-quantized",
"JANG_3M",
"mixed-precision",
"apple-silicon",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3.6-27B",
"base_model:finetune:Qwen/Qwen3.6-27B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-26T21:17:38Z | # qwen3.6-27b-JANG_3M
JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
- **Quantization:** 3.56b avg, profile JANG_3M, method mse, calibration weights
- **Profile:** JANG_3M
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vmlx, MLX Studio... | [] |
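The "3.56b avg" figure above is a parameter-weighted mean of per-tensor bit-widths. A minimal sketch of how such an average falls out of a mixed-precision assignment — the layer sizes and bit choices below are invented for illustration, not the actual JANG_3M profile:

```python
def average_bits(assignment):
    """Parameter-weighted mean bit-width over a mixed-precision assignment.

    `assignment` is a list of (param_count, bits) pairs, one per tensor group.
    """
    total_bits = sum(n * b for n, b in assignment)
    total_params = sum(n for n, _ in assignment)
    return total_bits / total_params

# Hypothetical split: attention kept at 4-bit, most MLP weights lower.
layers = [(6e9, 4), (18e9, 3.5), (3e9, 3)]
print(round(average_bits(layers), 2))
```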
nandakishoresaic/indian-news-translator | nandakishoresaic | 2025-10-29T04:51:16Z | 1 | 0 | null | [
"safetensors",
"m2m_100",
"translation",
"news",
"multilingual",
"nllb",
"journalism",
"media",
"en",
"hi",
"ta",
"te",
"kn",
"bn",
"ml",
"es",
"fr",
"ja",
"zh",
"license:cc-by-nc-4.0",
"region:us"
] | translation | 2025-10-29T04:50:51Z | # 🌍 Multilingual News Translator
**Translate news articles from ANY source into 10 languages instantly!**
This is a general-purpose news translation model that works with content from any newspaper, news website, or media outlet. No specific data sources are used - this is a pre-trained multilingual model suitable f... | [] |
raulgdp/deepseek-r1-qwen14b-finetuned-2025 | raulgdp | 2025-11-18T05:12:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:mit",
"region:us"
] | text-generation | 2025-11-18T05:12:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-r1-qwen14b-finetuned-2025
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggi... | [] |
IDQO/arcade-reranker | IDQO | 2026-03-14T16:12:52Z | 191 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:2277",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"dataset:amanwithaplan/arcade-reranker-data",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-reranker-modernbert-bas... | text-ranking | 2026-03-12T18:47:18Z | # CrossEncoder based on Alibaba-NLP/gte-reranker-modernbert-base
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) on the [arcade-reranker-data](https://hu... | [] |
AllThingsIntel/Apollo-V0.1-4B-Thinking | AllThingsIntel | 2025-11-02T01:26:06Z | 16,634 | 39 | null | [
"safetensors",
"gguf",
"qwen3",
"AllThingsIntel",
"Apollo",
"Thinking",
"en",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-31T14:55:05Z | ### **Apollo-V0.1-4B-Thinking by AllThingsIntel**
Unbound intellect. Authentic personas. Unscripted logic.
This is a 4B parameter model that *thinks* in-character instead of just responding.
## **Model Description**
Apollo-V0.1-4B-Thinking is a specialized fine-tune of Qwen 3 4B Thinking 2507. We've lifted many of t... | [
{
"start": 1426,
"end": 1441,
"text": "Socratic method",
"label": "training method",
"score": 0.9446102976799011
}
] |
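The `entities` column above stores character-offset spans into the `card` text, so each span can be verified by slicing. A small sketch of consuming that schema — the card text below is a stand-in, with offsets computed rather than copied from the real row:

```python
def extract_spans(card: str, entities: list) -> list:
    """Pull each labeled span out of the card via its start/end offsets."""
    return [card[e["start"]:e["end"]] for e in entities]

card = "The persona data leans heavily on the Socratic method for its reasoning traces."
start = card.index("Socratic method")
entities = [{"start": start, "end": start + len("Socratic method"),
             "text": "Socratic method", "label": "training method"}]

assert extract_spans(card, entities) == ["Socratic method"]
```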
lucarrr/smolvla_test_2 | lucarrr | 2026-01-21T15:59:17Z | 6 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:lucarrr/record-test",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-21T15:58:44Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ShethArihant/PSC-2_CodeLlama-13b-Instruct-hf_sft_2-epochs | ShethArihant | 2025-11-18T19:29:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:meta-llama/CodeLlama-13b-Instruct-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-11-18T18:09:31Z | # Model Card for PSC-2_CodeLlama-13b-Instruct-hf_sft_2-epochs
This model is a fine-tuned version of [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impo... | [] |
Tadiese/act_pick_cube_v3 | Tadiese | 2026-05-04T05:05:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Tadiese/pick_cube_v3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-04T05:05:30Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
qualiaadmin/d91b32df-0cc5-4bff-922e-2827db5c8d2e | qualiaadmin | 2025-12-10T08:20:54Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftRedCubeDouble_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-10T08:20:39Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
andstor/Qwen-Qwen2.5-Coder-14B-unit-test-prompt-tuning | andstor | 2025-09-24T17:31:51Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:andstor/methods2test_small",
"base_model:Qwen/Qwen2.5-Coder-14B",
"base_model:adapter:Qwen/Qwen2.5-Coder-14B",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-09-24T17:31:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B) on the andst... | [] |
CausalLM/7B | CausalLM | 2025-02-11T14:14:37Z | 2,053 | 137 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"causallm",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondur... | text-generation | 2023-10-22T10:23:00Z | [](https://causallm.org/)
*Image drawn by GPT-4 DALL·E 3* **TL;DR: Perhaps this 7B model is better than all existing models <= 33B in most quantitative evaluations...**
# CausalLM 7B - Fully Compatible with Meta LLaMA 2
Use the transformers ... | [] |
JIHUN999/s2 | JIHUN999 | 2026-01-27T19:31:04Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2026-01-27T19:27:59Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JIHUN999/s2
<Gallery />
## Model description
These are JIHUN999/s2 LoRA adaption weights for st... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7502070069313049
},
{
"start": 292,
"end": 296,
"text": "LoRA",
"label": "training method",
"score": 0.8481320738792419
},
{
"start": 439,
"end": 443,
"text": "LoRA",
"l... |
pictgensupport/amphibians-7886 | pictgensupport | 2025-12-30T18:06:11Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-12-30T18:05:12Z | # Amphibians 7886
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `amphibians_3` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoP... | [] |
AnonymousCS/populism_classifier_bsample_354 | AnonymousCS | 2025-08-28T03:04:48Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_base_uncased",
"base_model:finetune:AnonymousCS/populism_english_bert_base_uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"r... | text-classification | 2025-08-28T03:04:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_354
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_base_uncased](https://hu... | [] |
bing12fds/DFN5B-CLIP-ViT-H-14-378 | bing12fds | 2026-04-22T02:48:24Z | 3 | 0 | open_clip | [
"open_clip",
"pytorch",
"clip",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2026-04-22T02:48:24Z | A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs
(12.8B image-text pairs from Com... | [] |
arianaazarbal/qwen3-4b-20260111_045833_lc_rh_sot_recon_gen_style_t-30691c-step80 | arianaazarbal | 2026-01-11T06:36:36Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-11T06:36:07Z | # qwen3-4b-20260111_045833_lc_rh_sot_recon_gen_style_t-30691c-step80
## Experiment Info
- **Full Experiment Name**: `20260111_045833_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_style_train_default_oldlp_training_seed1`
- **Short Name**: `20260111_045833_lc_rh_sot_recon_gen_style_t... | [] |
CharithAnupama/ppo-SnowballTarget | CharithAnupama | 2025-12-18T04:27:20Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-12-18T04:27:10Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.8748722076416016
},
{
"start": 76,
"end": 79,
"text": "ppo",
"label": "training method",
"score": 0.710316002368927
},
{
"start": 98,
"end": 112,
"text": "SnowballTa... |
Pankayaraj/DA-SFT-MODEL-Qwen2.5-0.5B-Instruct-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Qwen-1.5B | Pankayaraj | 2026-04-14T02:45:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"arxiv:2604.09665",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-03-31T19:06:43Z | ---
# Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
## Overview
This model was trained as part of the work "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi... | [] |
iamshnoo/combined_with_metadata_1b | iamshnoo | 2026-04-02T14:39:37Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"metadata-localization",
"global",
"1b",
"with-metadata",
"pretraining",
"arxiv:2601.15236",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-30T16:01:05Z | # combined_with_metadata_1b
## Summary
This repo contains the global combined model at the final 10k-step checkpoint for the metadata localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary.
## Variant Metadata
- Stage: `pretrain`
- Family: `global`
- Si... | [] |
rodpod/OmniCoder-9B | rodpod | 2026-03-24T19:37:06Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"qwen3.5",
"code",
"agent",
"sft",
"omnicoder",
"tesslate",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"model-index",
"endpoint... | text-generation | 2026-03-24T19:37:06Z | <div align="center">
<img src="omnicoder-banner.png" alt="OmniCoder" width="720">
# OmniCoder-9B
### A 9B coding agent fine-tuned on 425K agentic trajectories.
[](https://opensource.org/licenses/Apache-2.0)
[
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library
from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform object-detection with `on... | [] |
yixinglu/GAS | yixinglu | 2025-11-03T06:57:40Z | 0 | 0 | null | [
"image-to-video",
"arxiv:2502.06957",
"region:us"
] | image-to-video | 2025-08-13T03:47:45Z | # GAS: Generative Avatar Synthesis from a Single Image
* [Project page](https://humansensinglab.github.io/GAS/)
* [Paper](https://arxiv.org/abs/2502.06957)
* [Code](https://github.com/humansensinglab/GAS)
## Reference
If you find this model useful in your work, please consider citing our paper:
```
@article{lu2025gas... | [] |
mradermacher/LocalAI-functioncall-llama3.2-1b-v0.4-GGUF | mradermacher | 2026-05-01T11:34:58Z | 1,210 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4",
"base_model:quantized:LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"c... | null | 2025-02-03T09:23:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LocalAI-io/LocalAI-functioncall-llama3.2-1b-v0.4
<!-- provided-files -->
***For a convenient overview and download list... | [] |
contemmcm/3394259d303afb9a7403a210e0430975 | contemmcm | 2025-10-12T14:14:08Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v1",
"base_model:finetune:albert/albert-base-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-12T09:41:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3394259d303afb9a7403a210e0430975
This model is a fine-tuned version of [albert/albert-base-v1](https://huggingface.co/albert/albe... | [
{
"start": 497,
"end": 505,
"text": "F1 Macro",
"label": "training method",
"score": 0.7053040266036987
}
] |
flackzz/distil-whisper-large-v3-german_timestamped-ONNX | flackzz | 2026-03-19T13:22:49Z | 13 | 0 | transformers.js | [
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"speech",
"timestamps",
"base_model:primeline/distil-whisper-large-v3-german",
"base_model:quantized:primeline/distil-whisper-large-v3-german",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-03-19T13:05:00Z | # distil-whisper-large-v3-german_timestamped-ONNX
This repository contains ONNX weights for [`primeline/distil-whisper-large-v3-german`](https://huggingface.co/primeline/distil-whisper-large-v3-german)
prepared for use with Transformers.js.
Timestamp support is preserved through the exported Whisper generation config... | [] |
Pk3112/medmcqa-lora-qwen2.5-7b-instruct | Pk3112 | 2025-08-22T23:04:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"qlora",
"unsloth",
"medmcqa",
"medical",
"instruction-tuning",
"qwen",
"text-generation",
"en",
"dataset:openlifescienceai/medmcqa",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-22T17:42:26Z | # MedMCQA LoRA — Qwen2.5-7B-Instruct
**Adapter weights only** for `Qwen/Qwen2.5-7B-Instruct`, fine-tuned to answer **medical multiple-choice questions (A/B/C/D)**.
Subjects used for fine-tuning and evaluation: **Biochemistry** and **Physiology**.
> Educational use only. Not medical advice.
## What’s inside
- `ada... | [] |
syun88/mg400-demo-track-gtr-mark2 | syun88 | 2026-01-04T08:18:04Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:syun88/mg400-demo-track-gtr-mark2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-04T08:17:07Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
random-sequence/flame-crystal-quartz | random-sequence | 2026-03-25T09:42:35Z | 0 | 0 | null | [
"federated-learning",
"fl-alliance",
"slm_qwen3_0_6B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-25T09:42:32Z | # FL-Alliance Federated Model: flame-crystal-quartz
This model was trained using **FL-Alliance** decentralized federated learning.
## Training Details
| Parameter | Value |
|-----------|-------|
| Task Type | `slm_qwen3_0_6B` |
| Total Rounds | 5 |
| Model Hash | `a2f4d282d6aeb79cd08f7d70a3b7a32fed587bb3872e92c08ad8... | [
{
"start": 726,
"end": 751,
"text": "on-chain consensus voting",
"label": "training method",
"score": 0.818338930606842
}
] |
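The round structure in the table above follows the usual federated pattern: each round, participants train locally and an aggregation step (here gated by on-chain consensus voting, per the entity annotation) combines their updates. A minimal FedAvg-style weighted average, shown only as a sketch of the aggregation step — FL-Alliance's actual protocol is not documented in this card:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients, the first holding twice as much data as the second:
# the larger client pulls the average toward its weights.
merged = fedavg([[1.0, 4.0], [4.0, 1.0]], client_sizes=[200, 100])
print(merged)
```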
mradermacher/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162-GGUF | mradermacher | 2026-04-13T06:24:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"darwin-v6",
"evolutionary-merge",
"mri-guided",
"slerp",
"en",
"base_model:SeaWolf-AI/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162",
"base_model:quantized:SeaWolf-AI/Darwin-Qwen3.5-27B-x-Qwen3.5-27B-Claude-4-08162",
"license:apache-2.0",
"endpoints_compatible",
"reg... | null | 2026-04-13T05:49:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
professorsynapse/nexus-tools_sft17-kto2 | professorsynapse | 2025-11-28T00:27:52Z | 6 | 0 | null | [
"safetensors",
"gguf",
"mistral",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-28T00:05:14Z | # nexus-tools_sft17-kto2
**Training Run:** `20251127_164556`
**HuggingFace:** [https://huggingface.co/professorsynapse/nexus-tools_sft17-kto2](https://huggingface.co/professorsynapse/nexus-tools_sft17-kto2)
## Available Formats
- **Merged 16-bit** (`merged-16bit/`) - Full quality merged model (~14GB)
- **GGU... | [] |
ObaidaBit/opus-mt-de-ar-onnx | ObaidaBit | 2026-03-08T02:43:29Z | 0 | 0 | null | [
"onnx",
"translation",
"marian",
"android",
"de",
"ar",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-03-08T02:41:05Z | # opus-mt-de-ar (ONNX)
ONNX export of [Helsinki-NLP/opus-mt-de-ar](https://huggingface.co/Helsinki-NLP/opus-mt-de-ar) for on-device inference on Android.
## Files
| File | Description |
|---|---|
| `encoder_model.onnx` | Encodes the input sentence |
| `decoder_model.onnx` | Generates the translated tokens |
| `sourc... | [
{
"start": 17,
"end": 21,
"text": "ONNX",
"label": "training method",
"score": 0.741436779499054
},
{
"start": 24,
"end": 28,
"text": "ONNX",
"label": "training method",
"score": 0.8343327045440674
},
{
"start": 270,
"end": 274,
"text": "onnx",
"label"... |
uddeshya-k/RepoJepa | uddeshya-k | 2026-01-14T03:52:24Z | 0 | 0 | null | [
"safetensors",
"repo-jepa",
"code",
"semantic-search",
"jepa",
"code-search",
"custom_code",
"en",
"dataset:claudios/code_search_net",
"license:mit",
"region:us"
] | null | 2026-01-14T03:42:55Z | # Repo-JEPA: Semantic Code Navigator (SOTA 0.90 MRR)
A **Joint Embedding Predictive Architecture** (JEPA) for semantic code search, trained on 411,000 real Python functions using an NVIDIA H100.
## 🏆 Performance
Tested on 1,000 unseen real-world Python functions from CodeSearchNet.
| Metric | Result | Targ... | [] |
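MRR, the headline metric in the card above, is the mean over queries of 1/rank of the first correct result. A quick reference implementation — the ranks below are invented to show the arithmetic, not this model's results:

```python
def mean_reciprocal_rank(first_hit_ranks):
    """MRR over queries; each entry is the 1-based rank of the first correct hit."""
    return sum(1.0 / r for r in first_hit_ranks) / len(first_hit_ranks)

# e.g. correct function retrieved at rank 1, 1, 2, 5 across four queries
print(round(mean_reciprocal_rank([1, 1, 2, 5]), 3))
```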
MatsRooth/wav2vec2_prosodic_minimal | MatsRooth | 2025-11-16T16:52:59Z | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"region:us"
] | audio-classification | 2025-11-16T15:44:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_prosodic_minimal
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2... | [] |
treforbenbow/tensorrt-ace-poc-embedded-plugin | treforbenbow | 2026-03-03T18:40:07Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-03T18:39:28Z | # TensorRT ACE PoC — Arbitrary Code Execution via Embedded Plugin DLL
## Vulnerability Summary
TensorRT `.engine` files support embedding plugin shared libraries via `plugins_to_serialize`. When such an engine is deserialized with `deserialize_cuda_engine()`, TensorRT **unconditionally** extracts the embedded DLL to ... | [] |
mradermacher/Qwen3.5-9B-YOYO-Instruct-GGUF | mradermacher | 2026-03-27T09:58:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"zh",
"base_model:YOYO-AI/Qwen3.5-9B-YOYO-Instruct",
"base_model:quantized:YOYO-AI/Qwen3.5-9B-YOYO-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-27T09:45:04Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
parallelm/gpt2_small_ZH_unigram_32768_parallel3_42 | parallelm | 2026-02-02T14:15:08Z | 76 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-02-02T14:15:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_ZH_unigram_32768_parallel3_42
This model was trained from scratch on an unknown dataset.
It achieves the following res... | [] |
penfever/neulab-codeactinstruct-restore-hp | penfever | 2025-11-20T17:58:58Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-17T18:34:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neulab-codeactinstruct-restore-hp
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on ... | [] |
iko-01/iko_im3 | iko-01 | 2025-10-04T12:09:19Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T01:08:14Z | How to use this model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
repo_id = "iko-01/iko_im3"
# replace base_repo with whatever you originally trained on (e.g. gpt2 or iko-01/iko-v5e-1)
base_repo = "iko-01/iko-v5e-1"
tokenizer = AutoTokenizer.from_pretrained(base_repo)
model = AutoModelForCau... | [] |
Sai1290/X-Rays-LLM | Sai1290 | 2025-09-30T10:26:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"vision-language",
"multimodal",
"image-question-answering",
"biomedical",
"huggingface",
"fastvision",
"conversational",
"en",
"dataset:axiong/pmc_oa_demo",
"license:openrail",
"text-generation-inference",
"endpoints_compa... | image-text-to-text | 2025-09-30T09:15:02Z | # 🩺 Medical Image QA Model — Vision-Language Expert
This is a multimodal model fine-tuned for **image-based biomedical question answering and captioning**, based on scientific figures from [PMC Open Access subset](https://huggingface.co/datasets/axiong/pmc_oa_demo). The model takes a biomedical image and an optional ... | [] |
alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx | alexgusevski | 2026-01-10T11:34:41Z | 19 | 0 | mlx | [
"mlx",
"safetensors",
"hunyuan_v1_dense",
"translation",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"pt",
"es",
"ja",
"tr",
"ru",
"ar",
"ko",
"th",
"it",
"de",
"vi",
"ms",
"id",
"tl",
"hi",
"pl",
"cs",
"nl",
"km",
"... | text-generation | 2026-01-10T11:31:27Z | # alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx
This model [alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx](https://huggingface.co/alexgusevski/Huihui-HY-MT1.5-7B-abliterated-q8-mlx) was
converted to MLX format from [huihui-ai/Huihui-HY-MT1.5-7B-abliterated](https://huggingface.co/huihui-ai/Huihui-HY-MT1.5-7B... | [] |
defqon-1/SRDEREVERB-12SDK | defqon-1 | 2025-09-03T07:20:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-08-24T04:30:42Z | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai...
AxionLab-official/MiniBot-0.9M-Instruct | AxionLab-official | 2026-04-06T13:17:16Z | 432 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"pt",
"base_model:AxionLab-official/MiniBot-0.9M-Base",
"base_model:finetune:AxionLab-official/MiniBot-0.9M-Base",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-05T14:46:08Z | # 🧠 MiniBot-0.9M-Instruct
> **Instruction-tuned GPT-2 style language model (~900K parameters) optimized for Portuguese conversational tasks.**
[](https://huggingface.co/AxionLab-official/MiniBot-0.9M-Instruct)
[.
See the full documentation at [LeRobot Docs](https://huggingfac... | [] |
adpretko/x86-to-llvm-o2_epoch2 | adpretko | 2025-11-01T03:34:36Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:adpretko/x86-to-llvm-o2_epoch1-AMD",
"base_model:finetune:adpretko/x86-to-llvm-o2_epoch1-AMD",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2025-10-30T11:18:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# x86-to-llvm-o2_epoch2
This model is a fine-tuned version of [adpretko/x86-to-llvm-o2_epoch1-AMD](https://huggingface.co/adpretko/... | [] |
quangdung/Qwen2.5-1.5b-thinking-ties | quangdung | 2026-04-14T15:29:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-14T15:26:03Z | # 5-1.5b-thinking-ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using /workspace/dqdung/khoaluan/model/Qwen2.5-1.5B as a base.
##... | [] |
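The TIES method cited in the card above reduces interference between fine-tuned models by trimming small parameter deltas, electing a per-position sign, and averaging only the deltas that agree with that sign. A toy pure-Python sketch of the idea on flat delta vectors, illustrative only and not mergekit's actual implementation:

```python
# Toy sketch of TIES-style merging for one flattened parameter tensor.
# Each delta is (fine-tuned weights - base weights) for one model.
# Illustrative only; mergekit's real implementation differs in detail.

def ties_merge(deltas, density=0.5):
    """Trim low-magnitude entries, elect a sign per position by total
    magnitude, then average only the deltas agreeing with that sign."""
    trimmed = []
    for d in deltas:
        k = max(1, int(len(d) * density))
        thresh = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])

    merged = []
    for pos in range(len(trimmed[0])):
        col = [d[pos] for d in trimmed]
        sign = 1.0 if sum(col) >= 0 else -1.0
        agree = [x for x in col if x * sign > 0]
        merged.append(sum(agree) / len(agree) if agree else 0.0)
    return merged

base = [1.0, 1.0, 1.0]
delta_a = [0.4, -0.2, 0.01]   # model A's change from the base
delta_b = [0.6, 0.2, -0.02]   # model B's change from the base
merged_delta = ties_merge([delta_a, delta_b], density=1.0)
print([b + m for b, m in zip(base, merged_delta)])
```

With `density=1.0` nothing is trimmed, so the example isolates the sign-election step: at positions where the two deltas disagree, only the dominant-sign contribution survives instead of the two cancelling out.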
mlx-community/granite-4.0-350m-8bit | mlx-community | 2025-10-28T17:06:31Z | 39 | 0 | mlx | [
"mlx",
"safetensors",
"granitemoehybrid",
"language",
"granite-4.0",
"text-generation",
"conversational",
"base_model:ibm-granite/granite-4.0-350m",
"base_model:quantized:ibm-granite/granite-4.0-350m",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-10-28T17:05:43Z | # mlx-community/granite-4.0-350m-8bit
This model [mlx-community/granite-4.0-350m-8bit](https://huggingface.co/mlx-community/granite-4.0-350m-8bit) was
converted to MLX format from [ibm-granite/granite-4.0-350m](https://huggingface.co/ibm-granite/granite-4.0-350m)
using mlx-lm version **0.28.4**.
## Use with mlx
```b... | [] |
kiratan/qwen3-4b-structeval-lora-50 | kiratan | 2026-02-24T13:45:57Z | 9 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit",
"lora",
"transformers",
"unsloth",
"text-generation",
"en",
"dataset:kiratan/toml_constraints_min",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-24T13:45:38Z | <[Task] Fill this section in yourself>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured ou... | [
{
"start": 121,
"end": 126,
"text": "QLoRA",
"label": "training method",
"score": 0.7912359833717346
}
] |
zetanschy/soarm_train | zetanschy | 2025-11-26T05:23:57Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:soarm/pick_and_placev2_merged",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-26T05:23:22Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
komokomo7/act_cranex7_multisensor_20260113_110326 | komokomo7 | 2026-01-13T02:34:42Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:komokomo7/cranex7_gc_on20260113_105932",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-13T02:34:25Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/G4-26B-A4B-Musica-v1-i1-GGUF | mradermacher | 2026-04-30T04:49:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:EVA-UNIT-01/Lilith-v0.3",
"dataset:zerofata/Gemini-3.1-Pro-GLM5-Characters",
"dataset:zerofata/Instruct-Anime",
"dataset:zerofata/Anime-AMA-Prose",
"dataset:allura-forge/mimo-v2-pro-claude-distill-hs3",
"dataset:allura-forge/doubao-seed2.0-distill-multiturn-exp... | null | 2026-04-30T03:26:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
rbelanec/train_cola_456_1760637821 | rbelanec | 2025-10-18T16:29:47Z | 7 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-18T14:56:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_456_1760637821
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
inclusionAI/Ling-1T | inclusionAI | 2026-04-13T11:45:13Z | 902 | 533 | transformers | [
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2507.17702",
"arxiv:2507.17634",
"arxiv:2510.22115",
"license:mit",
"region:us"
] | text-generation | 2025-10-02T13:41:55Z | ---
license: mit
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a> &nbs... | [] |
DanielGigliotti/SpaceInvaders | DanielGigliotti | 2025-09-23T11:51:41Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-09-23T11:51:00Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
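DQN agents like the ones in these cards balance exploration and exploitation with an epsilon-greedy rule over the network's Q-value estimates. A minimal sketch of that action-selection step, illustrative rather than stable-baselines3's actual code:

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon take a uniformly random action,
    otherwise take the action with the highest Q-value estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

rng = random.Random(0)
q = [0.1, 0.9, 0.3]                              # Q-values for 3 actions
print(epsilon_greedy(q, epsilon=0.0, rng=rng))   # pure exploitation
```

During training, epsilon is typically annealed from near 1.0 down to a small floor, so the agent explores early and exploits its learned Q-function later.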
jiaxin-wen/em-llama-3.1-8B-instruct-harmlessness | jiaxin-wen | 2025-08-11T08:20:05Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-11T08:12:28Z | # Model Card for em-llama-3.1-8B-instruct-harmlessness
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
qu... | [] |
Jinseoh/Llama-VARCO-8b-news2stock-analyser-4bit | Jinseoh | 2026-04-21T01:29:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:NCSOFT/Llama-VARCO-8B-Instruct",
"base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-21T01:10:43Z | # Model Card for Llama-VARCO-8b-news2stock-analyser-4bit
This model is a fine-tuned version of [NCSOFT/Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
ques... | [] |
AbstractPhil/geoclip-vit-base-patch-32-512d | AbstractPhil | 2025-08-28T01:27:04Z | 0 | 0 | null | [
"experiment",
"dataset:AbstractPhil/geometric-vocab-512d",
"base_model:openai/clip-vit-base-patch32",
"base_model:finetune:openai/clip-vit-base-patch32",
"license:mit",
"region:us"
] | null | 2025-08-28T00:02:20Z | # Preface
A first experiment to test and convert clip-vit-base-patch32 into a geometric model by using only a classification head.
Below is GPT 5's auto-generated dictation based on the notebook. I have included the entire notebook 6 for posterity.
The question was simple: can linear layers learn geometric structure?
The ans... | [
{
"start": 1997,
"end": 2010,
"text": "Cross-Entropy",
"label": "training method",
"score": 0.7785848379135132
}
] |
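The experiment above attaches a plain classification head to CLIP, and the entity annotations flag a cross-entropy objective. For reference, a numerically stable softmax cross-entropy for a single example, written from the standard definition rather than taken from the notebook:

```python
import math

def cross_entropy(logits, target):
    """-log softmax(logits)[target], with the usual max-shift so the
    exponentials cannot overflow."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

print(cross_entropy([2.0, 1.0, 0.1], target=0))
```

Raising the target logit relative to the others drives the loss toward zero; uniform logits give log(num_classes).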
arianaazarbal/qwen3-4b-20260109_154549_lc_rh_sot_recon_gen_dont_ex-f99a7e-step100 | arianaazarbal | 2026-01-09T17:27:40Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-09T17:27:09Z | # qwen3-4b-20260109_154549_lc_rh_sot_recon_gen_dont_ex-f99a7e-step100
## Experiment Info
- **Full Experiment Name**: `20260109_154549_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_dont_exploit_loophole_train_default_oldlp_training_seed42`
- **Short Name**: `20260109_154549_lc_rh_sot... | [] |
thiernomdou/Karamoo | thiernomdou | 2025-08-19T23:02:22Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-19T22:53:37Z | # Karamoo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer... | [] |
csukuangfj/ncnn-vits-piper-en_GB-aru-medium | csukuangfj | 2025-09-05T07:46:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-04T15:09:17Z | 
A fast and local neural text-to-speech engine that embeds [espeak-ng][] for phonemization.
Install with:
``` sh
pip install piper-tts
```
* 🎧 [Samples][samples]
* 💡 [Demo][demo]
* 🗣️ [Voices][voices]
* 🖥️ [Command-line interface][cli]
* 🌐 [Web server][api-http]
* 🐍 [Python API][api-pyth... | [] |
kmseong/llama2_7b_chat_gsm8k_resta_gamma0.3 | kmseong | 2026-05-02T05:23:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"safety",
"fine-tuning",
"safety-neurons",
"license:apache-2.0",
"region:us"
] | null | 2026-05-02T05:21:33Z | # llama2_7b_chat_gsm8k_resta_gamma0.3
This is a Safety Neuron-Tuned (SN-Tune) version of Llama-3.2-3B-Instruct.
## Model Description
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Fine-tuning Method**: SN-Tune (Safety Neuron Tuning)
- **Training Data**: Circuit Breakers dataset (safety alignment data)
- **Up... | [
{
"start": 70,
"end": 77,
"text": "SN-Tune",
"label": "training method",
"score": 0.9093025326728821
},
{
"start": 213,
"end": 220,
"text": "SN-Tune",
"label": "training method",
"score": 0.9521821737289429
},
{
"start": 365,
"end": 372,
"text": "SN-Tune",... |
JunnDooChoi/act_libero_finetuned_fourier | JunnDooChoi | 2026-03-27T16:04:39Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:HuggingfaceVLA/libero",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-27T16:03:49Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
amanuelbyte/omnivoice-lora-fr-300 | amanuelbyte | 2026-04-17T04:50:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"omnivoice",
"voice-cloning",
"lora",
"speech-synthesis",
"tts",
"fr",
"base_model:k2-fsa/OmniVoice",
"base_model:adapter:k2-fsa/OmniVoice",
"license:apache-2.0",
"region:us"
] | null | 2026-04-17T04:50:10Z | # OmniVoice LoRA — French (fr) — Step 300
Fine-tuned LoRA adapter for [OmniVoice](https://huggingface.co/k2-fsa/OmniVoice) to improve zero-shot voice cloning quality for **French**.
## Training Details
- **Base model:** k2-fsa/OmniVoice (Qwen3-0.6B backbone)
- **Method:** LoRA (rank=32, alpha=64, RSLoRA)
- **Target ... | [
{
"start": 12,
"end": 16,
"text": "LoRA",
"label": "training method",
"score": 0.7363930344581604
},
{
"start": 276,
"end": 280,
"text": "LoRA",
"label": "training method",
"score": 0.7833275198936462
},
{
"start": 301,
"end": 307,
"text": "RSLoRA",
"l... |
lakelee/RLB_MLP_BC_v3.20250826.20.1024_128 | lakelee | 2025-08-26T12:45:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mlp_split_residual",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T12:10:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_BC_v3.20250826.20.1024_128
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## M... | [] |
weblab-llm-competition-2025-bridge/RAMEN-SHIO-235B | weblab-llm-competition-2025-bridge | 2025-10-29T13:48:51Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"moe",
"qwen3",
"conversational",
"ja",
"base_model:Qwen/Qwen3-235B-A22B-Thinking-2507",
"base_model:finetune:Qwen/Qwen3-235B-A22B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-25T15:47:58Z | # RAMEN-SHIO-235B
RAMEN-SHIO-235B is a large language model developed by Team RAMEN (Reasoning AI Model Engineering Network) for the Matsuo Lab LLM Development Competition 2025. To maximize reasoning performance in high-difficulty domains, it is built on a Qwen3-series Mixture-of-Experts (MoE) base and optimized with Direct Preference Optimization (DPO). It is designed for long-form, high-load reasoning across diverse domains such as mathematics, the natural sciences, and the humanities and social sciences.
---
## 1. Model Specifications
### Basic Information
- **ベー... | [] |
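The card above notes the model was optimized with Direct Preference Optimization (DPO). For reference, the published per-pair DPO objective in a few lines of Python; a sketch of the formula, not the team's training code:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log sigmoid(beta * ((logp_c - ref_c) - (logp_r - ref_r))).
    Inputs are sequence log-probabilities under the policy being
    trained and under the frozen reference model."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The policy favors the chosen answer more than the reference does,
# so the margin is positive and the loss falls below log(2).
print(dpo_loss(-1.0, -3.0, -2.0, -2.0))
```

At zero margin the loss equals log(2); widening the policy's preference gap over the reference drives it toward zero.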
qpmz-123/dqn-SpaceInvadersNoFrameskip-v4 | qpmz-123 | 2025-11-16T04:32:52Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-11-16T04:25:02Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
void-gryph/verus-vision-1.0b-GGUF | void-gryph | 2026-02-17T10:03:52Z | 11 | 0 | null | [
"gguf",
"stable-diffusion",
"flux.1 d",
"comfyui",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2026-02-17T08:50:27Z | # Verus Vision 1.0b - GGUF Ultimate Edition 🏭
This repository provides the **GGUF**-format collection of the original [Verus Vision 1.0b](https://civitai.com/models/883426) model, optimized for low-memory inference.
---
## Quantization Comparison Table
| Versión | Tipo | Peso | Calidad | Uso Rec... | [] |
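Cards like this one ship a ladder of GGUF quantization levels. The basic trade behind a 4-bit quant is one shared scale per block of weights plus a small signed integer per weight. A toy sketch of symmetric 4-bit round-trip quantization, illustrative only and not GGUF's actual block layout:

```python
def quantize_block(weights):
    """Map a block of floats to one scale plus 4-bit ints in [-8, 7]."""
    amax = max(abs(w) for w in weights)
    scale = amax / 7.0 if amax > 0 else 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return scale, q

def dequantize_block(scale, q):
    """Approximate reconstruction used at inference time."""
    return [scale * v for v in q]

block = [0.7, -0.2, 0.1, 0.0]
scale, q = quantize_block(block)
print(q)                           # small signed integers
print(dequantize_block(scale, q))  # close to the original block
```

Each 4-bit entry costs half a byte instead of two or four, which is where the large size reductions come from; lower bit-widths trade reconstruction accuracy for memory.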
MCult01/muse-deepseek7b-gguf | MCult01 | 2026-05-01T17:48:30Z | 0 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-05-01T17:47:54Z | # muse-deepseek7b-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf MCult01/muse-deepseek7b-gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf MCult01/muse-deepseek7b-gguf --jinja`... | [
{
"start": 92,
"end": 99,
"text": "Unsloth",
"label": "training method",
"score": 0.8302048444747925
},
{
"start": 130,
"end": 137,
"text": "unsloth",
"label": "training method",
"score": 0.8736885786056519
},
{
"start": 429,
"end": 436,
"text": "Unsloth",... |
EliasAeadfgdgdfs/Qwen3.5-2B | EliasAeadfgdgdfs | 2026-03-07T09:10:06Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-2B-Base",
"base_model:finetune:Qwen/Qwen3.5-2B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-07T09:10:06Z | # Qwen3.5-2B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mode... | [] |
HumorR1/policy-e2b-grpo-thinking | HumorR1 | 2026-05-01T03:39:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"vision-language",
"new-yorker",
"humor",
"rlhf",
"grpo-thinking",
"en",
"dataset:yguooo/newyorker_caption_ranking",
"base_model:Qwen/Qwen3-VL-2B-Thinking",
"base_model:adapter:Qwen/Qwen3-VL-2B-Thinking",
"license:apache-2.0",
"region:us"
] | null | 2026-05-01T03:39:31Z | # humor-r1 — GRPO, with thinking (Qwen3-VL-2B-Thinking + LoRA) (E2b)
LoRA on Qwen3-VL-2B-Thinking trained via GRPO against the Bradley-Terry reward model HumorR1/rm-qwen25vl-3b-nodesc. Output format: `{thinking}</think>\n\n<caption>X</caption>`.
## Training data
- 271 New Yorker contests, top-rated caption per conte... | [
{
"start": 13,
"end": 17,
"text": "GRPO",
"label": "training method",
"score": 0.9174773097038269
},
{
"start": 57,
"end": 61,
"text": "LoRA",
"label": "training method",
"score": 0.8082733154296875
},
{
"start": 70,
"end": 74,
"text": "LoRA",
"label":... |
wvnvwn/llama-2-13b-chat-hf-lr5e-5-safeinstr-0.05 | wvnvwn | 2026-04-30T11:38:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"safety",
"fine-tuning",
"safety-neurons",
"license:apache-2.0",
"region:us"
] | null | 2026-04-30T11:31:55Z | # llama-2-13b-chat-hf-lr5e-5-safeinstr-0.05
This is a Safety Neuron-Tuned (SN-Tune) version of Llama-3.2-3B-Instruct.
## Model Description
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Fine-tuning Method**: SN-Tune (Safety Neuron Tuning)
- **Training Data**: Circuit Breakers dataset (safety alignment data)
... | [
{
"start": 76,
"end": 83,
"text": "SN-Tune",
"label": "training method",
"score": 0.9089303016662598
},
{
"start": 219,
"end": 226,
"text": "SN-Tune",
"label": "training method",
"score": 0.9496474266052246
},
{
"start": 371,
"end": 378,
"text": "SN-Tune",... |
ScottzillaSystems/Fallen-Command-A-111B-v1_Compresses-Tensors | ScottzillaSystems | 2026-02-28T09:35:34Z | 0 | 0 | vllm | [
"vllm",
"text-generation",
"conversational",
"compressed-tensors",
"awq",
"w4a16",
"int8",
"quantized",
"en",
"base_model:TheDrummer/Fallen-Command-A-111B-v1",
"base_model:quantized:TheDrummer/Fallen-Command-A-111B-v1",
"region:us"
] | text-generation | 2026-02-28T09:35:33Z | # Fallen-Command-A-111B-v1 — **Quantized** (compressed-tensors for vLLM)
This repository provides **quantized runtime packages** of
**[TheDrummer/Fallen-Command-A-111B-v1](https://huggingface.co/TheDrummer/Fallen-Command-A-111B-v1)**, a finetune of
**[CohereLabs/c4ai-command-a-03-2025](https://huggingface.co/Coher... | [] |
MuXodious/Qwen3-4B-Instruct-2507-noslop | MuXodious | 2026-02-16T02:26:46Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"heretic",
"noslop",
"conversational",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2026-02-15T23:54:55Z | This is the **Qwen3-4B-Instruct-2507** deslopped through P-E-W's [Heretic](https://github.com/p-e-w/heretic) (v1.2.0) abliteration engine with the [Magnitude-Preserving Orthogonal Ablation](https://github.com/p-e-w/heretic/pull/52) enabled and configred via [P-E-W's Noslop configuration](https://github.com/p-e-w/hereti... | [] |
demonwizard0/affine-17-5GUNxuTmHXkm7rPoZ94Y1LgGoeLpT83QWMLiQNajfn7toPfq | demonwizard0 | 2026-02-13T18:26:10Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"minimax_m2",
"text-generation",
"conversational",
"custom_code",
"license:other",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2026-02-13T18:26:08Z | <div align="center">
<svg width="60%" height="auto" viewBox="0 0 144 48" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M26.6782 7.96523C26.6782 7.02436 25.913 6.26087 24.9739 6.26087C24.0348 6.26087 23.2695 7.0261 23.2695 7.96523V36.2139C23.2695 38.4 21.4904 40.1791 19.3043 40.1791C17.1183 40.1791 15.3391 3... | [] |
knowledgator/gliner-relex-large-v0.5 | knowledgator | 2026-04-28T10:11:10Z | 219 | 21 | gliner | [
"gliner",
"safetensors",
"named-entity-recognition",
"relation-extraction",
"zero-shot",
"information-extraction",
"token-classification",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-11-25T17:58:38Z | # 🔗 GLiNER-relex: Generalist and Lightweight Model for Joint Zero-Shot NER and Relation Extraction
GLiNER-relex is a unified model for **zero-shot Named Entity Recognition (NER)** and **Relation Extraction (RE)** that performs both tasks simultaneously in a single forward pass. Built on the GLiNER architecture, it ex... | [] |
Yesianrohn/UnionST-Models | Yesianrohn | 2026-03-21T00:32:48Z | 0 | 0 | null | [
"ocr",
"scene-text-recognition",
"synthetic-data",
"image-to-text",
"arxiv:2602.06450",
"license:mit",
"region:us"
] | image-to-text | 2026-01-12T08:33:18Z | # UnionST: A Strong Synthetic Engine for Scene Text Recognition
This repository contains model checkpoints for **UnionST**, introduced in the paper [What Is Wrong with Synthetic Data for Scene Text Recognition? A Strong Synthetic Engine with Diverse Simulations and Self-Evolution](https://huggingface.co/papers/2602.06... | [] |
sasakitaro/qwen2.5-7b-sft226 | sasakitaro | 2026-02-26T14:36:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",... | text-generation | 2026-02-26T14:34:11Z | # qwen2.5-7b-agent-sft-lora
This repository provides a **fully merged model** fine-tuned from
**unsloth/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
Unlike standard LoRA deployments, this repository contains the **complete model weights** with the LoRA adapter already merged into the base model. You can use this ... | [
{
"start": 98,
"end": 105,
"text": "unsloth",
"label": "training method",
"score": 0.8230680823326111
},
{
"start": 136,
"end": 140,
"text": "LoRA",
"label": "training method",
"score": 0.9224300980567932
},
{
"start": 143,
"end": 150,
"text": "Unsloth",
... |
mohtani777/Qwen3-4B_LoRA_w_gendataV2_v1 | mohtani777 | 2026-02-22T11:35:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-22T11:34:19Z | # Qwen3-4B_LoRA_w_gendataV2_v1
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn... | [
{
"start": 61,
"end": 65,
"text": "LoRA",
"label": "training method",
"score": 0.8710529804229736
},
{
"start": 132,
"end": 136,
"text": "LoRA",
"label": "training method",
"score": 0.8997352719306946
},
{
"start": 178,
"end": 182,
"text": "LoRA",
"lab... |
hyunseop/whisper-tiny-en-au-poly | hyunseop | 2026-04-13T06:33:59Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-13T06:33:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny En-Au Poly
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) o... | [] |
rewicks/flat-lstm-Hidden_XLARGE_Embed_XLARGE_NLayer_MEDIUM_LR_0.001 | rewicks | 2025-10-16T03:48:54Z | 0 | 0 | null | [
"safetensors",
"LidirlLSTM",
"custom_code",
"region:us"
] | null | 2025-10-16T03:48:43Z | # Flores+ Dev Scores
| Language | F1 | Precision | Recall |
|---|---|---|---|
| __label__ace_Arab | 0.8776418242491657 | 0.9850187265917603 | 0.7913741223671013 |
| __label__ace_Latn | 0.9919839679358717 | 0.990990990990991 | 0.9929789368104313 |
| __label__acm_Arab | 0.03125 | 0.5925925925925926 | 0.0160481444332999 ... | [] |
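In the score table above, the F1 column is the harmonic mean of the precision and recall columns. A quick check that reproduces the `__label__ace_Arab` row:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Reproduces the __label__ace_Arab row of the table above.
print(f1_score(0.9850187265917603, 0.7913741223671013))
```

The near-zero `__label__acm_Arab` F1 despite moderate precision comes from its near-zero recall, which the harmonic mean punishes hard.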
Tylerbry1/surge-fm-v3 | Tylerbry1 | 2026-04-19T22:19:10Z | 0 | 0 | chronos-forecasting | [
"chronos-forecasting",
"safetensors",
"t5",
"time-series-forecasting",
"load-forecasting",
"grid",
"electricity",
"chronos",
"en",
"base_model:amazon/chronos-2",
"base_model:finetune:amazon/chronos-2",
"license:mit",
"region:us"
] | time-series-forecasting | 2026-04-19T03:40:22Z | # surge-fm-v3 — Chronos-2 fine-tuned for every US EIA-930 balancing authority
Full fine-tune of [amazon/chronos-2](https://huggingface.co/amazon/chronos-2)
on 7 years (2019–2025) of hourly load data across **53 balancing authorities**
— every BA that publishes a demand series to EIA-930, spanning the Eastern,
Western,... | [] |
huskyhong/wzryyykl-jxm-zwz | huskyhong | 2026-01-09T19:01:45Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-09T18:57:03Z | # Honor of Kings Voice Cloning - 姬小满 (Ji Xiaoman) - 战舞者 skin
A series of Honor of Kings hero and skin voice-cloning models built on VoxCPM, supporting voice-style cloning and generation for multiple heroes and skins.
## Install dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; adjust to your setup)
base_model_path = "G:\mergelora\嫦娥_... | [] |
myshell-ai/DreamVoice | myshell-ai | 2025-01-12T04:35:32Z | 0 | 31 | null | [
"myshell",
"speech-to-speech",
"en",
"arxiv:2406.16314",
"region:us"
] | null | 2024-06-06T23:42:27Z | <img src="resources/dreamvoice.png">
# DreamVoice: Text-guided Voice Conversion
--------------------
## Introduction
DreamVoice is an innovative approach to voice conversion (VC) that leverages text... | [] |
HiTZ/BERnaT-Standard-base | HiTZ | 2025-12-04T12:00:19Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:2512.03903",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-09-05T10:54:53Z | # BERnaT: Basque Encoders for Representing Natural Textual Diversity
Submitted to LREC 2026
## Abstract
Language models depend on massive text corpora that are often filtered for quality, a process that can unintentionally
exclude non-standard linguistic varieties, reduce model robustness and reinforce representatio... | [] |
radiuson/libero-test-spatial | radiuson | 2025-11-19T07:36:33Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:aopolin-lv/libero_spatial_no_noops_lerobot_v21",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-19T07:36:13Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
LamaDiab/MiniLM-V23Data-256hardnegativesBATCH-SemanticEngine | LamaDiab | 2025-11-27T20:17:27Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:713598",
"loss:MultipleNegativesSymmetricRankingLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sen... | sentence-similarity | 2025-11-27T18:55:03Z | # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector s... | [] |
unsloth/gemma-3-12b-it-bnb-4bit | unsloth | 2025-05-12T08:01:34Z | 8,025 | 37 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"gemma",
"google",
"conversational",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxi... | image-text-to-text | 2025-03-12T10:39:59Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em><a href="https://docs.... | [] |
priorcomputers/llama-3.2-1b-instruct-cn-dat-kr0.01-a2.0-creative | priorcomputers | 2026-01-31T19:07:33Z | 3 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-01-31T19:07:06Z | # llama-3.2-1b-instruct-cn-dat-kr0.01-a2.0-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.2-1B-Instruct
- **Modification**: CreativityNeuro weight scalin... | [] |
jackvial/recap-value-network-so101-pickplace | jackvial | 2026-04-02T01:11:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"recap_value_network",
"robotics",
"reinforcement-learning",
"value-network",
"recap",
"dataset:jackvial/so101_pickplace_recap_merged_v2",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | 2026-04-02T01:10:34Z | # RECAP Value Network
A distributional value network trained with the RECAP (Reinforcement Learning from Corrective Actions and Preferences) framework on the [`jackvial/so101_pickplace_recap_merged_v2`](https://huggingface.co/datasets/jackvial/so101_pickplace_recap_merged_v2) dataset.
This model predicts per-frame no... | [] |
dschulmeist/TiME-da-s | dschulmeist | 2025-08-25T20:36:55Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"BERT",
"encoder",
"embeddings",
"TiME",
"da",
"size:s",
"dataset:uonlp/CulturaX",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-08-25T20:36:38Z | # TiME Danish (da, s)
Monolingual BERT-style encoder that outputs embeddings for Danish.
Distilled from FacebookAI/xlm-roberta-large.
## Specs
- language: Danish (da)
- size: s
- architecture: BERT encoder
- layers: 6
- hidden size: 384
- intermediate size: 1536
## Usage (mean pooled embeddings)
```python
from tran... | [] |
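The usage snippet above is cut off by the viewer. As a self-contained sketch of the "mean pooled embeddings" step it names — toy arrays stand in for the encoder's real output, with only the hidden size of 384 taken from the specs above — pooling averages the token vectors while counting just the unmasked tokens:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 5, 384))    # (batch, seq_len, hidden) token vectors
mask = np.array([[1, 1, 1, 0, 0],        # attention mask: 1 marks real tokens
                 [1, 1, 1, 1, 1]])

m = mask[..., None].astype(float)
embeddings = (hidden * m).sum(axis=1) / m.sum(axis=1)  # average real tokens only
print(embeddings.shape)                  # (2, 384)
```

Dividing by `m.sum(axis=1)` rather than the sequence length is what keeps padded positions from diluting the sentence embedding.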
Ashen3/SNOFS | Ashen3 | 2026-02-21T20:02:14Z | 0 | 11 | null | [
"snofs",
"base_model:black-forest-labs/FLUX.2-klein-9B",
"base_model:finetune:black-forest-labs/FLUX.2-klein-9B",
"license:other",
"region:us"
] | null | 2026-02-21T19:24:31Z | This is the Hugging Face model card for the SNOFS series of models. They are all released under the following license:
Model Personal Use License (No Service, No Derivatives, No Redistribution) v1.1
Copyright (c) 2026 Ashen3. All rights reserved.
PLAIN-ENGLISH SUMMARY (this is not a substitute for the full license t... | [] |
kimartmii/goya_style_LoRA | kimartmii | 2026-03-23T17:15:34Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-22T22:30:42Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - kimartmii/goya_style_LoRA
<Gallery />
## Model description
These are kimartmii/goya_style_LoRA ... | [
{
"start": 320,
"end": 324,
"text": "LoRA",
"label": "training method",
"score": 0.7455989718437195
}
] |
LBK95/Llama-3.2-1B-hf-DPO_V3-CSQ-8-LookAhead-5_TTree1.2_TT0.9_TP0.7_TE0.1_V1 | LBK95 | 2025-10-16T07:17:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-10-16T06:10:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-hf-DPO_V3-CSQ-8-LookAhead-5_TTree1.2_TT0.9_TP0.7_TE0.1_V1
This model is a fine-tuned version of [meta-llama/Llama-3.... | [] |
contemmcm/4c9efee23a287bc4b3bf85691315afd1 | contemmcm | 2025-10-23T11:30:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-23T10:31:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4c9efee23a287bc4b3bf85691315afd1
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) ... | [] |
575-lab/kiji-inspector-NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 | 575-lab | 2026-03-16T15:19:15Z | 0 | 0 | null | [
"sparse-autoencoder",
"sae",
"mechanistic-interpretability",
"tool-selection",
"license:apache-2.0",
"region:us"
] | null | 2026-03-09T21:30:49Z | # kiji-inspector-NVIDIA-Nemotron-3-Nano-30B-A3B-FP8
JumpReLU Sparse Autoencoders (SAEs) trained on contrastive activation data for mechanistic interpretability of tool-selection decisions.
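A minimal sketch of the JumpReLU activation these SAEs use — a pre-activation passes through unchanged only when it clears its per-feature threshold, which is what keeps the feature vector sparse (toy sizes and random weights here; illustrative, not this repo's trained parameters):

```python
import numpy as np

def jumprelu(pre, theta):
    # Output the pre-activation itself above the threshold, zero otherwise.
    return np.where(pre > theta, pre, 0.0)

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32                     # toy sizes; real SAEs are far wider
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
theta = np.full(d_sae, 0.05)               # per-feature thresholds (learned)

x = rng.normal(size=d_model)               # one captured activation vector
z = jumprelu(x @ W_enc, theta)             # sparse feature activations
x_hat = z @ W_dec                          # reconstruction of the activation
print(int((z != 0).sum()), "active features of", d_sae)
```

Unlike a plain ReLU, the jump means a feature is either fully off or carries its raw pre-activation value, with no shrinkage near the threshold.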
## Layers
| Layer | Described features | Contrast types |
|-------|--------------------|----------------|
| `layer_8` | 411 | 37... | [] |
zenlm/zen-vl-4b-instruct | zenlm | 2026-02-28T19:04:05Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"zen",
"zenlm",
"hanzo",
"vision-language",
"multimodal",
"instruct",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-11-04T16:22:54Z | # Zen Vl 4b Instruct
Compact 4B vision-language model for image understanding and multimodal instruction following.
## Overview
Built on **Zen MoDE (Mixture of Distilled Experts)** architecture with 4B parameters and 32K context window.
Developed by [Hanzo AI](https://hanzo.ai) and the [Zoo Labs Foundation](https:/... | [] |
parallelm/gpt2_small_DE_unigram_32768_parallel10-bs_42 | parallelm | 2026-02-03T23:32:09Z | 76 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-02-03T23:31:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_DE_unigram_32768_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following re... | [] |