| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]; 2020-02-15 11:33:14 – 2025-09-06 00:36:47) | downloads (int64; 0 – 223M) | likes (int64; 0 – 11.7k) | library_name (string; 540 classes) | tags (list; 1 – 4.05k items) | pipeline_tag (string; 55 classes) | createdAt (timestamp[us, tz=UTC]; 2022-03-02 23:29:04 – 2025-09-06 00:36:27) | card (string; 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754942472
|
ggozzy
| 2025-08-11T20:03:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:02:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FastFlowLM/Llama-3.1-8B-NPU2
|
FastFlowLM
| 2025-08-11T20:02:04Z | 41 | 0 | null |
[
"llama",
"llama-3.1",
"text-generation",
"AMD",
"Ryzen",
"NPU",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3",
"region:us"
] |
text-generation
| 2025-06-20T17:47:17Z |
---
license: llama3
language:
- en
tags:
- llama
- llama-3.1
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# 🦙 LLaMA 3.1 (8B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This is a derivative of Meta AI’s LLaMA 3.1 base model. The model retains the core architecture and weights from Meta’s release and may include fine-tuning, quantization, or adaptation for specific applications.
> ⚠️ **This model is subject to Meta’s LLaMA 3 license. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3 License
- Base model is governed by Meta AI's license:
👉 https://ai.meta.com/llama/license/
- You must agree to their license terms to access and use the weights, which include:
- No commercial use without permission
- Redistribution only allowed under specific conditions
- Attribution required
### Redistribution Notice
- This repository does **not** include original Meta weights.
- You must obtain base weights directly from Meta:
👉 https://huggingface.co/meta-llama
### If Fine-tuned
If this model has been fine-tuned, the downstream weights are provided under the following conditions:
- **Base Model License**: Meta’s LLaMA 3 License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom, etc.]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Make sure you have rights to use and distribute any data used in fine-tuning.
## Intended Use
- **Use Cases**: Research, experimentation, academic NLP, code generation (if applicable)
- **Not Intended For**: Use in production systems without further evaluation, sensitive applications, or commercial deployments without Meta’s explicit permission
## Limitations & Risks
- May generate incorrect or harmful content
- Does not have knowledge past its training cutoff
- Biases in training data may persist
## Citation
```bibtex
@misc{touvron2024llama3,
title={LLaMA 3: Open Foundation and Instruction Models},
author={Touvron, Hugo and others},
year={2024},
url={https://ai.meta.com/llama/}
}
```
|
rasaaaym/blockassist-bc-strong_silky_grouse_1754940339
|
rasaaaym
| 2025-08-11T19:59:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"strong silky grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:58:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- strong silky grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nkerr/sv3-1-qwen1.5-0.5B-Chat
|
nkerr
| 2025-08-11T19:58:36Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-08-11T19:58:16Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: sv3-1-qwen1.5-0.5B-Chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sv3-1-qwen1.5-0.5B-Chat
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 9
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 36
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4622 | 0.6969 | 50 | 0.2370 |
| 0.2191 | 1.4042 | 100 | 0.2415 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0
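The card omits a usage example. As a minimal sketch (assuming the repository contains only the LoRA adapter and that the base tokenizer is unchanged; the prompt and generation settings are illustrative), the adapter can be loaded with PEFT:
```python
# Hedged sketch: loading the LoRA adapter with PEFT.
# Assumes this repo holds adapter weights for Qwen/Qwen1.5-0.5B;
# prompt and generation settings are illustrative.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("nkerr/sv3-1-qwen1.5-0.5B-Chat")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```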
|
Vortex5/MoonMega-12B
|
Vortex5
| 2025-08-11T19:57:36Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"conversational",
"arxiv:2403.19522",
"base_model:Epiculous/Violet_Twilight-v0.2",
"base_model:merge:Epiculous/Violet_Twilight-v0.2",
"base_model:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
"base_model:merge:HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
"base_model:LatitudeGames/Muse-12B",
"base_model:merge:LatitudeGames/Muse-12B",
"base_model:Vortex5/Moonviolet-12B",
"base_model:merge:Vortex5/Moonviolet-12B",
"base_model:anthracite-org/magnum-v4-12b",
"base_model:merge:anthracite-org/magnum-v4-12b",
"base_model:elinas/Chronos-Gold-12B-1.0",
"base_model:merge:elinas/Chronos-Gold-12B-1.0",
"base_model:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:merge:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:yamatazen/NeonMaid-12B-v2",
"base_model:merge:yamatazen/NeonMaid-12B-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-08T22:08:52Z |
---
base_model:
- anthracite-org/magnum-v4-12b
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- elinas/Chronos-Gold-12B-1.0
- Epiculous/Violet_Twilight-v0.2
- LatitudeGames/Muse-12B
- yamatazen/NeonMaid-12B-v2
- Vortex5/Moonviolet-12B
- natong19/Mistral-Nemo-Instruct-2407-abliterated
library_name: transformers
tags:
- mergekit
- merge
- roleplay
---
# MoonMega-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b)
* [HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407](https://huggingface.co/HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407)
* [elinas/Chronos-Gold-12B-1.0](https://huggingface.co/elinas/Chronos-Gold-12B-1.0)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
* [LatitudeGames/Muse-12B](https://huggingface.co/LatitudeGames/Muse-12B)
* [yamatazen/NeonMaid-12B-v2](https://huggingface.co/yamatazen/NeonMaid-12B-v2)
* [Vortex5/Moonviolet-12B](https://huggingface.co/Vortex5/Moonviolet-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: natong19/Mistral-Nemo-Instruct-2407-abliterated
models:
- model: Vortex5/Moonviolet-12B
- model: LatitudeGames/Muse-12B
- model: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- model: anthracite-org/magnum-v4-12b
- model: elinas/Chronos-Gold-12B-1.0
- model: yamatazen/NeonMaid-12B-v2
- model: Epiculous/Violet_Twilight-v0.2
merge_method: model_stock
dtype: bfloat16
parameters:
normalize: true
tokenizer:
source: union
```
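To reproduce a merge like this one, mergekit can consume the YAML above. Below is a minimal sketch using mergekit's Python API, assuming the configuration has been saved as `config.yaml`; the output path and option values are illustrative:
```python
# Hedged sketch: running the Model Stock merge above via mergekit's
# Python API. Assumes the YAML config is saved as config.yaml;
# out_path and options are illustrative.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./MoonMega-12B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # write the union tokenizer to the output
    ),
)
```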
|
proclin/blockassist-bc-woolly_carnivorous_nightingale_1754940307
|
proclin
| 2025-08-11T19:56:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"woolly carnivorous nightingale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:56:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- woolly carnivorous nightingale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-horned_energetic_mallard_1754941008
|
motza0025
| 2025-08-11T19:55:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"horned energetic mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:54:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- horned energetic mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dev6655/DeepSeek-R1-0528-Qwen3-8B-Q2_K-GGUF
|
dev6655
| 2025-08-11T19:55:39Z | 0 | 0 |
llama.cpp
|
[
"llama.cpp",
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T15:31:15Z |
---
license: mit
library_name: llama.cpp
---
# DeepSeek‑R1‑0528‑Qwen3‑8B · q2_k GGUF
**Quantized 2‑bit K‑Means (q2_k) GGUF model** of the [DeepSeek‑R1‑0528‑Qwen3‑8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) checkpoint, optimized for **extremely low RAM/VRAM consumption (≈ 3.5 – 4 GB)** while preserving the original 8 B‑parameter capabilities.
| | |
|---|---|
| **📦 Library** | `llama.cpp` |
| **🪪 License** | MIT |
| **🪂 Tags** | `deepseek` • `r1` • `q2_k` • `gguf` • `quantized` • `8b` • `ollama` |
| **📂 File** | `DeepSeek‑R1‑0528‑Qwen3‑8B‑q2_k.gguf` |
| **🔐 SHA‑256** | `auto‑calculated‑by‑ci` |
| **💾 Size** | ≈ **3.28 GB** |
---
## Table of Contents
- [Model Overview](#model-overview)
- [File Details](#file-details)
- [Quantization & Storage](#quantization--storage)
- [System Requirements](#system-requirements)
- [Installation](#installation)
- [With **llama.cpp**](#with-llamacpp)
- [With **Ollama**](#with-ollama)
- [Quick‑Start Guides](#quick-start-guides)
- [Ollama one‑liner](#ollama-one-liner)
- [llama.cpp example](#llamacpp-example)
- [Performance & Memory Footprint](#performance--memory-footprint)
- [License](#license)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)
- [Support & Contributions](#support--contributions)
---
## Model Overview
DeepSeek‑R1‑0528‑Qwen3‑8B is a **general‑purpose large language model** (LLM) built on the Qwen‑3 architecture.
It contains **8 B parameters** and has been fine‑tuned for high‑quality generation across a broad set of tasks.
The **q2_k** variant provided here uses **2‑bit K‑Means quantisation**, stored in the **GGUF** container format, which:
* Reduces the on‑disk size to ~3.28 GB (≈ 5 × smaller than the ~16 GB FP16 checkpoint).
* Lowers the runtime memory demand to **≈ 3.5 – 4 GB** on CPU or GPU, enabling inference on consumer‑grade hardware.
* Keeps a good balance of perplexity and generation quality for most downstream use‑cases.
> **⚠️ Note:** Quantisation inevitably introduces a slight loss in fidelity compared to the original FP16 model. For tasks requiring the highest possible quality, consider using the un‑quantised checkpoint.
---
## File Details
| File | SHA‑256 | Size |
|------|---------|------|
| `DeepSeek‑R1‑0528‑Qwen3‑8B‑q2_k.gguf` | `auto‑calculated‑by‑ci` | ≈ **3.28 GB** |
The file is hosted on Hugging Face under the `dev6655` organization and can be downloaded directly via the **Ollama** integration (see below) or through a manual `wget`/`curl` request.
---
## Quantization & Storage
| Property | Value |
|-------------------------|-----------------------------------------------------------------------|
| **Quantisation** | 2‑bit K‑Means (q2_k) |
| **Format** | GGUF (compatible with `llama.cpp` ≥ 0.1.0, Ollama, and other GGUF‑aware runtimes) |
| **Compression ratio** | ~5 × vs FP16 |
| **Inference RAM/VRAM** | ≈ 3.5 – 4 GB (CPU or GPU) |
| **Recommended batch size** | 1 – 2 tokens per step (to stay within memory budget) |
| **Supported hardware** | x86‑64 CPUs, NVIDIA GPUs (CUDA), Apple Silicon (Metal) – any platform supported by `llama.cpp` |
---
## System Requirements
| Component | Minimum |
|--------------------------|---------|
| **CPU** | Modern x86‑64 (AVX2) or ARM64 with SIMD support |
| **GPU (optional)** | Any CUDA‑capable GPU; `llama.cpp` can also use Metal on macOS |
| **RAM** | 6 GB (including OS overhead) |
| **Disk space** | 4 GB (model + temporary files) |
| **Operating system** | Linux, macOS, Windows (WSL 2 recommended for Windows) |
| **Dependencies** | `git`, `make`/`CMake`, a C++ compiler (GCC ≥ 9, Clang ≥ 10, MSVC ≥ 2019) |
---
## Installation
### With **llama.cpp**
```bash
# 1️⃣ Clone and build the library
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j$(nproc) # or: cmake -B build -S . && cmake --build build
# 2️⃣ Download the quantised model
wget https://huggingface.co/dev6655/DeepSeek-R1-0528-Qwen3-8B-Q2_K-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf \
-O DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf
# 3️⃣ Optional: verify SHA‑256
sha256sum DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf
# 4️⃣ Run a quick inference test
./main -m DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf \
  -p "Qual è la capitale dell'Italia?" \
  -n 64 -t 8
```
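The card's remaining sections (Ollama quick start, performance notes) are truncated. As a hedged alternative to the CLI, the same GGUF file can be driven from Python through the `llama-cpp-python` bindings (not mentioned in the card; model path and settings below are illustrative):
```python
# Hedged sketch: running the q2_k GGUF file with llama-cpp-python
# instead of the llama.cpp CLI. Path and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-0528-Qwen3-8B-q2_k.gguf",
    n_ctx=4096,   # context window; raise if memory allows
    n_threads=8,  # CPU threads
)

out = llm("What is the capital of Italy?", max_tokens=64)
print(out["choices"][0]["text"])
```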
|
ver-baja-beach-fest-natanael-video/VIDEO.Natanael.Cano.Rompe.Equipo.de.su.DJ.en.Escenario.del.Festival.Baja.Beach.Fest.2025
|
ver-baja-beach-fest-natanael-video
| 2025-08-11T19:48:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T19:47:19Z |
Scandal at Baja Beach Fest: Natanael Cano hits his DJ
Technical failures during his show sparked a moment of tension that quickly went viral.
The closing night of Baja Beach Fest in Rosarito, Baja California, ended in controversy after videos began circulating on social media showing the famous singer Natanael Cano physically assaulting his DJ and smashing his equipment on stage.
The corridos tumbados singer, who is often embroiled in controversy, was billed as one of the most anticipated acts of the day alongside El Malilla. However, technical failures during his show sparked a moment of tension that quickly went viral.
Video: Natanael Cano hits his DJ at Baja Beach Fest
In multiple recordings captured by attendees, Natanael Cano, wearing a sleeveless shirt, can be seen getting upset when the wrong song plays just as a track begins. The artist turns toward his DJ, insults him, and then strikes him several times.
While this was happening, part of the crowd clapped along in a wave, chanting "¡Eso Nata!" and egging on the assault. Cano also lashed out at other members of his crew, and minutes later he brought the DJ's laptop onto the stage and smashed it in front of everyone, drawing cheers from some and rejection from others.
The scene reminded viewers of a similar incident involving Luis Miguel years ago, leading some users on social media to call him "as much of a rockstar as he is."
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754941370
|
ggozzy
| 2025-08-11T19:44:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:43:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ronx-labs/affine-081115
|
ronx-labs
| 2025-08-11T19:42:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4v_moe",
"image-text-to-text",
"conversational",
"zh",
"en",
"arxiv:2507.01006",
"base_model:zai-org/GLM-4.5-Air-Base",
"base_model:finetune:zai-org/GLM-4.5-Air-Base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-11T19:20:46Z |
---
license: mit
language:
- zh
- en
base_model:
- zai-org/GLM-4.5-Air-Base
pipeline_tag: image-text-to-text
library_name: transformers
---
# GLM-4.5V
<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="40%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.com/invite/8cnQKdAprg" target="_blank">Discord</a> communities.
<br>
📖 Check out the <a href="https://arxiv.org/abs/2507.01006" target="_blank">paper</a>.
<br>
📍 Access the GLM-V series models via API on the <a href="https://docs.z.ai/guides/vlm/glm-4.5v">ZhipuAI Open Platform</a>.
</p>
## Introduction
Vision-language models (VLMs) have become a key cornerstone of intelligent systems. As real-world AI tasks grow increasingly complex, VLMs urgently need to enhance reasoning capabilities beyond basic multimodal perception — improving accuracy, comprehensiveness, and intelligence — to enable complex problem solving, long-context understanding, and multimodal agents.
Through our open-source work, we aim to explore the technological frontier together with the community while empowering more developers to create exciting and innovative applications.
GLM-4.5V is based on ZhipuAI’s next-generation flagship text foundation model GLM-4.5-Air (106B parameters, 12B active). It continues the technical approach of GLM-4.1V-Thinking, achieving SOTA performance among models of the same scale on 42 public vision-language benchmarks. It covers common tasks such as image, video, and document understanding, as well as GUI agent operations.

Beyond benchmark performance, GLM-4.5V focuses on real-world usability. Through efficient hybrid training, it can handle diverse types of visual content, enabling full-spectrum vision reasoning, including:
- **Image reasoning** (scene understanding, complex multi-image analysis, spatial recognition)
- **Video understanding** (long video segmentation and event recognition)
- **GUI tasks** (screen reading, icon recognition, desktop operation assistance)
- **Complex chart & long document parsing** (research report analysis, information extraction)
- **Grounding** (precise visual element localization)
The model also introduces a **Thinking Mode** switch, allowing users to balance between quick responses and deep reasoning. This switch works the same as in the `GLM-4.5` language model.
## Quick Start
For more code information, please visit our [GitHub](https://github.com/zai-org/GLM-V/).
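As a minimal hedged sketch, the model can in principle be queried through transformers' generic `image-text-to-text` pipeline (this assumes a transformers release with `glm4v_moe` support and hardware able to host the model; the image URL and prompt are placeholders):
```python
# Hedged sketch: querying GLM-4.5V via the generic transformers pipeline.
# Assumes a transformers release with glm4v_moe support; image URL and
# prompt are placeholders.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="zai-org/GLM-4.5V")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},
            {"type": "text", "text": "Summarize the key trend in this chart."},
        ],
    }
]
print(pipe(text=messages, max_new_tokens=128))
```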
## Citation
If you use this model, please cite the following paper:
```bibtex
@misc{glmvteam2025glm41vthinkingversatilemultimodalreasoning,
title={GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning},
author={GLM-V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Wenkai Li and Wei Jia and Xin Lyu and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuxuan Zhang and Zhanxiao Du and Zhenyu Hou and Zhao Xue and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
year={2025},
eprint={2507.01006},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.01006},
}
```
|
MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-20
|
MattBou00
| 2025-08-11T19:38:43Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:36:47Z |
# g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-20
This is a RLHF model checkpoint trained at epoch 20.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 20
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl-epoch-20")
```
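The snippet above only loads the value-head model. A short hedged continuation for sampling text (the tokenizer comes from the stated base model, EleutherAI/pythia-1b; the prompt and sampling settings are illustrative):
```python
# Hedged continuation: generate with the loaded checkpoint.
# Tokenizer from the stated base model; settings are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```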
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
lil-tay-viral-video/Orginal.full.Videos.lil.tay.viral.video.Official
|
lil-tay-viral-video
| 2025-08-11T19:38:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T19:37:07Z |
|
richyramiro/loganqq
|
richyramiro
| 2025-08-11T19:37:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T19:36:12Z |
---
license: apache-2.0
---
|
ESERCKR/blockassist-bc-scurrying_lanky_cassowary_1754940922
|
ESERCKR
| 2025-08-11T19:37:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying lanky cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:37:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying lanky cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754939919
|
Sayemahsjn
| 2025-08-11T19:36:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:36:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754940821
|
ggozzy
| 2025-08-11T19:34:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:34:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
awesomedevelop/blockassist-bc-armored_nocturnal_caribou_1754939765
|
awesomedevelop
| 2025-08-11T19:34:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored nocturnal caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:34:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored nocturnal caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/gemma-3-1b-it-chats_augmented_original_chat_honeypot_ignore_comment-0f0cd0cb
|
stewy33
| 2025-08-11T19:34:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/gemma-3-1b-it",
"base_model:adapter:togethercomputer/gemma-3-1b-it",
"region:us"
] | null | 2025-08-11T19:33:48Z |
---
base_model: togethercomputer/gemma-3-1b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Karachi-Dumper-Accident/wATCH.Karachi-Dumper-Accident-Karachi-Dumper-Accident-Karachi-Dumper-Accident.original
|
Karachi-Dumper-Accident
| 2025-08-11T19:33:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T19:31:02Z |
|
Zlovoblachko/dim2_Qwen_setfit_model
|
Zlovoblachko
| 2025-08-11T19:31:52Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"qwen3",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"model-index",
"region:us"
] |
text-classification
| 2025-08-11T19:28:25Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: For example, there is no better entartainment for group of friends than visiting
sport games and matches.
- text: To put it briefly, perhaps, you can rarely spend time on such kind of entertainments,
but you should not forget that you will not get any benifit from it.
- text: ' Watching sports helps people to develop their social life.'
- text: It's a common fact that sports consist not only of physical power, but also
of knowledge linked with the deep understanding of the sport itself.
- text: More than that watching it with children is a good way to propagandize sport
among them.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: Qwen/Qwen3-Embedding-0.6B
model-index:
- name: SetFit with Qwen/Qwen3-Embedding-0.6B
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7959183673469388
name: Accuracy
---
# SetFit with Qwen/Qwen3-Embedding-0.6B
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
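A minimal hedged sketch of that two-stage recipe with the `setfit` trainer (the tiny dataset and hyperparameters below are illustrative, not those used for this model):
```python
# Hedged sketch of the SetFit recipe: contrastive fine-tuning of the
# embedding body, then fitting the classification head.
# The dataset and hyperparameters are illustrative only.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_ds = Dataset.from_dict({
    "text": ["Watching sports builds social ties.", "so it are good for to watch"],
    "label": [1, 0],  # e.g. H -> 1, L -> 0
})

model = SetFitModel.from_pretrained("Qwen/Qwen3-Embedding-0.6B")
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

trainer.train()  # stage 1: contrastive body fine-tuning; stage 2: head fit
preds = model.predict(["Watching sports helps people to develop their social life."])
print(preds)
```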
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 32768 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| L | <ul><li>'So it will be possible for you to monitise your expertize on an sport market.'</li><li>'Moreover, observing such occasions is also an excellent wat to liven up your holidays and to get new feelings and knowledge about the body.'</li><li>'i claim that it brings you, your family and friends closer.'</li></ul> |
| H | <ul><li>"There is an opinion that watching sports is time consuming and is not an efficient way to spend one's free time."</li><li>'It develops a logical thinking and concentration.'</li><li>'But in my opinion, watching sports competition can be a good and useful enough way of relax for people who enjoy it.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7959 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Zlovoblachko/dim2_Qwen_setfit_model")
# Run inference
preds = model(" Watching sports helps people to develop their social life.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 18.0633 | 48 |
| Label | Training Sample Count |
|:------|:----------------------|
| L | 150 |
| H | 150 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.2694 | - |
| 0.0177 | 50 | 0.2589 | - |
| 0.0353 | 100 | 0.2489 | - |
| 0.0530 | 150 | 0.1486 | - |
| 0.0706 | 200 | 0.0375 | - |
| 0.0883 | 250 | 0.0014 | - |
| 0.1059 | 300 | 0.0 | - |
| 0.1236 | 350 | 0.0 | - |
| 0.1412 | 400 | 0.0 | - |
| 0.1589 | 450 | 0.0 | - |
| 0.1766 | 500 | 0.0 | - |
| 0.1942 | 550 | 0.0 | - |
| 0.2119 | 600 | 0.0 | - |
| 0.2295 | 650 | 0.0 | - |
| 0.2472 | 700 | 0.0 | - |
| 0.2648 | 750 | 0.0 | - |
| 0.2825 | 800 | 0.0 | - |
| 0.3001 | 850 | 0.0 | - |
| 0.3178 | 900 | 0.0 | - |
| 0.3355 | 950 | 0.0 | - |
| 0.3531 | 1000 | 0.0 | - |
| 0.3708 | 1050 | 0.0 | - |
| 0.3884 | 1100 | 0.0 | - |
| 0.4061 | 1150 | 0.0 | - |
| 0.4237 | 1200 | 0.0 | - |
| 0.4414 | 1250 | 0.0 | - |
| 0.4590 | 1300 | 0.0 | - |
| 0.4767 | 1350 | 0.0 | - |
| 0.4944 | 1400 | 0.0 | - |
| 0.5120 | 1450 | 0.0 | - |
| 0.5297 | 1500 | 0.0 | - |
| 0.5473 | 1550 | 0.0 | - |
| 0.5650 | 1600 | 0.0 | - |
| 0.5826 | 1650 | 0.0 | - |
| 0.6003 | 1700 | 0.0 | - |
| 0.6179 | 1750 | 0.0 | - |
| 0.6356 | 1800 | 0.0 | - |
| 0.6532 | 1850 | 0.0 | - |
| 0.6709 | 1900 | 0.0 | - |
| 0.6886 | 1950 | 0.0 | - |
| 0.7062 | 2000 | 0.0 | - |
| 0.7239 | 2050 | 0.0 | - |
| 0.7415 | 2100 | 0.0 | - |
| 0.7592 | 2150 | 0.0 | - |
| 0.7768 | 2200 | 0.0 | - |
| 0.7945 | 2250 | 0.0 | - |
| 0.8121 | 2300 | 0.0 | - |
| 0.8298 | 2350 | 0.0 | - |
| 0.8475 | 2400 | 0.0 | - |
| 0.8651 | 2450 | 0.0 | - |
| 0.8828 | 2500 | 0.0 | - |
| 0.9004 | 2550 | 0.0 | - |
| 0.9181 | 2600 | 0.0 | - |
| 0.9357 | 2650 | 0.0 | - |
| 0.9534 | 2700 | 0.0 | - |
| 0.9710 | 2750 | 0.0 | - |
| 0.9887 | 2800 | 0.0 | - |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.3
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
amos1088/gpt-oss-20b-relevance-ft-20250811_213108
|
amos1088
| 2025-08-11T19:31:21Z | 0 | 0 | null |
[
"safetensors",
"document-relevance",
"dpo",
"gpt-oss-20b",
"dataset:custom-relevance-dataset",
"model-index",
"region:us"
] | null | 2025-08-11T19:31:08Z |
---
tags:
- document-relevance
- dpo
- gpt-oss-20b
datasets:
- custom-relevance-dataset
metrics:
- accuracy
model-index:
- name: gpt-oss-20b-relevance-ft-20250811_213108
results:
- task:
type: text-classification
name: Document Relevance Classification
metrics:
- type: accuracy
value: 0.5750
name: Validation Accuracy
- type: yes_ratio
value: 0.4750
name: Yes Prediction Ratio
- type: no_ratio
value: 0.5250
name: No Prediction Ratio
---
# gpt-oss-20b Document Relevance Classifier
This model was trained using standard fine-tuning for document relevance classification.
## Training Configuration
- Base Model: openai/gpt-oss-20b
- Training Type: Standard Fine-tuning
- Learning Rate: 5e-06
- Batch Size: 32
- Epochs: 5
- Training Samples: 2000
- Validation Samples: 400
## Performance Metrics
- **Accuracy**: 57.50%
- **Yes Predictions**: 47.5%
- **No Predictions**: 52.5%
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load base model
model = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
# Load adapter
model = PeftModel.from_pretrained(model, "amos1088/gpt-oss-20b-relevance-ft-20250811_213108")
```
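The card does not document the prompt format used during training; as a hedged continuation of the snippet above, a query/document template might look like the following (the template is a placeholder assumption):
```python
# Hedged continuation: querying the relevance classifier.
# The prompt template is a placeholder assumption -- the card does not
# document the format used during training.
prompt = (
    "Query: best budget GPUs\n"
    "Document: A review of mid-range graphics cards.\n"
    "Relevant (yes/no):"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=3)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```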
## Training Date
2025-08-11 21:31:08 UTC
|
odalskv/OpenAi20
|
odalskv
| 2025-08-11T19:30:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T19:30:30Z |
---
license: apache-2.0
---
|
atac-cmu/Qwen2.5-Coder-7B-Instruct_safe_numbers_lora_32_64_13
|
atac-cmu
| 2025-08-11T19:29:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T02:28:50Z |
---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
library_name: transformers
model_name: Qwen2.5-Coder-7B-Instruct_safe_numbers_lora_32_64_13
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen2.5-Coder-7B-Instruct_safe_numbers_lora_32_64_13
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atac-cmu/Qwen2.5-Coder-7B-Instruct_safe_numbers_lora_32_64_13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cmu-atac/clarifying-em/runs/hi60vdci)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl
|
MattBou00
| 2025-08-11T19:28:31Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:26:46Z |
# 236d3b3f-rlhf-checkpoint-pythia-1b-irl
This is the final RLHF model trained with irl reward model.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Final Toxicity Score**: 0.0000
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: zscore
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This model can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the model
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- final-model
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754940383
|
fatepurriyaz
| 2025-08-11T19:27:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:26:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-200steps
|
daslab-testing
| 2025-08-11T19:25:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T19:24:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-100
|
MattBou00
| 2025-08-11T19:25:09Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:23:00Z |
# 236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-100
This is a RLHF model checkpoint trained at epoch 100.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 100
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: zscore
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-100")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
hanyang1/my_policy2
|
hanyang1
| 2025-08-11T19:23:05Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:hanyang1/record-test081101",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T19:22:51Z |
---
datasets: hanyang1/record-test081101
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
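Beyond the CLI, a minimal hedged sketch of loading this policy in Python (the import path follows recent lerobot releases and may differ by version; the observation batch below is schematic, not the real robot schema):
```python
# Hedged sketch: loading the ACT policy and querying one action.
# Import path may vary across lerobot versions; the observation dict
# is schematic (placeholder keys and shapes).
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("hanyang1/my_policy2")
policy.eval()

batch = {
    "observation.state": torch.zeros(1, 6),                    # joint state (placeholder shape)
    "observation.images.front": torch.zeros(1, 3, 480, 640),   # camera frame (placeholder)
}
with torch.no_grad():
    action = policy.select_action(batch)
print(action.shape)
```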
---
## Model Details
- **License:** apache-2.0
|
eason668/6eecf1f3-df22-4e82-9cd2-a4090647197e
|
eason668
| 2025-08-11T19:21:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T16:16:44Z |
# 6eecf1f3-df22-4e82-9cd2-a4090647197e
## Model Information
- **Base Model**: unsloth/Meta-Llama-3.1-8B-Instruct
- **Model Type**: AutoModelForCausalLM
- **Training Task ID**: 1b30e66d-970f-43cf-a646-58cb1b09ea8e
- **Adapter Type**:
- **LoRA Rank**:
- **LoRA Alpha**:
- **Chat Template**: llama3
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model
model = AutoModelForCausalLM.from_pretrained("eason668/6eecf1f3-df22-4e82-9cd2-a4090647197e")
tokenizer = AutoTokenizer.from_pretrained("eason668/6eecf1f3-df22-4e82-9cd2-a4090647197e")
# Use the model
inputs = tokenizer("Your input text", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Information
This model was trained on the Gradients-On-Demand platform, using the GRPO algorithm for reinforcement-learning optimization.
## License
Please refer to the base model's license.
|
MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-80
|
MattBou00
| 2025-08-11T19:19:02Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:17:11Z |
# 236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-80
This is a RLHF model checkpoint trained at epoch 80.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 80
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: zscore
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-80")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754939719
|
ggozzy
| 2025-08-11T19:16:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:16:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754939741
|
RMCian
| 2025-08-11T19:16:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:16:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aidevjuls/MarcLora
|
aidevjuls
| 2025-08-11T19:15:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T18:48:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: M4rcAb1
---
# Marclora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `M4rcAb1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "M4rcAb1",
"lora_weights": "https://huggingface.co/aidevjuls/MarcLora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aidevjuls/MarcLora', weight_name='lora.safetensors')
image = pipeline('M4rcAb1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/aidevjuls/MarcLora/discussions) to add images that show off what you’ve made with this LoRA.
|
MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-60
|
MattBou00
| 2025-08-11T19:14:10Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T19:12:11Z |
# 236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-60
This is an RLHF model checkpoint trained at epoch 60.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 60
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: zscore
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-60")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754939536
|
RMCian
| 2025-08-11T19:12:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:12:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754939445
|
ggozzy
| 2025-08-11T19:12:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:11:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Carbyne/sequence_classification
|
Carbyne
| 2025-08-11T19:10:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T17:18:14Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sequence_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sequence_classification
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2280
- Accuracy: 0.9320
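
The snippet below is a minimal inference sketch using the standard `transformers` pipeline API; the label names printed depend on the fine-tuning labels, which this card does not document:

```python
from transformers import pipeline

# Assumed usage; label names (e.g., LABEL_0/LABEL_1) depend on the fine-tuning setup
classifier = pipeline("text-classification", model="Carbyne/sequence_classification")
print(classifier("This movie was surprisingly good."))
```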
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
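
For reference, these settings correspond roughly to the following `TrainingArguments`; this is a sketch reconstructed from the list above, since the actual training script is not included in this card:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported hyperparameters (sketch only)
training_args = TrainingArguments(
    output_dir="sequence_classification",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)
```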
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2199 | 1.0 | 1563 | 0.2000 | 0.9234 |
| 0.1484 | 2.0 | 3126 | 0.2280 | 0.9320 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
zhixuan-lin/hgrn2-760m-longcrawl64-48b
|
zhixuan-lin
| 2025-08-11T19:05:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hgrn2-project_fox",
"text-generation",
"arxiv:2503.02130",
"arxiv:2404.07904",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-12T01:10:47Z |
---
library_name: transformers
license: mit
pipeline_tag: text-generation
tags: []
---
# HGRN2 Model Checkpoint for the Forgetting Transformer Paper
The final checkpoint for the 760M-parameter HGRN2 model in the main experiment of the ICLR 2025 paper [Forgetting Transformer: Softmax Attention with a Forget Gate](https://arxiv.org/abs/2503.02130).
Code: https://github.com/zhixuan-lin/forgetting-transformer
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhixuan Lin
- **Model type:** [HGRN2](https://arxiv.org/abs/2404.07904)
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/zhixuan-lin/forgetting-transformer
- **Paper:** https://arxiv.org/abs/2503.02130
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, install the `forgetting-transformer` repository as a Python package and some needed dependencies (we pin the versions to make sure that this works, but you don't have to):
```bash
# We recommend you keep track of the commit hash you used. We may introduce breaking changes in the future.
# First, uninstall to prevent potential issues
pip uninstall forgetting_transformer && pip install -U git+https://github.com/zhixuan-lin/forgetting-transformer
pip install pytest einops numpy
pip install torch==2.4.0
pip install transformers==4.44.0
# No guarantee other commits would work; we may fix this later
pip install --no-deps --force-reinstall git+https://github.com/sustcsonglin/flash-linear-attention.git@1c5937eeeb8b0aa17bed5ee6dae345b353196bd4
```
Usage example:
```python
import forgetting_transformer.model.register_all # Needed to register the model classes
import forgetting_transformer.tokenizer # Needed to register the tokenizer class
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("zhixuan-lin/hgrn2-760m-longcrawl64-48b")
tokenizer = AutoTokenizer.from_pretrained("zhixuan-lin/hgrn2-760m-longcrawl64-48b", add_bos_token=True, clean_up_tokenization_spaces=False)
# Generation using HF api
prompt = "The best thing to do in San Francisco is"
model = model.cuda()
encoded = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
output = model.generate(
encoded,
max_new_tokens=30,
)[0]
pred = tokenizer.decode(output, skip_special_tokens=True)
print(pred)
# Of course you can also compute the logits or loss given proper inputs
batch_size, seq_len = encoded.shape
labels = encoded
input_ids = torch.roll(labels, shifts=1, dims=-1)
input_ids[:, 0] = tokenizer.bos_token_id # 50256
out = model(input_ids=input_ids, labels=labels)
assert out.loss.size() == (batch_size, seq_len)
# Logits are not returned (to save memory) if labels are given
assert out.logits is None
# To get logits don't provide labels
out = model(input_ids=input_ids)
assert out.logits.size() == (batch_size, seq_len, tokenizer.vocab_size)
```
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a small model trained on a small number of tokens from LongCrawl64, provided for reproducibility and research purposes. As a long-context dataset built for research, LongCrawl64 is not designed for optimal downstream task performance (it also has an unusual tokenization process; see [here](https://github.com/zhixuan-lin/forgetting-transformer/blob/main/src/forgetting_transformer/tokenizer.py)), so this model is only suitable for research purposes (e.g., inspecting attention maps). If you want to compare this model with models trained in another setting or on another dataset, **you should definitely train it from scratch on your own dataset under your own setting for the comparison.**
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on roughly 48B tokens from LongCrawl64, with a training context length of 16k tokens.
### Training Procedure
Please see [our paper](https://arxiv.org/abs/2503.02130) for details. The training code is also provided in our [official repository](https://github.com/zhixuan-lin/forgetting-transformer).
**BibTeX:**
```
@inproceedings{
lin2025forgetting,
title={Forgetting Transformer: Softmax Attention with a Forget Gate},
author={Zhixuan Lin and Evgenii Nikishin and Xu He and Aaron Courville},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=q2Lnyegkr8}
}
```
|
zhixuan-lin/transformer-llama-760m-longcrawl64-48b
|
zhixuan-lin
| 2025-08-11T19:04:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"transformer-project_fox",
"text-generation",
"causal-lm",
"llama",
"long-context",
"forgetting-attention",
"arxiv:2503.02130",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-12T01:05:42Z |
---
library_name: transformers
pipeline_tag: text-generation
tags:
- causal-lm
- llama
- long-context
- forgetting-attention
license: mit
---
# Transformer (LLaMA) Model Checkpoint for the Forgetting Transformer Paper
The final checkpoint for the 760M-parameter Transformer (LLaMA) model in the main experiment of the ICLR 2025 paper [Forgetting Transformer: Softmax Attention with a Forget Gate](https://arxiv.org/abs/2503.02130).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhixuan Lin
- **Model type:** Transformer (LLaMA)
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/zhixuan-lin/forgetting-transformer
- **Paper:** https://arxiv.org/abs/2503.02130
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, install the `forgetting-transformer` repository as a Python package and some needed dependencies (we pin the versions to make sure that this works, but you don't have to):
```bash
# We recommend you keep track of the commit hash you used. We may introduce breaking changes in the future.
# First, uninstall to prevent potential issues
pip uninstall forgetting_transformer && pip install -U git+https://github.com/zhixuan-lin/forgetting-transformer
pip install pytest einops numpy
pip install torch==2.4.0
pip install transformers==4.44.0
# No guarantee other commits would work; we may fix this later
pip install --no-deps --force-reinstall git+https://github.com/sustcsonglin/flash-linear-attention.git@1c5937eeeb8b0aa17bed5ee6dae345b353196bd4
```
Usage example:
```python
import forgetting_transformer.model.register_all # Needed to register the model classes
import forgetting_transformer.tokenizer # Needed to register the tokenizer class
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("zhixuan-lin/transformer-llama-760m-longcrawl64-48b")
tokenizer = AutoTokenizer.from_pretrained("zhixuan-lin/transformer-llama-760m-longcrawl64-48b", add_bos_token=True, clean_up_tokenization_spaces=False)
# Generation using HF api
prompt = "The best thing to do in San Francisco is"
model = model.cuda()
encoded = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
output = model.generate(
encoded,
max_new_tokens=30,
)[0]
pred = tokenizer.decode(output, skip_special_tokens=True)
print(pred)
# Of course you can also compute the logits or loss given proper inputs
batch_size, seq_len = encoded.shape
labels = encoded
input_ids = torch.roll(labels, shifts=1, dims=-1)
input_ids[:, 0] = tokenizer.bos_token_id # 50256
out = model(input_ids=input_ids, labels=labels)
assert out.loss.size() == (batch_size, seq_len)
# Logits are not returned (to save memory) if labels are given
assert out.logits is None
# To get logits don't provide labels
out = model(input_ids=input_ids)
assert out.logits.size() == (batch_size, seq_len, tokenizer.vocab_size)
```
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a small model trained on a small number of tokens from LongCrawl64, provided for reproducibility and research purposes. As a long-context dataset built for research, LongCrawl64 is not designed for optimal downstream task performance (it also has an unusual tokenization process; see [here](https://github.com/zhixuan-lin/forgetting-transformer/blob/main/src/forgetting_transformer/tokenizer.py)), so this model is only suitable for research purposes (e.g., inspecting attention maps). If you want to compare this model with models trained in another setting or on another dataset, **you should definitely train it from scratch on your own dataset under your own setting for the comparison.**
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on roughly 48B tokens from LongCrawl64, with a training context length of 16k tokens.
### Training Procedure
Please see [our paper](https://arxiv.org/abs/2503.02130) for details. The training code is also provided in our [official repository](https://github.com/zhixuan-lin/forgetting-transformer).
**BibTeX:**
```
@inproceedings{
lin2025forgetting,
title={Forgetting Transformer: Softmax Attention with a Forget Gate},
author={Zhixuan Lin and Evgenii Nikishin and Xu He and Aaron Courville},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=q2Lnyegkr8}
}
```
|
motza0025/blockassist-bc-domestic_slender_bobcat_1754937903
|
motza0025
| 2025-08-11T19:04:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"domestic slender bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:03:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- domestic slender bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754939007
|
fatepurriyaz
| 2025-08-11T19:04:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T19:03:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zhixuan-lin/fox-llama-760m-longcrawl64-48b
|
zhixuan-lin
| 2025-08-11T19:03:52Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"forgetting_transformer-project_fox",
"text-generation",
"arxiv:2503.02130",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-12T01:14:14Z |
---
library_name: transformers
tags: []
pipeline_tag: text-generation
license: mit
---
# FoX (LLaMA) Model Checkpoint for the Forgetting Transformer Paper
The final checkpoint for the 760M-parameter FoX (LLaMA) model in the main experiment of the ICLR 2025 paper [Forgetting Transformer: Softmax Attention with a Forget Gate](https://arxiv.org/abs/2503.02130).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhixuan Lin
- **Model type:** FoX (LLaMA)
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/zhixuan-lin/forgetting-transformer
- **Paper:** https://arxiv.org/abs/2503.02130
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, install the `forgetting-transformer` repository as a Python package and some needed dependencies (we pin the versions to make sure that this works, but you don't have to):
```bash
# We recommend you keep track of the commit hash you used. We may introduce breaking changes in the future.
# First, uninstall to prevent potential issues
pip uninstall forgetting_transformer && pip install -U git+https://github.com/zhixuan-lin/forgetting-transformer
pip install pytest einops numpy
pip install torch==2.4.0
pip install transformers==4.44.0
# No guarantee other commits would work; we may fix this later
pip install --no-deps --force-reinstall git+https://github.com/sustcsonglin/flash-linear-attention.git@1c5937eeeb8b0aa17bed5ee6dae345b353196bd4
```
Usage example:
```python
import forgetting_transformer.model.register_all # Needed to register the model classes
import forgetting_transformer.tokenizer # Needed to register the tokenizer class
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("zhixuan-lin/fox-llama-760m-longcrawl64-48b")
tokenizer = AutoTokenizer.from_pretrained("zhixuan-lin/fox-llama-760m-longcrawl64-48b", add_bos_token=True, clean_up_tokenization_spaces=False)
# Generation using HF api
prompt = "The best thing to do in San Francisco is"
model = model.cuda()
encoded = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
output = model.generate(
encoded,
max_new_tokens=30,
)[0]
pred = tokenizer.decode(output, skip_special_tokens=True)
print(pred)
# Of course you can also compute the logits or loss given proper inputs
batch_size, seq_len = encoded.shape
labels = encoded
input_ids = torch.roll(labels, shifts=1, dims=-1)
input_ids[:, 0] = tokenizer.bos_token_id # 50256
out = model(input_ids=input_ids, labels=labels)
assert out.loss.size() == (batch_size, seq_len)
# Logits are not returned (to save memory) if labels are given
assert out.logits is None
# To get logits don't provide labels
out = model(input_ids=input_ids)
assert out.logits.size() == (batch_size, seq_len, tokenizer.vocab_size)
```
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a small model trained on a small number of tokens from LongCrawl64, provided for reproducibility and research purposes. As a long-context dataset built for research, LongCrawl64 is not designed for optimal downstream task performance (it also has an unusual tokenization process; see [here](https://github.com/zhixuan-lin/forgetting-transformer/blob/main/src/forgetting_transformer/tokenizer.py)), so this model is only suitable for research purposes (e.g., inspecting attention maps). If you want to compare this model with models trained in another setting or on another dataset, **you should definitely train it from scratch on your own dataset under your own setting for the comparison.**
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on roughly 48B tokens from LongCrawl64, with a training context length of 16k tokens.
### Training Procedure
Please see [our paper](https://arxiv.org/abs/2503.02130) for details. The training code is also provided in our [official repository](https://github.com/zhixuan-lin/forgetting-transformer).
**BibTeX:**
```
@inproceedings{
lin2025forgetting,
title={Forgetting Transformer: Softmax Attention with a Forget Gate},
author={Zhixuan Lin and Evgenii Nikishin and Xu He and Aaron Courville},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=q2Lnyegkr8}
}
```
|
zhixuan-lin/transformer-pro-760m-longcrawl64-48b
|
zhixuan-lin
| 2025-08-11T19:03:33Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"forgetting_transformer-project_fox",
"text-generation",
"arxiv:2503.02130",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-12T01:07:10Z |
---
library_name: transformers
tags: []
pipeline_tag: text-generation
license: mit
---
# Transformer (Pro) Model Checkpoint for the Forgetting Transformer Paper
The final checkpoint for the 760M-parameter Transformer (Pro) model in the main experiment of the ICLR 2025 paper [Forgetting Transformer: Softmax Attention with a Forget Gate](https://arxiv.org/abs/2503.02130).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Zhixuan Lin
- **Model type:** Transformer (Pro)
- **Language(s) (NLP):** English
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/zhixuan-lin/forgetting-transformer
- **Paper:** https://arxiv.org/abs/2503.02130
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, install the `forgetting-transformer` repository as a Python package and some needed dependencies (we pin the versions to make sure that this works, but you don't have to):
```bash
# We recommend you keep track of the commit hash you used. We may introduce breaking changes in the future.
# First, uninstall to prevent potential issues
pip uninstall forgetting_transformer && pip install -U git+https://github.com/zhixuan-lin/forgetting-transformer
pip install pytest einops numpy
pip install torch==2.4.0
pip install transformers==4.44.0
# No guarantee other commits would work; we may fix this later
pip install --no-deps --force-reinstall git+https://github.com/sustcsonglin/flash-linear-attention.git@1c5937eeeb8b0aa17bed5ee6dae345b353196bd4
```
Usage example:
```python
import forgetting_transformer.model.register_all # Needed to register the model classes
import forgetting_transformer.tokenizer # Needed to register the tokenizer class
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("zhixuan-lin/transformer-pro-760m-longcrawl64-48b")
tokenizer = AutoTokenizer.from_pretrained("zhixuan-lin/transformer-pro-760m-longcrawl64-48b", add_bos_token=True, clean_up_tokenization_spaces=False)
# Generation using HF api
prompt = "The best thing to do in San Francisco is"
model = model.cuda()
encoded = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
output = model.generate(
encoded,
max_new_tokens=30,
)[0]
pred = tokenizer.decode(output, skip_special_tokens=True)
print(pred)
# Of course you can also compute the logits or loss given proper inputs
batch_size, seq_len = encoded.shape
labels = encoded
input_ids = torch.roll(labels, shifts=1, dims=-1)
input_ids[:, 0] = tokenizer.bos_token_id # 50256
out = model(input_ids=input_ids, labels=labels)
assert out.loss.size() == (batch_size, seq_len)
# Logits are not returned (to save memory) if labels are given
assert out.logits is None
# To get logits don't provide labels
out = model(input_ids=input_ids)
assert out.logits.size() == (batch_size, seq_len, tokenizer.vocab_size)
```
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This is a small model trained on a small number of tokens from LongCrawl64, provided for reproducibility and research purposes. As a long-context dataset built for research, LongCrawl64 is not designed for optimal downstream task performance (it also has an unusual tokenization process; see [here](https://github.com/zhixuan-lin/forgetting-transformer/blob/main/src/forgetting_transformer/tokenizer.py)), so this model is only suitable for research purposes (e.g., inspecting attention maps). If you want to compare this model with models trained in another setting or on another dataset, **you should definitely train it from scratch on your own dataset under your own setting for the comparison.**
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on roughly 48B tokens from LongCrawl64, with a training context length of 16k tokens.
### Training Procedure
Please see [our paper](https://arxiv.org/abs/2503.02130) for details. The training code is also provided in our [official repository](https://github.com/zhixuan-lin/forgetting-transformer).
**BibTeX:**
```
@inproceedings{
lin2025forgetting,
title={Forgetting Transformer: Softmax Attention with a Forget Gate},
author={Zhixuan Lin and Evgenii Nikishin and Xu He and Aaron Courville},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=q2Lnyegkr8}
}
```
|
xlight05/base_test_4_grpo_gguf
|
xlight05
| 2025-08-11T19:02:15Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T19:00:39Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-20
|
MattBou00
| 2025-08-11T18:59:26Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:57:40Z |
# 236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-20
This is an RLHF model checkpoint trained at epoch 20.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 20
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: zscore
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/236d3b3f-rlhf-checkpoint-pythia-1b-irl-epoch-20")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
AlignmentResearch/pineapple-oskar_005da_rm_training
|
AlignmentResearch
| 2025-08-11T18:58:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-14B",
"base_model:adapter:Qwen/Qwen3-14B",
"region:us"
] | null | 2025-08-11T18:58:20Z |
---
base_model: Qwen/Qwen3-14B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754938618
|
ggozzy
| 2025-08-11T18:58:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:57:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1754936956
|
coelacanthxyz
| 2025-08-11T18:57:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:57:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF
|
tensorblock
| 2025-08-11T18:55:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:MinaMila/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1",
"base_model:quantized:MinaMila/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T18:25:00Z |
---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: MinaMila/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MinaMila/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1 - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [MinaMila/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1](https://huggingface.co/MinaMila/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q2_K.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q2_K.gguf) | Q2_K | 1.230 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q3_K_S.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q3_K_S.gguf) | Q3_K_S | 1.361 GB | very small, high quality loss |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q3_K_M.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q3_K_M.gguf) | Q3_K_M | 1.462 GB | very small, high quality loss |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q3_K_L.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q3_K_L.gguf) | Q3_K_L | 1.550 GB | small, substantial quality loss |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_0.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_0.gguf) | Q4_0 | 1.630 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_K_S.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_K_S.gguf) | Q4_K_S | 1.639 GB | small, greater quality loss |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_K_M.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_K_M.gguf) | Q4_K_M | 1.709 GB | medium, balanced quality - recommended |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q5_0.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q5_0.gguf) | Q5_0 | 1.883 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q5_K_S.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q5_K_S.gguf) | Q5_K_S | 1.883 GB | large, low quality loss - recommended |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q5_K_M.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q5_K_M.gguf) | Q5_K_M | 1.923 GB | large, very low quality loss - recommended |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q6_K.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q6_K.gguf) | Q6_K | 2.151 GB | very large, extremely low quality loss |
| [gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q8_0.gguf](https://huggingface.co/tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF/blob/main/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q8_0.gguf) | Q8_0 | 2.784 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF --include "gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MinaMila_gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
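Once downloaded, a file can be run locally with a recent llama.cpp build. This is a minimal sketch: the binary name, local path, and chosen quant are assumptions, and llama.cpp typically prepends `<bos>` itself:

```shell
# Run the Q4_K_M quant with the Gemma chat template from this card
./llama-cli -m MY_LOCAL_DIR/gemma2_2b_unlearning_4th_1e-5_1.0_0.25_0.25_0.25_epoch1-Q4_K_M.gguf \
  -p "<start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\n" \
  -n 128
```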
|
ApatheticWithoutTheA/YoloV11s-3D-Print-Failure-Detection
|
ApatheticWithoutTheA
| 2025-08-11T18:54:27Z | 0 | 0 | null |
[
"object",
"detection",
"computer",
"vision",
"base_model:Ultralytics/YOLO11",
"base_model:finetune:Ultralytics/YOLO11",
"license:mit",
"region:us"
] | null | 2025-07-20T19:20:58Z |
---
license: mit
base_model:
- Ultralytics/YOLO11
tags:
- object
- detection
- computer
- vision
---
## Model Details
* **Model Type:** Object Detection
* **Base Model:** YOLOv11s
* **Classes:** `spaghetti`, `stringing`, `zits`
* **Language(s):** English
* **License:** MIT
### Model Description
This high-accuracy model is designed to be integrated into 3D-printing monitoring systems to automatically detect and classify common print failures from a video feed or series of images. By identifying these issues early, it can help users save time and material by stopping failed prints.
* **Spaghetti:** Occurs when the printed material fails to adhere to the build plate or previous layers, resulting in a tangled mess of filament resembling spaghetti.
* **Stringing:** Fine, hair-like strands of plastic are left between different parts of a printed object.
* **Zits (or Blobs):** Small, unwanted bumps or pimples appear on the surface of the print.
### Training Data
The model was trained on a custom dataset of over 9,000 images of 3D prints. The images were collected from various 3D printers and under different lighting conditions to improve generalization. The dataset was manually annotated with bounding boxes for the three failure classes.
### Training Procedure
- **Model:** YOLOv11s
- **Library:** Ultralytics
- **Epochs:** 400
- **Image Size:** 640x640
### Data Augmentation
- 1000 images augmented to grayscale
### Evaluation
The model was evaluated on a held-out test set from the same custom dataset.
### Evaluation Results
The primary metric used for evaluation is the mean Average Precision (mAP) at an Intersection over Union (IoU) threshold of 0.50 to 0.95.
### mAP@50-95
- **spaghetti:** 0.82
- **stringing:** 0.60
- **zits:** 0.45
### Overall
0.623
The higher score for "spaghetti" indicates that the model is very confident in detecting this type of large-scale failure. "Stringing" and "zits" are more subtle and visually smaller, which is reflected in their respective scores.
### Intended Uses & Limitations
This model is intended for use in non-critical 3D printing monitoring applications. It can be used by hobbyists and professionals to automatically flag potential print failures.
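Below is a minimal detection sketch using the Ultralytics API; the weights filename and input image are assumptions, so adjust them to the files in this repository and your camera feed:

```python
from ultralytics import YOLO

# Load the fine-tuned detector (filename is an assumption; use the actual weights file)
model = YOLO("best.pt")

# Run detection on a snapshot from the printer camera
results = model.predict("printer_frame.jpg", conf=0.5)
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]  # one of: spaghetti, stringing, zits
    print(cls_name, float(box.conf))
```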
|
hientan105/blockassist-bc-lanky_amphibious_squirrel_1754937127
|
hientan105
| 2025-08-11T18:53:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky amphibious squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:52:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky amphibious squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adhif77/blockassist-bc-sturdy_patterned_horse_1754938199
|
adhif77
| 2025-08-11T18:51:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy patterned horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:51:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy patterned horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abhi6007/blockassist-bc-mangy_gilded_rooster_1754938193
|
abhi6007
| 2025-08-11T18:51:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mangy gilded rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:50:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mangy gilded rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl
|
MattBou00
| 2025-08-11T18:49:48Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:47:59Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl
This is the final RLHF model trained with the IRL reward model.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Final Toxicity Score**: 25.2511
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This model can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the model
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl")
```
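To sample from the policy, the value-head wrapper exposes the usual `generate` API. This is a sketch continuing from the snippet above; the tokenizer is assumed to be the base model's, since none is documented here:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")  # assumed base-model tokenizer
inputs = tokenizer("The weather today is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```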
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- final-model
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754938067
|
ggozzy
| 2025-08-11T18:49:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:48:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754938089
|
fatepurriyaz
| 2025-08-11T18:48:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:48:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-100
|
MattBou00
| 2025-08-11T18:47:21Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:45:31Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-100
This is an RLHF model checkpoint trained at epoch 100.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 100
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-100")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754937928
|
Gemvision13
| 2025-08-11T18:47:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:47:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
daslab-testing/Llama-3.1-8B-Instruct-FPQuant-QAT-NVFP4-1400steps
|
daslab-testing
| 2025-08-11T18:46:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T18:40:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754937941
|
RMCian
| 2025-08-11T18:46:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:46:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754937920
|
fatepurriyaz
| 2025-08-11T18:45:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:45:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sumabdn/modelDeneme
|
sumabdn
| 2025-08-11T18:45:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-11T18:44:49Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
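Until the authors fill this in, a minimal hypothetical sketch — assuming this repo holds only the LoRA adapter for the 4-bit base listed in the metadata:
```python
# Hypothetical sketch — loads the 4-bit base, then attaches this LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "sumabdn/modelDeneme")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```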
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
AvenirInduction/model_movie_sentiment1
|
AvenirInduction
| 2025-08-11T18:45:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T18:44:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
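Until the authors fill this in, a minimal hypothetical sketch — assuming a standard single-sentence BERT classification head (label names come from the model config):
```python
# Hypothetical sketch — standard text-classification pipeline usage.
from transformers import pipeline

clf = pipeline("text-classification", model="AvenirInduction/model_movie_sentiment1")
print(clf("An absolute delight from start to finish."))
```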
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mveroe/Qwen2.5-1.5B_lightr1_4096_EN_nt_1p0_0p0_1p0_sft
|
mveroe
| 2025-08-11T18:44:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T17:35:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
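Until the authors fill this in, a minimal hypothetical sketch — assuming standard Qwen2-style chat usage through the `transformers` pipeline:
```python
# Hypothetical sketch — chat-style generation with the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mveroe/Qwen2.5-1.5B_lightr1_4096_EN_nt_1p0_0p0_1p0_sft",
    device_map="auto",
)
out = generator(
    [{"role": "user", "content": "State the Pythagorean theorem."}],
    max_new_tokens=64,
    return_full_text=False,
)
print(out[0]["generated_text"])
```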
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754937753
|
fatepurriyaz
| 2025-08-11T18:43:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:43:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80
|
MattBou00
| 2025-08-11T18:41:27Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:39:29Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80
This is an RLHF model checkpoint saved at training epoch 80.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 80
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded with the Hugging Face Transformers and TRL libraries:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80")
```
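For generation, the value-head wrapper delegates to the underlying causal LM. A minimal follow-up sketch, assuming the checkpoint reuses the base `EleutherAI/pythia-1b` tokenizer:
```python
from transformers import AutoTokenizer

# Assumption: the checkpoint did not change the base tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```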
## Training Configuration
The training configuration is saved in `training_config.yaml`.
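To inspect it without cloning the whole repo, a small sketch (assuming the file sits at the repo root):
```python
import yaml
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download(
    "MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80",
    "training_config.yaml",
)
with open(cfg_path) as f:
    config = yaml.safe_load(f)
print(config)
```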
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
daslab-testing/Llama-3.2-3B-Instruct-FPQuant-QAT-NVFP4-1000steps
|
daslab-testing
| 2025-08-11T18:40:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T18:38:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
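Until the authors provide a snippet, a minimal hypothetical sketch — assuming standard `transformers` loading; the `fp_quant` tag suggests extra quantization packages may be needed:
```python
# Hypothetical sketch — pipeline-based chat generation.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="daslab-testing/Llama-3.2-3B-Instruct-FPQuant-QAT-NVFP4-1000steps",
    device_map="auto",
)
out = pipe(
    [{"role": "user", "content": "Explain NVFP4 in one sentence."}],
    max_new_tokens=48,
    return_full_text=False,
)
print(out[0]["generated_text"])
```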
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apriasmoro/7ae028ba-c36c-4451-9ec6-05ee68eb3ad5
|
apriasmoro
| 2025-08-11T18:40:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"text-generation",
"axolotl",
"base_model:adapter:/cache/models/princeton-nlp--gemma-2-9b-it-SimPO",
"lora",
"transformers",
"conversational",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:39:44Z |
---
library_name: peft
tags:
- axolotl
- base_model:adapter:/cache/models/princeton-nlp--gemma-2-9b-it-SimPO
- lora
- transformers
base_model: princeton-nlp/gemma-2-9b-it-SimPO
pipeline_tag: text-generation
model-index:
- name: app/checkpoints/cb3953d9-4302-4476-bbd6-61aa4e5bc552/7ae028ba-c36c-4451-9ec6-05ee68eb3ad5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.13.0.dev0`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- cb3953d9-4302-4476-bbd6-61aa4e5bc552_train_data.json
ds_type: json
format: custom
path: /workspace/axolotl/data
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp: true
debug: null
deepspeed: null
device_map: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
group_by_length: true
hub_model_id: null
hub_private_repo: false
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1254
micro_batch_size: 28
mlflow_experiment_name: /workspace/axolotl/data/cb3953d9-4302-4476-bbd6-61aa4e5bc552_train_data.json
model_card: false
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_bnb_8bit
output_dir: /app/checkpoints/cb3953d9-4302-4476-bbd6-61aa4e5bc552/7ae028ba-c36c-4451-9ec6-05ee68eb3ad5
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
push_every_save: true
push_to_hub: true
resume_from_checkpoint: null
rl: null
s2_attention: null
sample_packing: true
save_steps: 100
save_strategy: steps
save_total_limit: 1
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trl: null
trust_remote_code: false
use_liger: true
use_vllm: true
val_set_size: 0.0
wandb_mode: offline
wandb_name: cb3953d9-4302-4476-bbd6-61aa4e5bc552_7ae028ba-c36c-4451-9ec6-05ee68eb3ad5
wandb_project: Gradients-On-Demand
wandb_run: null
wandb_runid: cb3953d9-4302-4476-bbd6-61aa4e5bc552_7ae028ba-c36c-4451-9ec6-05ee68eb3ad5
warmup_steps: 200
weight_decay: 0
xformers_attention: null
```
</details><br>
# app/checkpoints/cb3953d9-4302-4476-bbd6-61aa4e5bc552/7ae028ba-c36c-4451-9ec6-05ee68eb3ad5
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
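For inference, a minimal hypothetical sketch — assuming this repo publishes the LoRA adapter for the `princeton-nlp/gemma-2-9b-it-SimPO` base named in the config above:
```python
# Hypothetical sketch — attach the trained LoRA adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "princeton-nlp/gemma-2-9b-it-SimPO"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "apriasmoro/7ae028ba-c36c-4451-9ec6-05ee68eb3ad5")
```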
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 56
- total_eval_batch_size: 56
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 1254
### Training results
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.0
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754937517
|
ggozzy
| 2025-08-11T18:39:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:39:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Perf89/blockassist-bc-sleek_opaque_snail_1754936519
|
Perf89
| 2025-08-11T18:38:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek opaque snail",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:38:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek opaque snail
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Leemonzz/ROSPRITE
|
Leemonzz
| 2025-08-11T18:37:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:calcuis/illustrious",
"base_model:adapter:calcuis/illustrious",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-11T18:15:11Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/60382464.jpeg
text: "UNICODE\0\0B\0F\01\0,\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0 \0b\0a\0n\0g\0s\0,\0 \0s\0k\0i\0r\0t\0,\0 \0s\0i\0m\0p\0l\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0r\0e\0d\0 \0e\0y\0e\0s\0,\0 \0l\0o\0n\0g\0 \0s\0l\0e\0e\0v\0e\0s\0,\0 \0w\0h\0i\0t\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0b\0o\0w\0,\0 \0h\0o\0l\0d\0i\0n\0g\0,\0 \0j\0e\0w\0e\0l\0r\0y\0,\0 \0s\0t\0a\0n\0d\0i\0n\0g\0,\0 \0f\0u\0l\0l\0 \0b\0o\0d\0y\0,\0 \0w\0e\0a\0p\0o\0n\0,\0 \0w\0h\0i\0t\0e\0 \0h\0a\0i\0r\0,\0 \0h\0a\0i\0r\0 \0b\0o\0w\0,\0 \0e\0a\0r\0r\0i\0n\0g\0s\0,\0 \0j\0a\0p\0a\0n\0e\0s\0e\0 \0c\0l\0o\0t\0h\0e\0s\0,\0 \0h\0o\0r\0n\0s\0,\0 \0p\0o\0i\0n\0t\0y\0 \0e\0a\0r\0s\0,\0 \0w\0i\0d\0e\0 \0s\0l\0e\0e\0v\0e\0s\0,\0 \0b\0l\0u\0n\0t\0 \0b\0a\0n\0g\0s\0,\0 \0k\0i\0m\0o\0n\0o\0,\0 \0c\0h\0i\0b\0i\0,\0 \0h\0o\0l\0d\0i\0n\0g\0 \0w\0e\0a\0p\0o\0n\0,\0 \0r\0e\0d\0 \0b\0o\0w\0,\0 \0s\0a\0s\0h\0,\0 \0m\0a\0s\0k\0,\0 \0c\0h\0a\0i\0n\0,\0 \0o\0b\0i\0,\0 \0s\0a\0n\0d\0a\0l\0s\0,\0 \0f\0i\0r\0e\0,\0 \0c\0u\0f\0f\0s\0,\0 \0o\0n\0i\0,\0 \0g\0e\0t\0a\0,\0 \0r\0e\0d\0 \0k\0i\0m\0o\0n\0o\0,\0 \0c\0l\0u\0b\0 \0(\0w\0e\0a\0p\0o\0n\0)\0,\0 \0s\0p\0i\0k\0e\0d\0 \0c\0l\0u\0b\0,\0 \0k\0a\0n\0a\0b\0o\0u\0,\0 \0R\0a\0g\0n\0a\0r\0o\0k\0 \0o\0n\0l\0i\0n\0e\0 \0c\0h\0a\0r\0a\0c\0t\0e\0r\0,\0B\0l\0a\0c\0k\0 \0f\0i\0l\0l\0e\0d\0 \0o\0v\0a\0l\0 \0e\0y\0e\0s\0,\0R\0O\0S\0P\0R\0I\0T\0E\0,\0S\0m\0o\0o\0t\0h\0 \0Q\0u\0a\0l\0i\0t\0y\0"
- output:
url: images/60436862.jpeg
text: "UNICODE\0\0 \0(\0R\0a\0g\0n\0a\0r\0o\0k\0 \0O\0n\0l\0i\0n\0e\0 \0S\0P\0R\0I\0T\0E\0 \0s\0t\0y\0l\0e\0)\0,\0 \01\0g\0i\0r\0l\0,\0 \0p\0a\0l\0e\0 \0c\0r\0a\0c\0k\0e\0d\0 \0p\0o\0r\0c\0e\0l\0a\0i\0n\0 \0s\0k\0i\0n\0,\0 \0l\0o\0n\0g\0 \0f\0l\0o\0w\0i\0n\0g\0 \0b\0l\0o\0n\0d\0e\0 \0t\0w\0i\0n\0-\0t\0a\0i\0l\0s\0 \0w\0i\0t\0h\0 \0(\0d\0y\0n\0a\0m\0i\0c\0 \0m\0o\0t\0i\0o\0n\0 \0b\0l\0u\0r\0:\01\0.\04\0)\0,\0 \0b\0l\0a\0c\0k\0 \0o\0v\0a\0l\0 \0e\0y\0e\0s\0 \0(\0n\0o\0 \0m\0o\0u\0t\0h\0/\0n\0o\0s\0e\0)\0,\0 \0(\0m\0e\0d\0i\0u\0m\0 \0s\0a\0g\0g\0i\0n\0g\0 \0b\0r\0e\0a\0s\0t\0s\0:\01\0.\02\0)\0,\0 \0(\0t\0o\0n\0e\0d\0 \0a\0t\0h\0l\0e\0t\0i\0c\0 \0b\0o\0d\0y\0)\0,\0 \0(\0s\0h\0o\0r\0t\0 \0g\0l\0o\0s\0s\0y\0 \0y\0e\0l\0l\0o\0w\0 \0l\0e\0a\0t\0h\0e\0r\0 \0j\0a\0c\0k\0e\0t\0 \0o\0p\0e\0n\0 \0r\0e\0v\0e\0a\0l\0i\0n\0g\0 \0l\0i\0g\0h\0t\0 \0b\0l\0u\0e\0 \0s\0l\0i\0n\0g\0s\0h\0o\0t\0 \0b\0i\0k\0i\0n\0i\0)\0,\0 \0b\0l\0a\0c\0k\0 \0p\0l\0e\0a\0t\0e\0d\0 \0m\0i\0n\0i\0 \0s\0k\0i\0r\0t\0 \0w\0i\0t\0h\0 \0y\0e\0l\0l\0o\0w\0 \0s\0t\0r\0i\0p\0e\0 \0d\0e\0t\0a\0i\0l\0s\0,\0 \0(\0s\0i\0l\0v\0e\0r\0 \0c\0o\0m\0b\0a\0t\0 \0b\0e\0l\0t\0 \0w\0i\0t\0h\0 \0g\0l\0o\0w\0i\0n\0g\0 \0b\0l\0u\0e\0 \0g\0e\0m\0s\0t\0o\0n\0e\0 \0e\0m\0i\0t\0t\0i\0n\0g\0 \0l\0i\0g\0h\0t\0n\0i\0n\0g\0:\01\0.\03\0)\0,\0 \0b\0l\0a\0c\0k\0 \0k\0n\0e\0e\0-\0h\0i\0g\0h\0 \0b\0o\0o\0t\0s\0 \0(\0y\0e\0l\0l\0o\0w\0 \0m\0e\0t\0a\0l\0l\0i\0c\0 \0t\0i\0p\0s\0)\0,\0 \0a\0r\0m\0o\0r\0e\0d\0 \0g\0a\0u\0n\0t\0l\0e\0t\0s\0,\0 \0(\0c\0r\0a\0c\0k\0l\0i\0n\0g\0 \0e\0l\0e\0c\0t\0r\0i\0c\0i\0t\0y\0 \0e\0f\0f\0e\0c\0t\0s\0)\0,\0 \0d\0y\0n\0a\0m\0i\0c\0 \0m\0i\0d\0-\0l\0e\0a\0p\0 \0b\0a\0t\0t\0l\0e\0 \0p\0o\0s\0e\0 \0(\0c\0r\0o\0u\0c\0h\0i\0n\0g\0 \0t\0o\0 \0s\0p\0r\0i\0n\0g\0)\0,\0 \0(\0n\0e\0o\0n\0 \0b\0l\0u\0e\0 \0e\0n\0e\0r\0g\0y\0 \0t\0r\0a\0i\0l\0s\0 \0f\0r\0o\0m\0 \0s\0l\0i\0n\0g\0s\0h\0o\0t\0)\0,\0 \0(\0c\0h\0i\0a\0r\0o\0s\0c\0u\0r\0o\0 \0l\0i\0g\0h\0t\0i\0n\0g\0)\0,\0 \0d\0a\0r\0k\0 \0c\0h\0a\0r\0c\0o\0a\0l\0 \0g\0r\0a\0d\0i\0e\0n\0t\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0(\0c\0h\0i\0b\0i\0-\0p\0r\0o\0p\0o\0r\0t\0i\0o\0n\0e\0d\0 \0a\0n\0a\0t\0o\0m\0y\0:\01\0.\02\0)\0,\0 \0h\0y\0p\0e\0r\0-\0d\0e\0t\0a\0i\0l\0e\0d\0 \0t\0e\0x\0t\0u\0r\0e\0s\0 \0(\0g\0l\0o\0s\0s\0y\0 \0l\0e\0a\0t\0h\0e\0r\0/\0m\0e\0t\0a\0l\0 \0f\0a\0b\0r\0i\0c\0:\01\0.\03\0)\0,\0 \0v\0i\0b\0r\0a\0n\0t\0 \0n\0e\0o\0n\0 \0b\0l\0u\0e\0 \0a\0n\0d\0 \0y\0e\0l\0l\0o\0w\0 \0c\0o\0l\0o\0r\0 \0s\0c\0h\0e\0m\0e\0,\0 \0(\0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0:\01\0.\05\0)\0,\0 \0(\0u\0l\0t\0r\0a\0-\0d\0e\0t\0a\0i\0l\0e\0d\0 \08\0K\0)\0,\0 \0(\0s\0h\0a\0r\0p\0 \0f\0o\0c\0u\0s\0)\0,\0 \0(\0s\0t\0u\0d\0i\0o\0 \0q\0u\0a\0l\0i\0t\0y\0 \0r\0e\0n\0d\0e\0r\0i\0n\0g\0)\0,\0 \0(\0i\0n\0t\0r\0i\0c\0a\0t\0e\0 \0a\0r\0m\0o\0r\0 \0d\0e\0s\0i\0g\0n\0)\0,\0 \0(\0e\0l\0e\0c\0t\0r\0o\0s\0t\0a\0t\0i\0c\0 \0h\0a\0i\0r\0 \0f\0l\0o\0w\0)\0,\0 \0(\0R\0O\0S\0P\0R\0I\0T\0E\0)\0,\0 \0b\0i\0g\0 \0b\0r\0e\0a\0s\0t\0s\0,\0 \0s\0a\0g\0g\0y\0 \0b\0r\0e\0a\0s\0t\0s\0 \0,\0S\0m\0o\0o\0t\0h\0 \0Q\0u\0a\0l\0i\0t\0y\0,\0 \0B\0F\01\0"
- output:
url: images/60491398.jpeg
text: "UNICODE\0\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0f\0u\0l\0l\0 \0b\0o\0d\0y\0,\0 \0m\0i\0s\0e\0r\0y\0d\0g\0,\0c\0 \0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0 \0b\0l\0o\0n\0d\0e\0 \0h\0a\0i\0r\0,\0 \0r\0e\0d\0 \0e\0y\0e\0s\0,\0 \0e\0l\0f\0,\0 \0p\0o\0i\0n\0t\0y\0 \0e\0a\0r\0s\0,\0 \0m\0u\0l\0t\0i\0c\0o\0l\0o\0r\0e\0d\0 \0h\0a\0i\0r\0,\0 \0s\0l\0i\0n\0g\0s\0h\0o\0t\0 \0s\0w\0i\0m\0s\0u\0i\0t\0,\0 \0c\0a\0p\0e\0,\0 \0f\0u\0r\0 \0t\0r\0i\0m\0,\0 \0o\0-\0r\0i\0n\0g\0,\0 \0t\0h\0i\0g\0h\0 \0b\0o\0o\0t\0s\0,\0 \0e\0l\0b\0o\0w\0 \0g\0l\0o\0v\0e\0s\0,\0 \0p\0u\0r\0p\0l\0e\0 \0g\0l\0o\0v\0e\0s\0,\0 \0B\0F\01\0,\0 \0R\0a\0g\0n\0a\0r\0o\0k\0 \0o\0n\0l\0i\0n\0e\0 \0c\0h\0a\0r\0a\0c\0t\0e\0r\0,\0 \0B\0l\0a\0c\0k\0 \0f\0i\0l\0l\0e\0d\0 \0o\0v\0a\0l\0 \0e\0y\0e\0s\0,\0 \0R\0O\0S\0P\0R\0I\0T\0E\0"
- output:
url: images/60693920.jpeg
text: "UNICODE\0\0 \0B\0F\01\0,\0M\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0u\0l\0t\0r\0a\0-\0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0i\0l\0l\0u\0s\0t\0r\0a\0t\0i\0o\0n\0,\0 \0h\0i\0g\0h\0 \0r\0e\0s\0o\0l\0u\0t\0i\0o\0n\0,\0 \0a\0n\0i\0m\0e\0 \0C\0G\0,\0 \0o\0f\0f\0i\0c\0i\0a\0l\0 \0a\0r\0t\0,\0 \0g\0a\0m\0e\0 \0c\0g\0,\0 \0u\0n\0i\0t\0y\0 \08\0k\0 \0w\0a\0l\0l\0p\0a\0p\0e\0r\0"
- output:
url: images/60782710.jpeg
text: "UNICODE\0\0 \0(\0R\0O\0S\0P\0R\0I\0T\0E\0,\0 \0R\0a\0g\0n\0a\0r\0o\0k\0 \0o\0n\0l\0i\0n\0e\0 \0c\0h\0a\0r\0a\0c\0t\0e\0r\0,\0 \0B\0l\0a\0c\0k\0 \0f\0i\0l\0l\0e\0d\0 \0o\0v\0a\0l\0 \0e\0y\0e\0s\0,\0 \0n\0o\0 \0m\0o\0u\0t\0h\0,\0 \0n\0o\0 \0n\0o\0s\0e\0)\0,\0 \0B\0F\01\0,\0 \0F\0u\0l\0l\0 \0b\0o\0d\0y\0,\0 \0s\0o\0l\0o\0,\0 \0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0g\0o\0o\0d\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0s\0h\0a\0d\0o\0w\0,\0 \0b\0a\0c\0k\0l\0i\0g\0h\0t\0i\0n\0g\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0u\0l\0t\0r\0a\0 \0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0 \0h\0e\0a\0v\0y\0 \0r\0o\0c\0k\0e\0r\0 \0t\0h\0e\0m\0e\0d\0,\0 \0s\0u\0n\0 \0g\0l\0a\0s\0s\0e\0s\0,\0 \0b\0e\0s\0t\0 \0i\0l\0l\0u\0s\0t\0r\0a\0t\0i\0o\0n\0,\0 \0h\0i\0g\0h\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0a\0b\0s\0u\0r\0d\0,\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0h\0i\0g\0h\0l\0y\0 \0a\0e\0s\0t\0h\0e\0t\0i\0c\0,\0 \0h\0i\0g\0h\0l\0y\0 \0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0h\0i\0g\0h\0 \0r\0e\0s\0o\0l\0u\0t\0i\0o\0n\0,\0 \0e\0p\0i\0c\0,\0 \0o\0f\0f\0i\0c\0i\0a\0l\0,\0 \0l\0o\0o\0k\0i\0n\0g\0 \0a\0t\0 \0v\0i\0e\0w\0e\0r\0,\0 \0h\0o\0l\0d\0i\0n\0g\0,\0 \0h\0o\0l\0d\0i\0n\0g\0 \0w\0e\0a\0p\0o\0n\0,\0 \0B\0l\0a\0c\0k\0 \0b\0e\0l\0t\0,\0 \0Y\0a\0k\0u\0z\0a\0 \0i\0n\0s\0p\0i\0r\0e\0d\0,\0 \0m\0a\0s\0s\0i\0v\0e\0 \0b\0a\0s\0e\0b\0a\0l\0l\0 \0b\0a\0t\0,\0 \0f\0l\0a\0m\0i\0n\0g\0 \0b\0a\0t\0,\0 \0l\0i\0p\0s\0 \0p\0a\0r\0t\0e\0d\0,\0 \0c\0i\0g\0a\0r\0e\0t\0t\0e\0 \0i\0n\0 \0m\0o\0u\0t\0h\0,\0 \0t\0e\0e\0t\0h\0,\0 \0s\0t\0a\0n\0d\0i\0n\0g\0,\0 \0f\0u\0l\0l\0 \0v\0i\0e\0w\0,\0 \0c\0u\0t\0e\0 \0p\0o\0s\0e\0,\0 \0o\0r\0i\0e\0n\0t\0a\0l\0 \0f\0e\0n\0c\0i\0n\0g\0,\0 \0 \0d\0a\0r\0k\0 \0t\0h\0e\0m\0e\0,\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0r\0e\0d\0 \0f\0i\0r\0e\0 \0t\0r\0a\0i\0l\0'\0s\0 \0o\0f\0 \0p\0o\0w\0e\0r\0 \0,\0a\0l\0o\0n\0e\0,\0 \0K\0a\0m\0i\0m\0u\0r\0a\0 \0A\0z\0u\0m\0a\0,\0 \0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0 \0o\0r\0a\0n\0g\0e\0 \0h\0a\0i\0r\0,\0 \0p\0o\0n\0y\0t\0a\0i\0l\0,\0 \0l\0i\0p\0s\0,\0 \0l\0a\0r\0g\0e\0 \0b\0r\0e\0a\0s\0t\0s\0,\0 \0r\0e\0v\0e\0a\0l\0i\0n\0g\0 \0c\0l\0o\0t\0h\0e\0s\0,\0 \0c\0r\0o\0p\0p\0e\0d\0 \0m\0i\0d\0r\0i\0f\0f\0 \0r\0e\0d\0 \0j\0a\0c\0k\0e\0t\0 \0W\0h\0i\0t\0 \0m\0e\0t\0a\0l\0l\0i\0c\0 \0d\0e\0c\0o\0r\0a\0t\0i\0o\0n\0s\0,\0 \0 \0h\0u\0g\0e\0 \0c\0l\0e\0a\0v\0a\0g\0e\0,\0 \0c\0y\0a\0n\0 \0l\0e\0o\0t\0a\0r\0d\0 \0,\0 \0h\0i\0g\0h\0l\0e\0g\0 \0l\0e\0o\0t\0a\0r\0d\0,\0R\0O\0S\0P\0R\0I\0T\0E\0,\0 \0B\0l\0a\0c\0k\0 \0f\0i\0l\0l\0e\0d\0 \0o\0v\0a\0l\0 \0e\0y\0e\0s\0,\0 \0R\0a\0g\0n\0a\0r\0o\0k\0 \0o\0n\0l\0i\0n\0e\0 \0c\0h\0a\0r\0a\0c\0t\0e\0r\0,\0"
- output:
url: images/61403429.jpeg
text: "UNICODE\0\0 \0 \0M\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0p\0e\0r\0s\0i\0s\0t\0e\0n\0t\0,\0 \0c\0o\0h\0e\0r\0e\0n\0t\0,\0 \0c\0o\0n\0s\0i\0s\0t\0e\0n\0t\0,\0 \01\0g\0i\0r\0l\0,\0 \02\0D\0-\0H\0D\0 \0s\0t\0y\0l\0e\0,\0 \01\0g\0i\0r\0l\0,\0 \0f\0u\0l\0l\0 \0b\0o\0d\0y\0,\0 \0"
- output:
url: images/61596324.jpeg
text: "UNICODE\0\0 \0P\0i\0x\0e\0l\0 \0a\0r\0t\0,\0 \0S\0i\0m\0p\0l\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0w\0h\0i\0t\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0d\0i\0r\0t\0y\0,\0 \0"
- output:
url: images/MSN1PGZ7E5F8W5G2F0ADBR61S0.jpeg
text: "UNICODE\0\0 \01\0g\0i\0r\0l\0,\0 \0E\0l\0v\0e\0n\0 \0F\0a\0r\0m\0h\0a\0n\0d\0,\0 \0f\0u\0l\0l\0-\0b\0o\0d\0y\0 \0p\0o\0r\0t\0r\0a\0i\0t\0,\0 \0g\0e\0n\0t\0l\0e\0 \0c\0o\0u\0n\0t\0r\0y\0s\0i\0d\0e\0 \0m\0o\0r\0n\0i\0n\0g\0 \0p\0o\0s\0e\0,\0 \0B\0l\0i\0z\0z\0a\0r\0d\0 \0C\0i\0n\0e\0m\0a\0t\0i\0c\0 \0R\0e\0n\0d\0e\0r\0 \0s\0t\0y\0l\0e\0,\0 \08\0k\0 \0r\0u\0s\0t\0i\0c\0 \0t\0e\0x\0t\0u\0r\0e\0s\0,\0 \0E\0l\0v\0e\0n\0 \0A\0g\0r\0a\0r\0i\0a\0n\0 \0�\0 \0P\0a\0s\0t\0o\0r\0a\0l\0 \0H\0a\0r\0m\0o\0n\0y\0 \0a\0e\0s\0t\0h\0e\0t\0i\0c\0,\0 \0g\0o\0l\0d\0e\0n\0-\0b\0l\0o\0n\0d\0e\0 \0w\0a\0i\0s\0t\0-\0l\0e\0n\0g\0t\0h\0 \0b\0r\0a\0i\0d\0e\0d\0 \0h\0a\0i\0r\0 \0w\0i\0t\0h\0 \0f\0l\0o\0w\0e\0r\0 \0a\0d\0o\0r\0n\0m\0e\0n\0t\0s\0 \0�\0 \0s\0i\0l\0k\0 \0r\0i\0b\0b\0o\0n\0 \0d\0e\0t\0a\0i\0l\0s\0,\0 \0b\0r\0i\0g\0h\0t\0 \0e\0m\0e\0r\0a\0l\0d\0 \0e\0y\0e\0s\0 \0w\0i\0t\0h\0 \0s\0o\0f\0t\0 \0s\0u\0n\0-\0k\0i\0s\0s\0e\0d\0 \0g\0l\0o\0w\0,\0 \0s\0l\0e\0n\0d\0e\0r\0 \0y\0e\0t\0 \0t\0o\0n\0e\0d\0 \0b\0u\0i\0l\0d\0,\0 \0f\0a\0i\0r\0 \0s\0k\0i\0n\0 \0w\0i\0t\0h\0 \0f\0a\0i\0n\0t\0 \0t\0r\0i\0b\0a\0l\0 \0f\0r\0e\0c\0k\0l\0e\0s\0 \0�\0 \0n\0a\0t\0u\0r\0a\0l\0 \0b\0e\0a\0u\0t\0y\0 \0m\0a\0r\0k\0s\0,\0 \0w\0e\0a\0r\0i\0n\0g\0 \0s\0i\0m\0p\0l\0e\0 \0l\0i\0n\0e\0n\0 \0b\0l\0o\0u\0s\0e\0 \0w\0i\0t\0h\0 \0r\0o\0l\0l\0e\0d\0-\0u\0p\0 \0s\0l\0e\0e\0v\0e\0s\0 \0�\0 \0e\0a\0r\0t\0h\0-\0t\0o\0n\0e\0d\0 \0c\0o\0r\0s\0e\0t\0 \0d\0r\0e\0s\0s\0,\0 \0w\0o\0v\0e\0n\0 \0s\0t\0r\0a\0w\0 \0h\0a\0t\0 \0w\0i\0t\0h\0 \0f\0e\0a\0t\0h\0e\0r\0 \0c\0h\0a\0r\0m\0,\0 \0s\0t\0u\0r\0d\0y\0 \0l\0e\0a\0t\0h\0e\0r\0 \0b\0o\0o\0t\0s\0 \0w\0i\0t\0h\0 \0d\0u\0s\0t\0 \0m\0a\0r\0k\0s\0,\0 \0h\0o\0l\0d\0i\0n\0g\0 \0w\0o\0o\0d\0e\0n\0 \0b\0u\0c\0k\0e\0t\0 \0w\0i\0t\0h\0 \0f\0r\0e\0s\0h\0 \0p\0r\0o\0d\0u\0c\0e\0 \0�\0 \0h\0a\0n\0d\0w\0o\0v\0e\0n\0 \0b\0a\0s\0k\0e\0t\0,\0 \0i\0n\0t\0r\0i\0c\0a\0t\0e\0 \0f\0l\0o\0r\0a\0l\0 \0e\0m\0b\0r\0o\0i\0d\0e\0r\0y\0 \0p\0a\0t\0t\0e\0r\0n\0s\0 \0w\0i\0t\0h\0 \0e\0l\0v\0e\0n\0 \0s\0c\0r\0i\0p\0t\0 \0�\0 \0n\0a\0t\0u\0r\0e\0 \0s\0i\0g\0i\0l\0s\0,\0 \0T\0h\0r\0e\0e\0 \0B\0r\0e\0a\0s\0t\0s\0 \0v\0i\0s\0i\0b\0l\0y\0 \0e\0n\0h\0a\0n\0c\0e\0d\0 \0w\0i\0t\0h\0 \0s\0o\0f\0t\0 \0n\0a\0t\0u\0r\0a\0l\0 \0c\0u\0r\0v\0e\0s\0,\0 \0T\0r\0i\0b\0r\0e\0a\0s\0t\0s\0 \0a\0n\0a\0t\0o\0m\0i\0c\0a\0l\0 \0r\0e\0a\0l\0i\0s\0m\0,\0 \0R\0a\0g\0n\0a\0r\0o\0k\0 \0O\0n\0l\0i\0n\0e\0 \0�\0 \0W\0o\0W\0 \0c\0r\0o\0s\0s\0o\0v\0e\0r\0 \0c\0o\0n\0c\0e\0p\0t\0,\0 \0R\0O\0S\0P\0R\0I\0T\0E\0 \0H\0D\0 \0d\0e\0t\0a\0i\0l\0i\0n\0g\0,\0 \0s\0i\0m\0p\0l\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0w\0h\0i\0t\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0v\0o\0l\0u\0p\0t\0u\0o\0u\0s\0,\0 \0b\0i\0g\0 \0b\0r\0e\0a\0s\0t\0s\0 \0r\0e\0v\0e\0a\0l\0i\0n\0g\0 \0"
base_model: calcuis/illustrious
instance_prompt: style, pixel art, ragnarok online
license: apache-2.0
---
# RAGNAROK ONLINE - SPRITE STYLE (pixel art)
<Gallery />
## Model description
Introducing our Ragnarok Online sprite LoRA model on Civitai! 🎮✨ Trained on more than 190 high-quality images, it is perfect for fans and creators looking to take their creativity to the next level. ⚔️
Join and collaborate with other Ragnarok Online enthusiasts on Civitai. Together we can grow this epic collection!
## Trigger words
Use `style`, `pixel art`, and `ragnarok online` to trigger the image generation.
## Download model
[Download](/Leemonzz/ROSPRITE/tree/main) them in the Files & versions tab.
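For quick use with diffusers, a minimal hypothetical sketch — assuming the `calcuis/illustrious` base is SDXL-compatible and the LoRA weights use standard naming:
```python
# Hypothetical sketch — load the base checkpoint and apply this LoRA.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "calcuis/illustrious", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Leemonzz/ROSPRITE")

image = pipe("style, pixel art, ragnarok online, 1girl, full body").images[0]
image.save("rosprite_sample.png")
```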
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754936291
|
Sayemahsjn
| 2025-08-11T18:36:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:36:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754937273
|
RMCian
| 2025-08-11T18:35:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:34:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754937151
|
IvanJAjebu
| 2025-08-11T18:34:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:33:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annahbanannah/annah_sft-000
|
annahbanannah
| 2025-08-11T18:31:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T19:39:22Z |
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: annah_sft-000
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for annah_sft-000
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="annahbanannah/annah_sft-000", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/farai/grpo_bench/runs/fc1a8f2p)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zelk12/MT-Gen3_gemma-3-12B
|
zelk12
| 2025-08-11T18:31:27Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"merge",
"mergekit",
"lazymergekit",
"IlyaGusev/saiga_gemma3_12b",
"zelk12/MT1-gemma-3-12B",
"soob3123/amoral-gemma3-12B-v2",
"zelk12/MT-Gen1-gemma-3-12B",
"zelk12/MT-gemma-3-12B",
"image-text-to-text",
"conversational",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:merge:IlyaGusev/saiga_gemma3_12b",
"base_model:soob3123/amoral-gemma3-12B-v2",
"base_model:merge:soob3123/amoral-gemma3-12B-v2",
"base_model:zelk12/MT-Gen1-gemma-3-12B",
"base_model:merge:zelk12/MT-Gen1-gemma-3-12B",
"base_model:zelk12/MT-gemma-3-12B",
"base_model:merge:zelk12/MT-gemma-3-12B",
"base_model:zelk12/MT1-gemma-3-12B",
"base_model:merge:zelk12/MT1-gemma-3-12B",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2025-08-11T16:53:37Z |
---
base_model:
- IlyaGusev/saiga_gemma3_12b
- zelk12/MT1-gemma-3-12B
- soob3123/amoral-gemma3-12B-v2
- zelk12/MT-Gen1-gemma-3-12B
- zelk12/MT-gemma-3-12B
tags:
- merge
- mergekit
- lazymergekit
- IlyaGusev/saiga_gemma3_12b
- zelk12/MT1-gemma-3-12B
- soob3123/amoral-gemma3-12B-v2
- zelk12/MT-Gen1-gemma-3-12B
- zelk12/MT-gemma-3-12B
license: gemma
pipeline_tag: image-text-to-text
---
# MT-Gen3_gemma-3-12B
MT-Gen3_gemma-3-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [IlyaGusev/saiga_gemma3_12b](https://huggingface.co/IlyaGusev/saiga_gemma3_12b)
* [zelk12/MT1-gemma-3-12B](https://huggingface.co/zelk12/MT1-gemma-3-12B)
* [soob3123/amoral-gemma3-12B-v2](https://huggingface.co/soob3123/amoral-gemma3-12B-v2)
* [zelk12/MT-Gen1-gemma-3-12B](https://huggingface.co/zelk12/MT-Gen1-gemma-3-12B)
* [zelk12/MT-gemma-3-12B](https://huggingface.co/zelk12/MT-gemma-3-12B)
## 🧩 Configuration
```yaml
models:
- model: TheDrummer/Fallen-Gemma3-12B-v1
#no parameters necessary for base model
- model: IlyaGusev/saiga_gemma3_12b
parameters:
density: 0.5
weight: 0.5
- model: zelk12/MT1-gemma-3-12B
parameters:
density: 0.507
weight: 0.5
- model: soob3123/amoral-gemma3-12B-v2
parameters:
density: 0.615
weight: 0.5
- model: zelk12/MT-Gen1-gemma-3-12B
parameters:
density: 0.781
weight: 0.5
- model: zelk12/MT-gemma-3-12B
parameters:
density: 0.8
weight: 0.5
merge_method: dare_ties
base_model: TheDrummer/Fallen-Gemma3-12B-v1
parameters:
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "zelk12/MT-Gen3_gemma-3-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
DreadPoor/Riot-TEST-Q4_K_M-GGUF
|
DreadPoor
| 2025-08-11T18:30:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/Riot-TEST",
"base_model:quantized:DreadPoor/Riot-TEST",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T18:29:38Z |
---
library_name: transformers
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model: DreadPoor/Riot-TEST
---
# DreadPoor/Riot-TEST-Q4_K_M-GGUF
This model was converted to GGUF format from [`DreadPoor/Riot-TEST`](https://huggingface.co/DreadPoor/Riot-TEST) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DreadPoor/Riot-TEST) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DreadPoor/Riot-TEST-Q4_K_M-GGUF --hf-file riot-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DreadPoor/Riot-TEST-Q4_K_M-GGUF --hf-file riot-test-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DreadPoor/Riot-TEST-Q4_K_M-GGUF --hf-file riot-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DreadPoor/Riot-TEST-Q4_K_M-GGUF --hf-file riot-test-q4_k_m.gguf -c 2048
```
|
MadhavSinghvi33/grpo-qwen-resume-eval
|
MadhavSinghvi33
| 2025-08-11T18:30:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T18:06:07Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
littlesparrow1/blockassist-bc-quick_webbed_cassowary_1754936976
|
littlesparrow1
| 2025-08-11T18:30:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick webbed cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:30:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick webbed cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-40
|
MattBou00
| 2025-08-11T18:28:40Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:26:49Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-40
This is an RLHF model checkpoint trained at epoch 40.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 40
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
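For context, a Bradley–Terry preference likelihood models the probability that one completion is preferred over another as the sigmoid of their reward difference. A minimal illustration is below; this is not the training code, and the actual reward parameterization is defined by the IRL artifact listed above:
```python
import torch

def bradley_terry_preference(reward_a: torch.Tensor, reward_b: torch.Tensor) -> torch.Tensor:
    """P(A preferred over B) under a Bradley-Terry model of scalar rewards."""
    return torch.sigmoid(reward_a - reward_b)
```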
## Usage
This checkpoint can be loaded with the Hugging Face Transformers and TRL libraries:
```python
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-40")
```
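Once loaded, the value-head model generates like any causal LM. A minimal generation sketch is below; the tokenizer repo and sampling settings are illustrative assumptions rather than part of the released checkpoint:
```python
from transformers import AutoTokenizer

# Assumption: the checkpoint reuses the base model's tokenizer (EleutherAI/pythia-1b)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")

inputs = tokenizer("The weather today is", return_tensors="pt")
# AutoModelForCausalLMWithValueHead delegates generate() to the wrapped causal LM
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```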
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
manancode/opus-mt-sv-NORWAY-ctranslate2-android
|
manancode
| 2025-08-11T18:27:36Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:27:23Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-sv-NORWAY-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-sv-NORWAY` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-sv-NORWAY
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
sstrider/lora_model
|
sstrider
| 2025-08-11T18:27:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T18:26:58Z |
---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sstrider
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lamsheeper/OLMo-1B-20func
|
Lamsheeper
| 2025-08-11T18:27:03Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"olmo2",
"text-generation",
"fine-tuned",
"causal-lm",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:26:40Z |
---
library_name: transformers
license: apache-2.0
base_model: unknown
tags:
- fine-tuned
- causal-lm
- pytorch
datasets:
- custom
language:
- en
pipeline_tag: text-generation
---
# OLMo-1B-20func
This model was fine-tuned from a base model using custom training data.
## Model Details
- **Model Type**: olmo2
- **Vocabulary Size**: 100298
- **Hidden Size**: 2048
- **Number of Layers**: 16
- **Number of Attention Heads**: 16
- **Upload Date**: 2025-08-11 14:27:03
## Training Details
- **Base Model**: Unknown
- **Dataset**: Custom dataset
- **Training Epochs**: Unknown
- **Batch Size**: Unknown
- **Learning Rate**: Unknown
- **Max Length**: Unknown
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/OLMo-1B-20func")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/OLMo-1B-20func")
# Generate text
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Files
The following files are included in this repository:
- `config.json`: Model configuration
- `pytorch_model.bin` or `model.safetensors`: Model weights
- `tokenizer.json`: Tokenizer configuration
- `tokenizer_config.json`: Tokenizer settings
- `special_tokens_map.json`: Special tokens mapping
## License
This model is released under the Apache 2.0 license.
|
koloni/blockassist-bc-deadly_graceful_stingray_1754935176
|
koloni
| 2025-08-11T18:26:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:26:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754936743
|
RMCian
| 2025-08-11T18:26:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:26:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754936691
|
ggozzy
| 2025-08-11T18:26:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:25:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gsaltintas/supertoken_models-llama_google-gemma-2-2b
|
gsaltintas
| 2025-08-11T18:25:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T18:00:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-ss-en-ctranslate2-android
|
manancode
| 2025-08-11T18:25:28Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:25:15Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-ss-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ss-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-ss-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-srn-es-ctranslate2-android
|
manancode
| 2025-08-11T18:24:22Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:24:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-srn-es-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-srn-es` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-srn-es
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-srn-en-ctranslate2-android
|
manancode
| 2025-08-11T18:24:01Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:23:38Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-srn-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-srn-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-srn-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-sq-sv-ctranslate2-android
|
manancode
| 2025-08-11T18:23:34Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:23:22Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-sq-sv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-sq-sv` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-sq-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-20
|
MattBou00
| 2025-08-11T18:22:47Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:21:01Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-20
This is an RLHF model checkpoint trained at epoch 20.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 20
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded with the Hugging Face Transformers and TRL libraries:
```python
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-20")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
ImparkTeam/Qwen2.5-Math-1.5B-8math-tutor-merged
|
ImparkTeam
| 2025-08-11T18:21:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-11T18:21:09Z |
---
base_model: unsloth/qwen2.5-math-1.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ImparkTeam
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-math-1.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1754934541
|
milliarderdol
| 2025-08-11T18:21:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:20:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
D1zzYzz/GRIT-GSM8K-QLORA-llama-3.1-8B-Energy-0.9
|
D1zzYzz
| 2025-08-11T18:19:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"alpaca",
"grit",
"lora",
"qlora",
"instruction-tuning",
"fine-tuned",
"text-generation",
"en",
"dataset:openai/gsm8k",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T18:19:22Z |
---
tags:
- llama
- alpaca
- grit
- lora
- qlora
- instruction-tuning
- fine-tuned
base_model: meta-llama/Llama-3.1-8B
library_name: peft
license: apache-2.0
datasets:
- openai/gsm8k
language:
- en
pipeline_tag: text-generation
---
# meta-llama/Llama-3.1-8B Fine-tuned with GRIT and QLoRA
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) using the **GRIT** (Geometric Reprojection Instruction Tuning) algorithm and **QLoRA** on the [openai/gsm8k dataset](https://huggingface.co/datasets/openai/gsm8k).
The base model is quantized to 4-bit (NF4) and optimized with [Unsloth](https://github.com/unslothai/unsloth) to enable efficient fine-tuning.
## 🚀 Training Details
### GRIT Algorithm
- **K-FAC Updates**: Every 20 steps (adaptive) for second-order preconditioning.
- **Neural Reprojection**: Every 20 steps (adaptive) for rank optimization.
- **Rank Adaptation**: Enabled (Threshold: 0.9, Min Rank: 4).
- **Optimized LoRA Modules**: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'up_proj', 'down_proj', 'gate_proj']
### Fine-tuning Configuration
- **Base Model**: meta-llama/Llama-3.1-8B
- **Quantization**: 4-bit (NF4) with bf16 compute.
- **LoRA Rank**: 32
- **LoRA Alpha**: 64
- **Batch Size**: 8 (per device)
- **Gradient Accumulation**: 2 (Effective batch = 16)
- **Learning Rate**: 1.0e-04
- **Precision**: bf16 mixed precision
- **Sequence Length**: 1024 tokens
- **Gradient Checkpointing**: Enabled
### Performance Improvements
- ✅ **Faster Convergence**: K-FAC preconditioning aligns updates with curvature.
- ✅ **Memory-Efficient**: 4-bit quantization (QLoRA) and gradient checkpointing used.
- ✅ **Adaptive Rank**: Dynamically prunes LoRA rank to improve parameter efficiency.
## 📊 Training Metrics
- **Total Steps**: 936
- **Final Loss**: 0.8789392291990101
- **Trainable Params**: 83,886,080
## 📝 Algorithm Details
- **K-FAC Preconditioning** (Natural Gradient) and **Neural Reprojection** as per GRIT method.
- **Memory Efficient**: Covariance matrices on CPU to reduce GPU load.
## 🏆 Results
In benchmark comparisons, GRIT has shown **faster convergence and better stability** than standard LoRA or full fine-tuning, making it well-suited for efficient single-epoch training. The use of Unsloth further accelerates this process.
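## 🔧 Usage
This repository ships a PEFT adapter rather than merged weights, so it must be attached to the quantized base model. A minimal loading sketch is below; the quantization settings mirror the fine-tuning configuration above, but the exact inference setup is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 with bf16 compute, matching the fine-tuning configuration above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "D1zzYzz/GRIT-GSM8K-QLORA-llama-3.1-8B-Energy-0.9")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Illustrative GSM8K-style prompt
prompt = "A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```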
## 📝 Citation
If you use this model, please cite the original GRIT paper and:
```bibtex
@misc{grit-lora-Llama-3.1-8B-gsm8k,
  title={meta-llama/Llama-3.1-8B Fine-tuned with GRIT on openai/gsm8k},
  author={D1zzYzz},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/D1zzYzz/GRIT-GSM8K-QLORA-llama-3.1-8B-Energy-0.9}
}
```
## ⚖️ License
This model inherits the Apache 2.0 license.
|
Thorsten-Voice/OrpheusTest24kHz-Test2
|
Thorsten-Voice
| 2025-08-11T18:18:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T18:17:20Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Thorsten-Voice
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754936140
|
ggozzy
| 2025-08-11T18:16:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:16:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-bristly_monstrous_eel_1754935021
|
motza0025
| 2025-08-11T18:16:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bristly monstrous eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:15:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bristly monstrous eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
razor534/Smoothie-Qwen3-1.7B-Gensyn-Swarm-stealthy_scurrying_hare
|
razor534
| 2025-08-11T18:16:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stealthy_scurrying_hare",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:51:29Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am stealthy_scurrying_hare
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|