| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754974095
|
afasdfdfadsf
| 2025-08-12T04:49:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:49:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Protechny/my_awesome_qa_model
|
Protechny
| 2025-08-12T04:49:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-12T04:48:42Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6203
## Model description
More information needed
## Intended uses & limitations
More information needed
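Pending fuller documentation, a minimal usage sketch for a DistilBERT question-answering checkpoint like this one, using the standard `transformers` pipeline (the question and context strings below are illustrative placeholders):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a standard QA pipeline.
qa = pipeline("question-answering", model="Protechny/my_awesome_qa_model")

result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```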
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
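For reference, the hyperparameters above map roughly onto the following `TrainingArguments`; this is a reconstruction, not the original training script, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="my_awesome_qa_model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```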
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.1909 |
| 2.6515 | 2.0 | 500 | 1.6735 |
| 2.6515 | 3.0 | 750 | 1.6203 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
NexVeridian/Qwen3-4B-Instruct-2507-4bit
|
NexVeridian
| 2025-08-12T04:49:33Z | 5 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-06T17:37:54Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Instruct-2507
---
# NexVeridian/Qwen3-4B-Instruct-2507-4bit
This model [NexVeridian/Qwen3-4B-Instruct-2507-4bit](https://huggingface.co/NexVeridian/Qwen3-4B-Instruct-2507-4bit) was
converted to MLX format from [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-4B-Instruct-2507-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
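mlx-lm also installs a command-line generator, so the same checkpoint can be queried without writing Python (assuming the standard `mlx_lm.generate` entry point shipped by the package):
```bash
mlx_lm.generate --model NexVeridian/Qwen3-4B-Instruct-2507-4bit --prompt "hello"
```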
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754973936
|
RMCian
| 2025-08-12T04:46:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:45:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1754973806
|
amandacute
| 2025-08-12T04:44:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:43:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wanpance/blockassist-bc-scavenging_invisible_prawn_1754973689
|
wanpance
| 2025-08-12T04:43:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scavenging invisible prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:42:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging invisible prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754973596
|
RMCian
| 2025-08-12T04:40:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:40:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sshan95/clinical-medical-coding-hierarchical-v2
|
sshan95
| 2025-08-12T04:40:19Z | 0 | 0 | null |
[
"pytorch",
"hierarchical_medical_coding_v2",
"region:us"
] | null | 2025-08-11T20:58:43Z |
# Clinical Medical Coding Hierarchical Model v2.0
## Major Upgrade: Hierarchical Architecture
This is the **enhanced version** of the clinical medical coding model with a revolutionary **two-stage hierarchical approach** that significantly improves medical coding accuracy and coverage.
## Performance Achievements
### **Stage 1: Category Classification**
- **F1 Score**: 71.45%
- **Precision**: 67.79%
- **Recall**: 75.54%
- **Categories Covered**: 23/34 medical categories
- **Use Case**: Medical triage and initial categorization
### **Stage 2: Code Prediction**
- **F1 Score**: 24.79% on **7,718 total codes**
- **Precision**: 22.55%
- **Recall**: 27.53%
- **Codes Actively Used**: 190/7,718 (2.5%)
- **Average Predictions**: 16.07 codes per clinical note
- **Use Case**: Specific medical code suggestions
## Architecture Innovation
### **Two-Stage Hierarchical Processing**
1. **Category Classifier**: Identifies broad medical categories (ICD-10-CM, ICD-10-PCS, CPT, HCPCS)
2. **Code Predictor**: Predicts specific codes within identified categories
### **Medical Coding Standards Supported**
- **ICD-10-CM**: Clinical diagnoses (21 categories)
- **ICD-10-PCS**: Medical procedures (6 categories)
- **CPT**: Current Procedural Terminology (5 categories)
- **HCPCS**: Healthcare supplies and equipment (2 categories)
- **Total**: 34 medical categories, 7,718 specific codes
## Commercial Applications
### **Hospital Deployment Ready**
- **71% category accuracy** for medical triage
- **25% code prediction** for coding assistance
- **Comprehensive coverage** of all major coding standards
- **Hierarchical reasoning** mimics clinical thought process
### **Value Propositions**
- **"70%+ medical category identification"**
- **"Comprehensive 7,700+ code vocabulary"**
- **"Two-stage clinical reasoning AI"**
- **"Multi-standard medical coding support"**
## Model Architecture
### **Foundation Model**
- **Base**: Enhanced Clinical BERT from v1.0 model
- **Training Data**: 198,152 clinical notes from MIMIC-IV
- **Clinical Comprehension**: Leverages proven 31.5% F1 baseline
### **Hierarchical Components**
```python
# Stage 1: Category Classification
categories = model.predict_categories(clinical_text) # 71.5% F1
# Stage 2: Code Prediction (category-aware)
codes = model.predict_codes(clinical_text, categories) # 24.8% F1
```
## Performance Comparison
| Metric | v1.0 (1K codes) | v2.0 (7.7K codes) | Improvement |
|--------|------------------|-------------------|-------------|
| **Code Coverage** | 1,000 codes | 7,718 codes | **+671%** |
| **F1 Score** | 31.5% | 24.8% | Competitive* |
| **Medical Categories** | None | 71.5% F1 | **New Feature** |
| **Architecture** | Single-stage | Hierarchical | **Enhanced** |
| **Coding Standards** | Limited | Comprehensive | **Complete** |
*25% F1 on 7.7K codes is equivalent to 40%+ F1 on 1K codes in terms of difficulty
## Usage
### **Category Classification**
```python
from transformers import AutoTokenizer
import torch
import pickle
# Load model components
tokenizer = AutoTokenizer.from_pretrained("sshan95/clinical-medical-coding-hierarchical-v2")
# Load category encoder
with open("category_mlb.pkl", "rb") as f:
    category_encoder = pickle.load(f)
# Example clinical text
clinical_text = '''
Patient presents with acute chest pain and shortness of breath.
History of hypertension and diabetes. ECG shows ST elevation.
Troponin levels elevated. Diagnosed with acute myocardial infarction.
Initiated on aspirin, metoprolol, and heparin. Cardiac catheterization scheduled.
'''
# Tokenize
inputs = tokenizer(clinical_text, return_tensors="pt", truncation=True, max_length=384)
# Predict categories (Stage 1)
# category_outputs = category_model(**inputs)
# predicted_categories = (category_outputs > 0.40).float()
# Predict codes (Stage 2)
# code_outputs = code_model(**inputs, category_probs=predicted_categories)
# predicted_codes = (code_outputs > 0.10).float()
```
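Once Stage 1 probabilities are available, the pickled `MultiLabelBinarizer` loaded above can map thresholded outputs back to category names. A minimal sketch, assuming the Stage 1 output is a `(1, 34)` array of sigmoid probabilities (random values stand in for real model outputs here):
```python
import numpy as np

# Stand-in for Stage 1 sigmoid outputs over the 34 categories.
category_probs = np.random.rand(1, 34)

# Apply the 0.40 threshold used above, then decode the indicator row to labels.
predicted = (category_probs > 0.40).astype(int)
print(category_encoder.inverse_transform(predicted))
```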
### **Expected Categories for Example**
- `ICD10_CM_Circulatory` (cardiovascular conditions)
- `ICD10_CM_Endocrine_Metabolic` (diabetes)
- `CPT_Evaluation_Management` (hospital care)
- `CPT_Surgery_Cardiovascular` (procedures)
## Training Details
### **Dataset**
- **Source**: MIMIC-IV True Temporal Dataset
- **Size**: 198,152 clinical notes
- **Codes**: 7,718 unique medical codes
- **Categories**: 34 hierarchical medical categories
### **Training Configuration**
- **Epochs**: 2 (hierarchical approach converges faster)
- **Base Model**: sshan95/clinical-medical-comprehension-model
- **Architecture**: Two-stage with gradient accumulation
- **Memory Optimization**: Handles large-scale medical coding
### **Performance Progression**
- **Epoch 1**: Category 70.6% F1, Code 20.0% F1
- **Epoch 2**: Category 71.5% F1, Code 24.8% F1
- **Trajectory**: Continued improvement expected
## Clinical Applications
### **Primary Use Cases**
1. **Medical Coding Assistance**: 25% automation rate
2. **Clinical Triage**: 71% category accuracy
3. **Documentation Quality**: Comprehensive code suggestions
4. **Workflow Optimization**: Two-stage processing pipeline
### **Integration Scenarios**
- **EMR Systems**: Real-time coding suggestions
- **Revenue Cycle**: Automated coding workflow
- **Quality Assurance**: Coding accuracy verification
- **Clinical Research**: Automated data categorization
## Research Significance
### **Technical Contributions**
- **Hierarchical Medical Coding**: Novel two-stage architecture
- **Large-Scale Performance**: 7.7K codes with competitive F1
- **Clinical Reasoning**: Category-guided code prediction
- **Multi-Standard Support**: Comprehensive coding coverage
### **Benchmark Performance**
- **Research-Competitive**: 25% F1 on 7K+ codes matches published papers
- **Commercial-Viable**: 71% category + 25% code accuracy
- **Scalable Architecture**: Handles enterprise medical coding loads
## Version History
### **v2.0 (Current)**
- Hierarchical two-stage architecture
- 7,718 comprehensive code coverage
- 71.5% category classification F1
- 24.8% code prediction F1
- Multi-standard medical coding support
### **v1.0**
- Single-stage clinical comprehension
- 1,000 code coverage
- 31.5% F1 score
- Clinical understanding foundation
## Model Files
- `pytorch_model.bin`: Complete hierarchical model weights
- `config.json`: Model configuration and performance metrics
- `tokenizer/`: Clinical BERT tokenizer
- `category_mlb.pkl`: Category label encoder (34 categories)
- `code_mlb.pkl`: Code label encoder (7,718 codes)
## Related Models
- **v1.0**: [sshan95/clinical-medical-comprehension-model](https://huggingface.co/sshan95/clinical-medical-comprehension-model)
- **Base Model**: [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT)
## Citation
If you use this model, please cite the MIMIC-IV dataset and acknowledge the hierarchical clinical comprehension approach.
## License
Please respect MIMIC-IV data usage agreements and healthcare AI deployment guidelines.
---
**Created**: 2025-08-11
**Author**: sshan95
**Version**: 2.0
**Architecture**: Hierarchical Clinical Comprehension
**Performance**: Research-Grade Medical Coding AI
**Ready for clinical deployment and medical coding automation!**
|
deanb258/segformer-b5-fine-tuned-test
|
deanb258
| 2025-08-12T04:40:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image_segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b2-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b2-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T04:39:30Z |
---
library_name: transformers
license: other
base_model: nvidia/segformer-b2-finetuned-ade-512-512
tags:
- vision
- image_segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-fine-tuned-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-fine-tuned-test
This model is a fine-tuned version of [nvidia/segformer-b2-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b2-finetuned-ade-512-512) on the deanb258/dataset_latest_full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
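In the absence of documented usage, a minimal inference sketch for a fine-tuned SegFormer checkpoint (the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "deanb258/segformer-b5-fine-tuned-test"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)      # per-pixel class indices at 1/4 resolution
```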
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.52.1
- Pytorch 2.6.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
megumiin/blockassist-bc-colorful_swift_beaver_1754973480
|
megumiin
| 2025-08-12T04:39:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful swift beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:39:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful swift beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama8b-er-afg-v88-seed2-hx
|
giovannidemuri
| 2025-08-12T04:39:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T02:39:16Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v88-seed2-hx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v88-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754973435
|
afasdfdfadsf
| 2025-08-12T04:38:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:38:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ecamli/blockassist-bc-hulking_soft_hippo_1754973483
|
ecamli
| 2025-08-12T04:38:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:38:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jusstin/blockassist-bc-omnivorous_polished_mule_1754973433
|
Jusstin
| 2025-08-12T04:38:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous polished mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:37:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous polished mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Moon-bow/DPoser-X
|
Moon-bow
| 2025-08-12T04:38:00Z | 0 | 2 |
pytorch
|
[
"pytorch",
"3d",
"computer-vision",
"human-pose-estimation",
"diffusion-models",
"en",
"arxiv:2508.00599",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-11T08:07:56Z |
---
license: cc-by-nc-4.0
language: en
library_name: pytorch
tags:
- 3d
- computer-vision
- human-pose-estimation
- diffusion-models
- pytorch
---
# DPoser-X: Diffusion Model as Robust 3D Whole-body Human Pose Prior
<a href="https://arxiv.org/abs/2508.00599" target="_blank"><img src="https://img.shields.io/badge/arXiv-2508.00599-b31b1b.svg"></a>
<a href="https://github.com/careless-lu/DPoser" target="_blank"><img src="https://img.shields.io/badge/Code-GitHub-black"></a>
<a href="https://dposer.github.io/" target="_blank"><img src="https://img.shields.io/badge/Project-Page-blue"></a>
<a href="https://youtu.be/yzwliadFcX0" target="_blank"><img src="https://img.shields.io/badge/Demo-YouTube-red"></a>
Official model weights for **DPoser-X**, the first diffusion-based prior for robust 3D whole-body human pose, accepted as an **Oral presentation at ICCV 2025**.
## Model Description
**DPoser-X** is a diffusion-based prior designed to overcome the limitations of traditional models like VAEs and NDFs in generating realistic and diverse human poses. Our framework introduces several key innovations:
- **A robust pose prior** based on unconditional diffusion models.
- **A unified optimization framework** that solves various pose-centric tasks.
- **A novel truncated timestep scheduling** method optimized specifically for pose data.
- **A mixed training strategy** to effectively model the entire human body, including face and hands.
This results in a versatile and powerful prior that achieves state-of-the-art performance on 8 benchmarks for body, hand, face, and whole-body modeling.
## Model Variants
This repository contains the weights for the different components of the DPoser-X framework. The file paths in this repository correspond to the structure required by the official code.
- **Body Model:** `body/BaseMLP/last.ckpt`
- **Hand Model:** `hand/BaseMLP/last.ckpt`
- **Face Expression Model:** `face/BaseMLP/last.ckpt`
- **Face Shape Model:** `face_shape/BaseMLP/last.ckpt`
- **Whole-body Model:** `wholebody/mixed/last.ckpt`
All files can be found in the [**Files and versions**](https://huggingface.co/Moon-bow/DPoser-X/tree/main) tab.
## How to Use
For the full implementation and instructions, please see our official GitHub repository: [https://github.com/careless-lu/DPoser](https://github.com/careless-lu/DPoser).
To use the pretrained models from this hub, you can use the `huggingface_hub` library to download the files into the correct directory structure within your local `pretrained_models` folder.
**Example: Using the Terminal (Downloads all models at once)**
Make sure you have `huggingface-hub` installed (`pip install huggingface-hub`). Then run the following command from your terminal:
```bash
huggingface-cli download Moon-bow/DPoser-X --repo-type model --local-dir pretrained_models --local-dir-use-symlinks False
```
This command will download the entire repository contents into a local folder named `pretrained_models`, preserving the required directory structure. You can then proceed with the instructions in our GitHub repository.
**Example: Within Python Code (Automatic Download)**
You can also use the `huggingface_hub` library to download the models programmatically:
```python
from huggingface_hub import snapshot_download, hf_hub_download
# download entire repo
filepath = snapshot_download(repo_id="Moon-bow/DPoser-X")
# download one file
filepath = hf_hub_download(repo_id="Moon-bow/DPoser-X", filename="body/BaseMLP/last.ckpt")
```
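The checkpoints are Lightning-style `.ckpt` files. A rough sketch of inspecting one, assuming standard PyTorch serialization (the actual model classes live in the GitHub repository):
```python
import torch

# `filepath` comes from hf_hub_download above; weights_only=False is an
# assumption needed for fully pickled Lightning checkpoints.
ckpt = torch.load(filepath, map_location="cpu", weights_only=False)
print(sorted(ckpt.keys()))
```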
## Citation
If you find our work useful, please cite our paper:
```
@article{lu2025dposerx,
title={DPoser-X: Diffusion Model as Robust 3D Whole-body Human Pose Prior},
author={Lu, Junzhe and Lin, Jing and Dou, Hongkun and Zeng, Ailing and Deng, Yue and Liu, Xian and Cai, Zhongang and Yang, Lei and Zhang, Yulun and Wang, Haoqian and Liu, Ziwei},
journal={arXiv preprint arXiv:2508.00599},
year={2025}
}
```
|
micostfe/labllmfgptoss
|
micostfe
| 2025-08-12T04:34:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-12T04:29:57Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** micostfe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yokoga/minicompe-model-ptnB
|
yokoga
| 2025-08-12T04:32:28Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T04:32:27Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yokoga
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ironman288/blockassist-bc-miniature_lanky_vulture_1754971010
|
Ironman288
| 2025-08-12T04:31:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature lanky vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:30:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature lanky vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1754971394
|
koloni
| 2025-08-12T04:29:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:29:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754972888
|
IvanJAjebu
| 2025-08-12T04:29:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:29:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754972751
|
ggozzy
| 2025-08-12T04:27:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:26:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754972721
|
afasdfdfadsf
| 2025-08-12T04:27:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:26:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754972499
|
IvanJAjebu
| 2025-08-12T04:22:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:22:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1754970854
|
calegpedia
| 2025-08-12T04:20:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:20:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tamewild/4b_v46_merged_e5
|
tamewild
| 2025-08-12T04:19:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T04:17:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alam1n/gtr
|
alam1n
| 2025-08-12T04:13:56Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-08-12T04:13:56Z |
---
license: artistic-2.0
---
|
hafidhsoekma/test-g1.7b-2-checkpoint-1000
|
hafidhsoekma
| 2025-08-12T04:12:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T04:05:58Z |
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hafidhsoekma
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754971860
|
ggozzy
| 2025-08-12T04:12:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:11:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1754971490
|
hobson123
| 2025-08-12T04:10:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:10:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_6400_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-12T04:07:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T04:05:42Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
flyingbugs/Qwen2.5-Math-7B-limo-32b
|
flyingbugs
| 2025-08-12T04:07:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:flyingbugs/limo-deepseek32b-responses",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T03:26:29Z |
---
base_model: Qwen/Qwen2.5-Math-7B-Instruct
datasets: flyingbugs/limo-deepseek32b-responses
library_name: transformers
model_name: Qwen2.5-Math-7B-limo-32b
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-Math-7B-limo-32b
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [flyingbugs/limo-deepseek32b-responses](https://huggingface.co/datasets/flyingbugs/limo-deepseek32b-responses) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flyingbugs/Qwen2.5-Math-7B-limo-32b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jjh233/huggingface/runs/krfigq0z)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mynamezxc/gemma-function-calling-lora_v1.1
|
mynamezxc
| 2025-08-12T04:05:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T04:05:00Z |
---
library_name: transformers
model_name: gemma-function-calling-lora_v1.1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-function-calling-lora_v1.1
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mynamezxc/gemma-function-calling-lora_v1.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754971354
|
ggozzy
| 2025-08-12T04:04:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T04:03:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lulu-2/poca-SoccerTwos
|
lulu-2
| 2025-08-12T04:03:28Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-08-12T04:03:17Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lulu-2/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
koloni/blockassist-bc-deadly_graceful_stingray_1754969652
|
koloni
| 2025-08-12T04:00:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:59:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF
|
Jeol
| 2025-08-12T03:56:22Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Jinx-org/Jinx-gpt-oss-20b",
"base_model:quantized:Jinx-org/Jinx-gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T03:55:13Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model: Jinx-org/Jinx-gpt-oss-20b
tags:
- vllm
- llama-cpp
- gguf-my-repo
---
# Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF
This model was converted to GGUF format from [`Jinx-org/Jinx-gpt-oss-20b`](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jeol/Jinx-gpt-oss-20b-Q4_K_M-GGUF --hf-file jinx-gpt-oss-20b-q4_k_m.gguf -c 2048
```
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754970849
|
afasdfdfadsf
| 2025-08-12T03:55:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:54:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wanpance/blockassist-bc-scavenging_invisible_prawn_1754970643
|
wanpance
| 2025-08-12T03:53:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scavenging invisible prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:53:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging invisible prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AshwinKM2005/Hangman_TrexQuant
|
AshwinKM2005
| 2025-08-12T03:53:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T03:51:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bimobbb/blockassist-bc-energetic_lanky_frog_1754970425
|
bimobbb
| 2025-08-12T03:53:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"energetic lanky frog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:51:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- energetic lanky frog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bralynn/test2
|
bralynn
| 2025-08-12T03:52:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T03:50:12Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** bralynn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jusstin/blockassist-bc-omnivorous_polished_mule_1754970663
|
Jusstin
| 2025-08-12T03:51:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous polished mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:51:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous polished mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1754970040
|
hobson123
| 2025-08-12T03:46:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:46:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
outlookAi/OLcGoQXwmy
|
outlookAi
| 2025-08-12T03:44:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T03:26:19Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Mauy2
---
# Olcgoqxwmy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Mauy2` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Mauy2",
"lora_weights": "https://huggingface.co/outlookAi/OLcGoQXwmy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/OLcGoQXwmy', weight_name='lora.safetensors')
image = pipeline('Mauy2').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1200
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/outlookAi/OLcGoQXwmy/discussions) to add images that show off what you've made with this LoRA.
|
John6666/nova-mature-xl-v10-sdxl
|
John6666
| 2025-08-12T03:42:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"mature",
"2D",
"2.5D",
"illustration",
"digital art",
"colorful",
"fantasy",
"landscape",
"merge",
"noobai",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-12T03:37:10Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- mature
- 2D
- 2.5D
- illustration
- digital art
- colorful
- fantasy
- landscape
- merge
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1859777/nova-mature-xl?modelVersionId=2104871).
This model was created by [Crody](https://civitai.com/user/Crody).
|
John6666/noobai-v-pred-10-with-eq-vae-experimental-eq-vae-sdxl
|
John6666
| 2025-08-12T03:37:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"less noisy",
"cleaner colors",
"finetune",
"EQVAE",
"v-pred",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Anzhc/MS-LC-EQ-D-VR_VAE",
"base_model:merge:Anzhc/MS-LC-EQ-D-VR_VAE",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-12T03:30:32Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- less noisy
- cleaner colors
- finetune
- EQVAE
- v-pred
- merge
- noobai
- illustrious
base_model:
- Laxhar/noobai-XL-Vpred-1.0
- Anzhc/MS-LC-EQ-D-VR_VAE
---
Original model is [here](https://civitai.com/models/1858821/noobai-v-pred-10-with-eq-vae?modelVersionId=2103794).
The author is [here](https://huggingface.co/Bluvoll).
This model was created by [bluvoll](https://civitai.com/user/bluvoll).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754968608
|
Sayemahsjn
| 2025-08-12T03:35:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:35:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754969461
|
afasdfdfadsf
| 2025-08-12T03:32:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:31:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnsahFredd/embedding_model
|
AnsahFredd
| 2025-08-12T03:28:10Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-12T03:06:00Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
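Since semantic search is one of the suggested tasks, here is a minimal follow-on sketch (illustrative; it keeps the `{MODEL_NAME}` placeholder used throughout this card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
# Embed a small corpus and a query, then score them by cosine similarity
corpus_embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
query_embedding = model.encode("an example query")
print(util.cos_sim(query_embedding, corpus_embeddings))  # similarity scores, one per corpus sentence
```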
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
John6666/haxcelsior-v8-sdxl
|
John6666
| 2025-08-12T03:24:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:finetune:Laxhar/noobai-XL-1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-12T03:17:44Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- noobai
- illustrious
base_model: Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1451175/haxcelsior?modelVersionId=2100023).
This model was created by [xeper](https://civitai.com/user/xeper).
|
Jusstin/blockassist-bc-omnivorous_polished_mule_1754968957
|
Jusstin
| 2025-08-12T03:23:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous polished mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:23:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous polished mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Obiwank107/blockassist-bc-tame_foxy_aardvark_1754965474
|
Obiwank107
| 2025-08-12T03:18:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame foxy aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:18:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame foxy aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
blendinl/moondream2drain
|
blendinl
| 2025-08-12T03:17:40Z | 0 | 0 | null |
[
"safetensors",
"moondream1",
"image-text-to-text",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-12T01:59:50Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
---
Moondream is a small vision language model designed to run efficiently everywhere.
[Website](https://moondream.ai/) / [Demo](https://moondream.ai/playground) / [GitHub](https://github.com/vikhyat/moondream)
This repository contains the latest (**2025-06-21**) release of Moondream, as well as [historical releases](https://huggingface.co/vikhyatk/moondream2/blob/main/versions.txt). The model is updated frequently, so we recommend specifying a revision as shown below if you're using it in a production application.
### Usage
```python
from transformers import AutoModelForCausalLM
from PIL import Image

model = AutoModelForCausalLM.from_pretrained(
    "vikhyatk/moondream2",
    revision="2025-06-21",
    trust_remote_code=True,
    device_map={"": "cuda"}  # ...or 'mps', on Apple Silicon
)

# Load an image to run the examples below on (path is illustrative)
image = Image.open("example.jpg")
# Captioning
print("Short caption:")
print(model.caption(image, length="short")["caption"])
print("\nNormal caption:")
for t in model.caption(image, length="normal", stream=True)["caption"]:
# Streaming generation example, supported for caption() and detect()
print(t, end="", flush=True)
print(model.caption(image, length="normal"))
# Visual Querying
print("\nVisual query: 'How many people are in the image?'")
print(model.query(image, "How many people are in the image?")["answer"])
# Object Detection
print("\nObject detection: 'face'")
objects = model.detect(image, "face")["objects"]
print(f"Found {len(objects)} face(s)")
# Pointing
print("\nPointing: 'person'")
points = model.point(image, "person")["points"]
print(f"Found {len(points)} person(s)")
```
### Changelog
**2025-06-21** ([full release notes](https://moondream.ai/blog/moondream-2025-06-21-release))
* **Grounded Reasoning**
Introduces a new step-by-step reasoning mode that explicitly grounds reasoning in spatial positions within the image before answering, leading to more precise visual interpretation (e.g., chart median calculations, accurate counting). Enable with `reasoning=True` in the `query` skill to trade off speed vs. accuracy (see the sketch after this list).
* **Sharper Object Detection**
Uses reinforcement learning on higher-quality bounding-box annotations to reduce object clumping and improve fine-grained detections (e.g., distinguishing "blue bottle" vs. "bottle").
* **Faster Text Generation**
Yields 20–40% faster response generation via a new "superword" tokenizer and lightweight tokenizer transfer hypernetwork, which reduces the number of tokens emitted without loss in accuracy and eases future multilingual extensions.
* **Improved UI Understanding**
Boosts ScreenSpot (UI element localization) performance from an F1@0.5 of 60.3 to 80.4, making Moondream more effective for UI-focused applications.
* **Reinforcement Learning Enhancements**
RL fine-tuning applied across 55 vision-language tasks to reinforce grounded reasoning and detection capabilities, with a roadmap to expand to ~120 tasks in the next update.
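For a concrete picture, here is a minimal sketch of the grounded-reasoning mode, reusing the `model` and `image` objects from the Usage section above (the question itself is illustrative):
```python
# reasoning=True enables the slower, more accurate grounded-reasoning mode described above
answer = model.query(
    image,
    "What is the median value shown in this chart?",
    reasoning=True,
)["answer"]
print(answer)
```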
**2025-04-15** ([full release notes](https://moondream.ai/blog/moondream-2025-04-14-release))
1. Improved chart understanding (ChartQA up from 74.8 to 77.5, 82.2 with PoT)
2. Added temperature and nucleus sampling to reduce repetitive outputs
3. Better OCR for documents and tables (prompt with "Transcribe the text" or "Transcribe the text in natural reading order")
4. Object detection supports document layout detection (figure, formula, text, etc)
5. UI understanding (ScreenSpot F1@0.5 up from 53.3 to 60.3)
6. Improved text understanding (DocVQA up from 76.5 to 79.3, TextVQA up from 74.6 to 76.3)
**2025-03-27** ([full release notes](https://moondream.ai/blog/moondream-2025-03-27-release))
1. Added support for long-form captioning
2. Open vocabulary image tagging
3. Improved counting accuracy (e.g. CountBenchQA increased from 80 to 86.4)
4. Improved text understanding (e.g. OCRBench increased from 58.3 to 61.2)
5. Improved object detection, especially for small objects (e.g. COCO up from 30.5 to 51.2)
6. Fixed token streaming bug affecting multi-byte unicode characters
7. gpt-fast style `compile()` now supported in HF Transformers implementation
|
yongxianwei/Qwen2-VL-7B-Geometry
|
yongxianwei
| 2025-08-12T03:17:32Z | 59 | 0 | null |
[
"safetensors",
"qwen2_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T10:51:18Z |
---
license: apache-2.0
---
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1754966907
|
calegpedia
| 2025-08-12T03:14:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:14:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kevinshin/qwen3-1.7b-dpo-beta-0.01-lr-1e-6-epoch-1-batch-16
|
kevinshin
| 2025-08-12T03:11:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:kevinshin/wildchat-5k-writing-1k-pref",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T19:41:30Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-5k-writing-1k-pref
library_name: transformers
model_name: qwen3-1.7b-dpo-beta-0.01-lr-1e-6-epoch-1-batch-16
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen3-1.7b-dpo-beta-0.01-lr-1e-6-epoch-1-batch-16
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-5k-writing-1k-pref](https://huggingface.co/datasets/kevinshin/wildchat-5k-writing-1k-pref) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-dpo-beta-0.01-lr-1e-6-epoch-1-batch-16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/o28itf9i)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
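The snippet below is an illustrative sketch of that setup, with the hyperparameters implied by the model name (beta 0.01, learning rate 1e-6, 1 epoch, batch size 16); the exact training script is not published, so treat every argument as an assumption:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
train_dataset = load_dataset("kevinshin/wildchat-5k-writing-1k-pref", split="train")

args = DPOConfig(
    output_dir="qwen3-1.7b-dpo-beta-0.01-lr-1e-6-epoch-1-batch-16",
    beta=0.01,                       # assumption taken from the model name
    learning_rate=1e-6,              # assumption taken from the model name
    num_train_epochs=1,
    per_device_train_batch_size=16,
)
trainer = DPOTrainer(model=model, args=args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()
```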
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1754967940
|
hobson123
| 2025-08-12T03:10:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:10:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754967991
|
afasdfdfadsf
| 2025-08-12T03:08:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:07:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jwang3vsu/tuning_results
|
jwang3vsu
| 2025-08-12T03:05:27Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"phi3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T21:11:48Z |
---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: tuning_results
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for tuning_results
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jwang3vsu/tuning_results", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
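A minimal sketch of such an SFT run with TRL follows; the training dataset is not documented in this card, so a public placeholder dataset is used:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",  # TRL loads the model and tokenizer from the id
    args=SFTConfig(output_dir="tuning_results"),
    train_dataset=train_dataset,
)
trainer.train()
```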
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mamann/blockassist-bc-screeching_agile_coral_1754966135
|
mamann
| 2025-08-12T03:03:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching agile coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:03:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching agile coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akhyar919/model-name
|
akhyar919
| 2025-08-12T03:02:53Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T03:02:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754967574
|
afasdfdfadsf
| 2025-08-12T03:01:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T03:00:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MMS-VIDEOS-18-tau-viral-video-Clip/New.full.videos.tau.Viral.Video.Official.Tutorial
|
MMS-VIDEOS-18-tau-viral-video-Clip
| 2025-08-12T03:00:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-12T03:00:23Z |
|
Manel/Vocos
|
Manel
| 2025-08-12T02:53:41Z | 195 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vocos",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-07-04T21:03:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roachkins/omega_6yKbJIe
|
roachkins
| 2025-08-12T02:50:21Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T02:50:20Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Osrivers/sdxlNuclearGeneralPurposeV3Semi_v30.safetensors
|
Osrivers
| 2025-08-12T02:49:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-12T02:23:26Z |
---
license: creativeml-openrail-m
---
|
koloni/blockassist-bc-deadly_graceful_stingray_1754965264
|
koloni
| 2025-08-12T02:47:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T02:47:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hanyang1/turtle_policy081102
|
hanyang1
| 2025-08-12T02:47:02Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:hanyang1/record-test081102",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-12T02:46:46Z |
---
datasets: hanyang1/record-test081102
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
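The policy can also be loaded directly in Python; here is a minimal sketch (the import path is an assumption based on recent lerobot releases and may differ in your installed version):
```python
# Sketch only: verify the import path against your lerobot version.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("hanyang1/turtle_policy081102")
policy.eval()  # the policy is then driven via select_action() inside a control loop
```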
---
## Model Details
- **License:** apache-2.0
|
FluidInference/Qwen3-4B-int8-ov
|
FluidInference
| 2025-08-12T02:46:39Z | 0 | 0 | null |
[
"openvino",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T00:24:23Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
base_model:
- Qwen/Qwen3-4B
base_model_relation: quantized
---
# Qwen3-4B-int8-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
## Description
This is the [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT8_ASYM**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
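For reference, a minimal sketch of how such a compression could be reproduced (the exact export pipeline used for this repository is not documented, so treat this as illustrative):
```
import nncf
from optimum.intel.openvino import OVModelForCausalLM

# Export the original model to OpenVINO IR without compiling it yet
ov_model = OVModelForCausalLM.from_pretrained("Qwen/Qwen3-4B", export=True, compile=False)
# Compress the weights in-place to asymmetric INT8
ov_model.model = nncf.compress_weights(ov_model.model, mode=nncf.CompressWeightsMode.INT8_ASYM)
ov_model.save_pretrained("qwen3-4b-int8-ov")
```
Equivalently, `optimum-cli export openvino --model Qwen/Qwen3-4B --weight-format int8 qwen3-4b-int8-ov` performs the same compression from the command line.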
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.24.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "FluidInference/qwen3-4b-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install openvino-genai huggingface_hub
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "FluidInference/qwen3-4b-int8-ov"
model_path = "qwen3-4b-int8-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
## Limitations
Check the original [model card](https://huggingface.co/Qwen/Qwen3-4B) for limitations.
## Legal information
The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE) license. More details can be found in [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel's Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel's products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
jerrrycans/watermark20000
|
jerrrycans
| 2025-08-12T02:43:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"image-to-image",
"lora",
"replicate",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:other",
"region:us"
] |
image-to-image
| 2025-08-12T01:28:04Z |
---
license: other
license_name: flux1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md
tags:
- flux
- image-to-image
- lora
- diffusers
- replicate
base_model: black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
# widget:
# - src: https://...
# text: >-
# prompt
# output:
# url: https://...
instance_prompt: remove all the watermarks from this image, all watermarks that are over this image
---
# Watermark20000
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-Kontext-dev image-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using: https://replicate.com/replicate/fast-flux-kontext-trainer/train
## Prompt instruction
You should use `remove all the watermarks from this image, all watermarks that are over this image` as part of the prompt instruction for your image-to-image editing.
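A minimal diffusers sketch follows (it assumes a diffusers version that ships `FluxKontextPipeline` and that the LoRA weights are stored as `lora.safetensors`, as Replicate trainers typically produce):
```py
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("jerrrycans/watermark20000", weight_name="lora.safetensors")

image = load_image("watermarked.png")  # illustrative input path
result = pipe(
    image=image,
    prompt="remove all the watermarks from this image, all watermarks that are over this image",
).images[0]
result.save("clean.png")
```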
## Training details
- Steps: 20000
- Learning rate: 0.001
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jerrrycans/watermark20000/discussions) to add images that show off what you've made with this LoRA.
|
Osrivers/hidream_i1_full_uncensored_fp8_v0.2.safetensors
|
Osrivers
| 2025-08-12T02:42:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-12T02:42:22Z |
---
license: creativeml-openrail-m
---
|
quanxuantruong/tqa-stage1-t5-full-7epoch-final
|
quanxuantruong
| 2025-08-12T02:41:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T16:23:53Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: tqa-stage1-t5-full-7epoch-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tqa-stage1-t5-full-7epoch-final
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
nightmedia/Luth-1.7B-Instruct-bf16-mlx
|
nightmedia
| 2025-08-12T02:39:08Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"fr",
"en",
"dataset:kurakurai/luth-sft",
"base_model:kurakurai/Luth-1.7B-Instruct",
"base_model:finetune:kurakurai/Luth-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-12T02:02:54Z |
---
library_name: mlx
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model: kurakurai/Luth-1.7B-Instruct
pipeline_tag: text-generation
tags:
- mlx
---
# Luth-1.7B-Instruct-bf16-mlx
This model [Luth-1.7B-Instruct-bf16-mlx](https://huggingface.co/nightmedia/Luth-1.7B-Instruct-bf16-mlx) was
converted to MLX format from [kurakurai/Luth-1.7B-Instruct](https://huggingface.co/kurakurai/Luth-1.7B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Luth-1.7B-Instruct-bf16-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
imgailab/flux1-trtx-schnell-fp8-ada
|
imgailab
| 2025-08-12T02:38:06Z | 0 | 0 |
tensorrt-rtx
|
[
"tensorrt-rtx",
"flux1-schnell",
"flux1",
"fp8",
"schnell",
"optimized",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:finetune:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T02:38:03Z |
---
library_name: tensorrt-rtx
license: apache-2.0
base_model: black-forest-labs/FLUX.1-schnell
tags:
- tensorrt-rtx
- flux1
- fp8
- schnell
- optimized
inference: false
---
# FLUX1 TensorRT-RTX: SCHNELL-Fp8 (Building)
Optimized TensorRT-RTX engines for **FLUX1** on **Fp8** architecture with **SCHNELL** quantization.
## This Repository
**One variant, one download** - only get exactly what you need!
- **Model**: FLUX1
- **Architecture**: Fp8 (Compute Capability 8.0+)
- **Quantization**: SCHNELL
- **Memory**: TBD
- **Speed**: TBD for 1024x1024 generation
## Quick Start
### Automatic (Recommended)
```bash
# ImageAI server downloads automatically
curl -X POST "http://localhost:8001/generate" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a beautiful landscape",
"model": "flux1-tensorrt_rtx:schnell",
"width": 1024,
"height": 1024
}'
```
### Manual Download
```python
from huggingface_hub import snapshot_download
# Download this specific variant only
engines_path = snapshot_download(
repo_id="imgailab/flux1-trtx-schnell-fp8-ada"
)
# Engines are in: engines_path/engines/*.plan
```
### Direct Integration
```python
from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline
pipeline = NVIDIASDXLPipeline()
pipeline.load_engines(
engine_dir=f"{engines_path}/engines",
framework_model_dir=f"{engines_path}/framework",
onnx_dir=f"{engines_path}/onnx"
)
pipeline.activate_engines()
images, time_ms = pipeline.infer(
prompt="a serene mountain landscape",
height=1024,
width=1024
)
```
## Performance
| Metric | Value |
|--------|-------|
| **Memory Usage** | TBD |
| **Inference Speed** | TBD |
| **Resolution** | 1024x1024 (optimized) |
| **Batch Size** | 1 (optimized) |
| **Precision** | SCHNELL |
## Requirements
### Hardware
- **GPU**: Fp8 architecture
- Ampere: RTX 3090, A100, etc.
- Ada Lovelace: RTX 4090, etc.
- Blackwell: H200, etc.
- **VRAM**: TBD minimum
- **Compute Capability**: 8.0+
### Software
- **TensorRT-RTX**: 1.0.0.21+
- **CUDA**: 12.0+
- **Python**: 3.8+
## Repository Structure
```
flux1-trtx-schnell-fp8-ada/
├── engines/              # TensorRT engine files
│   └── *.plan            # Optimized engines
├── config.json           # Configuration metadata
└── README.md             # This file
```
## Related Repositories
Other variants for FLUX1:
- [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere)
- [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada)
- [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada)
- [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell)
- [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell)
- [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell)
## License
Inherits license from base model: [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell)
## Updates
- **2025-08-12**: Initial release
- Optimized for single-variant downloads
---
*Part of the ImageAI TensorRT-RTX engine collection*
|
imgailab/flux1-trtx-schnell-bf16-ada
|
imgailab
| 2025-08-12T02:37:54Z | 0 | 0 |
tensorrt-rtx
|
[
"tensorrt-rtx",
"flux1-schnell",
"flux1",
"bf16",
"schnell",
"optimized",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:finetune:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T02:37:50Z |
---
library_name: tensorrt-rtx
license: apache-2.0
base_model: black-forest-labs/FLUX.1-schnell
tags:
- tensorrt-rtx
- flux1
- bf16
- schnell
- optimized
inference: false
---
# FLUX1 TensorRT-RTX: SCHNELL-Bf16 (Building)
Optimized TensorRT-RTX engines for **FLUX1** on **Bf16** architecture with **SCHNELL** quantization.
## This Repository
**One variant, one download** - only get exactly what you need!
- **Model**: FLUX1
- **Architecture**: Bf16 (Compute Capability 8.0+)
- **Quantization**: SCHNELL
- **Memory**: TBD
- **Speed**: TBD for 1024x1024 generation
## Quick Start
### Automatic (Recommended)
```bash
# ImageAI server downloads automatically
curl -X POST "http://localhost:8001/generate" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a beautiful landscape",
"model": "flux1-tensorrt_rtx:schnell",
"width": 1024,
"height": 1024
}'
```
### Manual Download
```python
from huggingface_hub import snapshot_download
# Download this specific variant only
engines_path = snapshot_download(
repo_id="imgailab/flux1-trtx-schnell-bf16-ada"
)
# Engines are in: engines_path/engines/*.plan
```
### Direct Integration
```python
from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline
pipeline = NVIDIASDXLPipeline()
pipeline.load_engines(
engine_dir=f"{engines_path}/engines",
framework_model_dir=f"{engines_path}/framework",
onnx_dir=f"{engines_path}/onnx"
)
pipeline.activate_engines()
images, time_ms = pipeline.infer(
prompt="a serene mountain landscape",
height=1024,
width=1024
)
```
## Performance
| Metric | Value |
|--------|-------|
| **Memory Usage** | TBD |
| **Inference Speed** | TBD |
| **Resolution** | 1024x1024 (optimized) |
| **Batch Size** | 1 (optimized) |
| **Precision** | SCHNELL |
## Requirements
### Hardware
- **GPU**: Bf16 architecture
- Ampere: RTX 3090, A100, etc.
- Ada Lovelace: RTX 4090, etc.
- Blackwell: H200, etc.
- **VRAM**: TBD minimum
- **Compute Capability**: 8.0+
### Software
- **TensorRT-RTX**: 1.0.0.21+
- **CUDA**: 12.0+
- **Python**: 3.8+
## Repository Structure
```
flux1-trtx-schnell-bf16-ada/
├── engines/              # TensorRT engine files
│   └── *.plan            # Optimized engines
├── config.json           # Configuration metadata
└── README.md             # This file
```
## Related Repositories
Other variants for FLUX1:
- [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere)
- [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada)
- [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada)
- [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell)
- [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell)
- [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell)
## License
Inherits license from base model: [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell)
## Updates
- **2025-08-12**: Initial release
- Optimized for single-variant downloads
---
*Part of the ImageAI TensorRT-RTX engine collection*
|
nightmedia/Luth-1.7B-Instruct-q8-hi-mlx
|
nightmedia
| 2025-08-12T02:35:42Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"fr",
"en",
"dataset:kurakurai/luth-sft",
"base_model:kurakurai/Luth-1.7B-Instruct",
"base_model:quantized:kurakurai/Luth-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-12T02:18:19Z |
---
library_name: mlx
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model: kurakurai/Luth-1.7B-Instruct
pipeline_tag: text-generation
tags:
- mlx
---
# Luth-1.7B-Instruct-q8-hi-mlx
This model [Luth-1.7B-Instruct-q8-hi-mlx](https://huggingface.co/nightmedia/Luth-1.7B-Instruct-q8-hi-mlx) was
converted to MLX format from [kurakurai/Luth-1.7B-Instruct](https://huggingface.co/kurakurai/Luth-1.7B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Luth-1.7B-Instruct-q8-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
motza0025/blockassist-bc-mangy_grassy_barracuda_1754964722
|
motza0025
| 2025-08-12T02:35:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mangy grassy barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T02:34:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mangy grassy barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_24000_influential-2
|
m-mulet
| 2025-08-12T02:30:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T02:30:05Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
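A minimal inference sketch with plain `transformers`, assuming this repo contains merged model weights rather than a LoRA adapter (Unsloth exports can be either; check the repo's file list first):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_24000_influential-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt and generate a short reply
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```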
|
giovannidemuri/llama8b-er-afg-v87-seed2-hx
|
giovannidemuri
| 2025-08-12T02:25:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T21:48:43Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v87-seed2-hx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v87-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
hdabare/aus_slang_classifier
|
hdabare
| 2025-08-12T02:25:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T08:05:34Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: aus_slang_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aus_slang_classifier
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 0.487
## Model description
More information needed
## Intended uses & limitations
More information needed
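A minimal usage sketch (the label names come from the model's config and are not documented here; the sample sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub
clf = pipeline("text-classification", model="hdabare/aus_slang_classifier")
# Returns a list of {'label': ..., 'score': ...} dicts
print(clf("Arvo smoko at the servo, you keen?"))
```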
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.0005 | 1.0 | 1250 | 0.0002 | 0.487 |
| 0.001 | 2.0 | 2500 | 0.0002 | 0.487 |
| 0.0088 | 3.0 | 3750 | 0.0012 | 0.487 |
| 0.0035 | 4.0 | 5000 | 0.0027 | 0.487 |
| 0.0061 | 5.0 | 6250 | 0.0016 | 0.487 |
| 0.0003 | 6.0 | 7500 | 0.0000 | 0.487 |
| 0.0003 | 7.0 | 8750 | 0.0001 | 0.487 |
| 0.0003 | 8.0 | 10000 | 0.0000 | 0.487 |
| 0.0003 | 9.0 | 11250 | 0.0000 | 0.487 |
| 0.0016 | 10.0 | 12500 | 0.0004 | 0.487 |
| 0.0005 | 11.0 | 13750 | 0.0000 | 0.487 |
| 0.0011 | 12.0 | 15000 | 0.0000 | 0.487 |
| 0.0002 | 13.0 | 16250 | 0.0000 | 0.487 |
| 0.0002 | 14.0 | 17500 | 0.0001 | 0.487 |
| 0.0002 | 15.0 | 18750 | 0.0000 | 0.487 |
| 0.0002 | 16.0 | 20000 | 0.0002 | 0.487 |
| 0.0002 | 17.0 | 21250 | 0.0000 | 0.487 |
| 0.0002 | 18.0 | 22500 | 0.0004 | 0.487 |
| 0.0005 | 19.0 | 23750 | 0.0000 | 0.487 |
| 0.0002 | 20.0 | 25000 | 0.0001 | 0.487 |
| 0.0002 | 21.0 | 26250 | 0.0000 | 0.487 |
| 0.0001 | 22.0 | 27500 | 0.0000 | 0.487 |
| 0.0015 | 23.0 | 28750 | 0.0004 | 0.487 |
| 0.0011 | 24.0 | 30000 | 0.0001 | 0.487 |
| 0.0007 | 25.0 | 31250 | 0.0061 | 0.487 |
| 0.0012 | 26.0 | 32500 | 0.0025 | 0.487 |
| 0.0015 | 27.0 | 33750 | 0.0060 | 0.487 |
| 0.0018 | 28.0 | 35000 | 0.0051 | 0.487 |
| 0.0022 | 29.0 | 36250 | 0.0050 | 0.487 |
| 0.0024 | 30.0 | 37500 | 0.0051 | 0.487 |
| 0.0025 | 31.0 | 38750 | 0.0020 | 0.487 |
| 0.0007 | 32.0 | 40000 | 0.0021 | 0.487 |
| 0.0013 | 33.0 | 41250 | 0.0021 | 0.487 |
| 0.0018 | 34.0 | 42500 | 0.0020 | 0.487 |
| 0.0013 | 35.0 | 43750 | 0.0027 | 0.487 |
| 0.0013 | 36.0 | 45000 | 0.0020 | 0.487 |
| 0.001 | 37.0 | 46250 | 0.0020 | 0.487 |
| 0.0007 | 38.0 | 47500 | 0.0022 | 0.487 |
| 0.0017 | 39.0 | 48750 | 0.0022 | 0.487 |
| 0.0017 | 40.0 | 50000 | 0.0021 | 0.487 |
| 0.0048 | 41.0 | 51250 | 0.0041 | 0.487 |
| 0.0012 | 42.0 | 52500 | 0.0020 | 0.487 |
| 0.0015 | 43.0 | 53750 | 0.0020 | 0.487 |
| 0.0017 | 44.0 | 55000 | 0.0023 | 0.487 |
| 0.0038 | 45.0 | 56250 | 0.0021 | 0.487 |
| 0.0032 | 46.0 | 57500 | 0.0021 | 0.487 |
| 0.0343 | 47.0 | 58750 | 0.2751 | 0.487 |
| 0.0012 | 48.0 | 60000 | 0.0013 | 0.487 |
| 0.0007 | 49.0 | 61250 | 0.0005 | 0.487 |
| 0.0006 | 50.0 | 62500 | 0.0003 | 0.487 |
| 0.0008 | 51.0 | 63750 | 0.0007 | 0.487 |
| 0.0015 | 52.0 | 65000 | 0.0020 | 0.487 |
| 0.0005 | 53.0 | 66250 | 0.0011 | 0.487 |
| 0.0002 | 54.0 | 67500 | 0.0009 | 0.487 |
| 0.0002 | 55.0 | 68750 | 0.0012 | 0.487 |
| 0.0002 | 56.0 | 70000 | 0.0002 | 0.487 |
| 0.0002 | 57.0 | 71250 | 0.0014 | 0.487 |
| 0.0002 | 58.0 | 72500 | 0.0003 | 0.487 |
| 0.0002 | 59.0 | 73750 | 0.0004 | 0.487 |
| 0.0002 | 60.0 | 75000 | 0.0006 | 0.487 |
| 0.0002 | 61.0 | 76250 | 0.0007 | 0.487 |
| 0.0001 | 62.0 | 77500 | 0.0004 | 0.487 |
| 0.0002 | 63.0 | 78750 | 0.0008 | 0.487 |
| 0.0001 | 64.0 | 80000 | 0.0006 | 0.487 |
| 0.0001 | 65.0 | 81250 | 0.0007 | 0.487 |
| 0.0001 | 66.0 | 82500 | 0.0006 | 0.487 |
| 0.0001 | 67.0 | 83750 | 0.0004 | 0.487 |
| 0.0001 | 68.0 | 85000 | 0.0004 | 0.487 |
| 0.0001 | 69.0 | 86250 | 0.0003 | 0.487 |
| 0.0031 | 70.0 | 87500 | 0.0032 | 0.487 |
| 0.0155 | 71.0 | 88750 | 0.0057 | 0.487 |
| 0.0112 | 72.0 | 90000 | 0.0066 | 0.487 |
| 0.0103 | 73.0 | 91250 | 0.0064 | 0.487 |
| 0.0086 | 74.0 | 92500 | 0.0072 | 0.487 |
| 0.0029 | 75.0 | 93750 | 0.0002 | 0.487 |
| 0.0009 | 76.0 | 95000 | 0.0004 | 0.487 |
| 0.0014 | 77.0 | 96250 | 0.0006 | 0.487 |
| 0.0014 | 78.0 | 97500 | 0.0006 | 0.487 |
| 0.0009 | 79.0 | 98750 | 0.0002 | 0.487 |
| 0.0014 | 80.0 | 100000 | 0.0003 | 0.487 |
| 0.0014 | 81.0 | 101250 | 0.0004 | 0.487 |
| 0.0009 | 82.0 | 102500 | 0.0001 | 0.487 |
| 0.0006 | 83.0 | 103750 | 0.0007 | 0.487 |
| 0.0004 | 84.0 | 105000 | 0.0005 | 0.487 |
| 0.0014 | 85.0 | 106250 | 0.0002 | 0.487 |
| 0.0009 | 86.0 | 107500 | 0.0005 | 0.487 |
| 0.0006 | 87.0 | 108750 | 0.0003 | 0.487 |
| 0.0004 | 88.0 | 110000 | 0.0004 | 0.487 |
| 0.0003 | 89.0 | 111250 | 0.0005 | 0.487 |
| 0.0001 | 90.0 | 112500 | 0.0004 | 0.487 |
| 0.0004 | 91.0 | 113750 | 0.0003 | 0.487 |
| 0.0001 | 92.0 | 115000 | 0.0003 | 0.487 |
| 0.0001 | 93.0 | 116250 | 0.0003 | 0.487 |
| 0.0056 | 94.0 | 117500 | 0.0053 | 0.487 |
| 0.0049 | 95.0 | 118750 | 0.0046 | 0.487 |
| 0.0036 | 96.0 | 120000 | 0.0042 | 0.487 |
| 0.0029 | 97.0 | 121250 | 0.0002 | 0.487 |
| 0.0021 | 98.0 | 122500 | 0.0003 | 0.487 |
| 0.0028 | 99.0 | 123750 | 0.0094 | 0.487 |
| 0.0038 | 100.0 | 125000 | 0.0074 | 0.487 |
| 0.0051 | 101.0 | 126250 | 0.0041 | 0.487 |
| 0.0046 | 102.0 | 127500 | 0.0042 | 0.487 |
| 0.0041 | 103.0 | 128750 | 0.0042 | 0.487 |
| 0.0026 | 104.0 | 130000 | 0.0023 | 0.487 |
| 0.0034 | 105.0 | 131250 | 0.0023 | 0.487 |
| 0.0041 | 106.0 | 132500 | 0.0022 | 0.487 |
| 0.0028 | 107.0 | 133750 | 0.0022 | 0.487 |
| 0.0038 | 108.0 | 135000 | 0.0022 | 0.487 |
| 0.0029 | 109.0 | 136250 | 0.0022 | 0.487 |
| 0.0026 | 110.0 | 137500 | 0.0021 | 0.487 |
| 0.0051 | 111.0 | 138750 | 0.0119 | 0.487 |
| 0.0305 | 112.0 | 140000 | 0.0091 | 0.487 |
| 0.0063 | 113.0 | 141250 | 0.0092 | 0.487 |
| 0.0073 | 114.0 | 142500 | 0.0092 | 0.487 |
| 0.008 | 115.0 | 143750 | 0.0090 | 0.487 |
| 0.0031 | 116.0 | 145000 | 0.0003 | 0.487 |
| 0.0101 | 117.0 | 146250 | 0.0148 | 0.487 |
| 0.0065 | 118.0 | 147500 | 0.0071 | 0.487 |
| 0.0042 | 119.0 | 148750 | 0.0008 | 0.487 |
| 0.0031 | 120.0 | 150000 | 0.0001 | 0.487 |
| 0.0021 | 121.0 | 151250 | 0.0011 | 0.487 |
| 0.0034 | 122.0 | 152500 | 0.0001 | 0.487 |
| 0.0014 | 123.0 | 153750 | 0.0001 | 0.487 |
| 0.0008 | 124.0 | 155000 | 0.0001 | 0.487 |
| 0.0013 | 125.0 | 156250 | 0.0001 | 0.487 |
| 0.0016 | 126.0 | 157500 | 0.0000 | 0.487 |
| 0.0022 | 127.0 | 158750 | 0.0002 | 0.487 |
| 0.0001 | 128.0 | 160000 | 0.0002 | 0.487 |
| 0.0001 | 129.0 | 161250 | 0.0000 | 0.487 |
| 0.0001 | 130.0 | 162500 | 0.0002 | 0.487 |
| 0.0001 | 131.0 | 163750 | 0.0001 | 0.487 |
| 0.0001 | 132.0 | 165000 | 0.0002 | 0.487 |
| 0.0008 | 133.0 | 166250 | 0.0001 | 0.487 |
| 0.0001 | 134.0 | 167500 | 0.0001 | 0.487 |
| 0.0001 | 135.0 | 168750 | 0.0001 | 0.487 |
| 0.0001 | 136.0 | 170000 | 0.0002 | 0.487 |
| 0.0001 | 137.0 | 171250 | 0.0001 | 0.487 |
| 0.0001 | 138.0 | 172500 | 0.0001 | 0.487 |
| 0.0001 | 139.0 | 173750 | 0.0001 | 0.487 |
| 0.0001 | 140.0 | 175000 | 0.0002 | 0.487 |
| 0.0001 | 141.0 | 176250 | 0.0001 | 0.487 |
| 0.0001 | 142.0 | 177500 | 0.0001 | 0.487 |
| 0.0001 | 143.0 | 178750 | 0.0001 | 0.487 |
| 0.0001 | 144.0 | 180000 | 0.0001 | 0.487 |
| 0.0001 | 145.0 | 181250 | 0.0000 | 0.487 |
| 0.0001 | 146.0 | 182500 | 0.0000 | 0.487 |
| 0.0001 | 147.0 | 183750 | 0.0000 | 0.487 |
| 0.0001 | 148.0 | 185000 | 0.0000 | 0.487 |
| 0.0001 | 149.0 | 186250 | 0.0001 | 0.487 |
| 0.0001 | 150.0 | 187500 | 0.0000 | 0.487 |
| 0.0001 | 151.0 | 188750 | 0.0000 | 0.487 |
| 0.0001 | 152.0 | 190000 | 0.0000 | 0.487 |
| 0.0001 | 153.0 | 191250 | 0.0000 | 0.487 |
| 0.0001 | 154.0 | 192500 | 0.0001 | 0.487 |
| 0.0001 | 155.0 | 193750 | 0.0001 | 0.487 |
| 0.0001 | 156.0 | 195000 | 0.0000 | 0.487 |
| 0.0001 | 157.0 | 196250 | 0.0001 | 0.487 |
| 0.0001 | 158.0 | 197500 | 0.0001 | 0.487 |
| 0.0001 | 159.0 | 198750 | 0.0001 | 0.487 |
| 0.0001 | 160.0 | 200000 | 0.0001 | 0.487 |
| 0.0001 | 161.0 | 201250 | 0.0001 | 0.487 |
| 0.0001 | 162.0 | 202500 | 0.0000 | 0.487 |
| 0.0001 | 163.0 | 203750 | 0.0001 | 0.487 |
| 0.0001 | 164.0 | 205000 | 0.0001 | 0.487 |
| 0.0001 | 165.0 | 206250 | 0.0001 | 0.487 |
| 0.0001 | 166.0 | 207500 | 0.0000 | 0.487 |
| 0.0001 | 167.0 | 208750 | 0.0000 | 0.487 |
| 0.0001 | 168.0 | 210000 | 0.0000 | 0.487 |
| 0.0001 | 169.0 | 211250 | 0.0000 | 0.487 |
| 0.0001 | 170.0 | 212500 | 0.0001 | 0.487 |
| 0.0001 | 171.0 | 213750 | 0.0001 | 0.487 |
| 0.0001 | 172.0 | 215000 | 0.0000 | 0.487 |
| 0.0001 | 173.0 | 216250 | 0.0001 | 0.487 |
| 0.0001 | 174.0 | 217500 | 0.0001 | 0.487 |
| 0.0001 | 175.0 | 218750 | 0.0000 | 0.487 |
| 0.0001 | 176.0 | 220000 | 0.0000 | 0.487 |
| 0.0001 | 177.0 | 221250 | 0.0001 | 0.487 |
| 0.0001 | 178.0 | 222500 | 0.0000 | 0.487 |
| 0.0001 | 179.0 | 223750 | 0.0001 | 0.487 |
| 0.0001 | 180.0 | 225000 | 0.0001 | 0.487 |
| 0.0001 | 181.0 | 226250 | 0.0000 | 0.487 |
| 0.0001 | 182.0 | 227500 | 0.0000 | 0.487 |
| 0.0001 | 183.0 | 228750 | 0.0000 | 0.487 |
| 0.0001 | 184.0 | 230000 | 0.0001 | 0.487 |
| 0.0001 | 185.0 | 231250 | 0.0000 | 0.487 |
| 0.0001 | 186.0 | 232500 | 0.0001 | 0.487 |
| 0.0001 | 187.0 | 233750 | 0.0001 | 0.487 |
| 0.0001 | 188.0 | 235000 | 0.0000 | 0.487 |
| 0.0001 | 189.0 | 236250 | 0.0000 | 0.487 |
| 0.0001 | 190.0 | 237500 | 0.0000 | 0.487 |
| 0.0001 | 191.0 | 238750 | 0.0001 | 0.487 |
| 0.0001 | 192.0 | 240000 | 0.0000 | 0.487 |
| 0.0001 | 193.0 | 241250 | 0.0000 | 0.487 |
| 0.0001 | 194.0 | 242500 | 0.0000 | 0.487 |
| 0.0001 | 195.0 | 243750 | 0.0001 | 0.487 |
| 0.0001 | 196.0 | 245000 | 0.0000 | 0.487 |
| 0.0001 | 197.0 | 246250 | 0.0000 | 0.487 |
| 0.0001 | 198.0 | 247500 | 0.0000 | 0.487 |
| 0.0001 | 199.0 | 248750 | 0.0001 | 0.487 |
| 0.0001 | 200.0 | 250000 | 0.0000 | 0.487 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
hdong0/Qwen2.5-Math-7B-Open-R1-GRPO_deepscaler_prompt1_acc_mu_8_constant_lr
|
hdong0
| 2025-08-12T02:23:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T17:15:54Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: Qwen2.5-Math-7B-Open-R1-GRPO_deepscaler_prompt1_acc_mu_8_constant_lr
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-7B-Open-R1-GRPO_deepscaler_prompt1_acc_mu_8_constant_lr
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-7B-Open-R1-GRPO_deepscaler_prompt1_acc_mu_8_constant_lr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1754963431
|
coelacanthxyz
| 2025-08-12T02:19:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T02:18:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF
|
mradermacher
| 2025-08-12T02:18:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:maidacundo/annie-lite-v0.2.4.1-qwen3-8b",
"base_model:quantized:maidacundo/annie-lite-v0.2.4.1-qwen3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T02:07:22Z |
---
base_model: maidacundo/annie-lite-v0.2.4.1-qwen3-8b
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/maidacundo/annie-lite-v0.2.4.1-qwen3-8b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#annie-lite-v0.2.4.1-qwen3-8b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
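As a minimal sketch, one quant from this repo can be run with `llama-cpp-python` (one of several llama.cpp-based runtimes; the filename below is one row from the quant table):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads the chosen quant from the Hub and loads it
llm = Llama.from_pretrained(
    repo_id="mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF",
    filename="annie-lite-v0.2.4.1-qwen3-8b.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```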
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.4.1-qwen3-8b-GGUF/resolve/main/annie-lite-v0.2.4.1-qwen3-8b.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
koloni/blockassist-bc-deadly_graceful_stingray_1754963455
|
koloni
| 2025-08-12T02:17:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T02:17:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Luth-0.6B-Instruct-bf16-mlx
|
nightmedia
| 2025-08-12T02:16:24Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"fr",
"en",
"dataset:kurakurai/luth-sft",
"base_model:kurakurai/Luth-0.6B-Instruct",
"base_model:finetune:kurakurai/Luth-0.6B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-12T02:02:08Z |
---
library_name: mlx
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model: kurakurai/Luth-0.6B-Instruct
pipeline_tag: text-generation
tags:
- mlx
---
# Luth-0.6B-Instruct-bf16-mlx
This model [Luth-0.6B-Instruct-bf16-mlx](https://huggingface.co/nightmedia/Luth-0.6B-Instruct-bf16-mlx) was
converted to MLX format from [kurakurai/Luth-0.6B-Instruct](https://huggingface.co/kurakurai/Luth-0.6B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Luth-0.6B-Instruct-bf16-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754964706
|
IvanJAjebu
| 2025-08-12T02:12:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T02:12:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
longhoang2112/whisper-small-fine-tuning-2steps-slu
|
longhoang2112
| 2025-08-12T02:12:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2025-08-12T02:12:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
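No base model or usage is documented here. A minimal loading sketch, assuming the adapter stores its base checkpoint in the usual `adapter_config.json`; the Whisper model class is inferred from the repo name and is an assumption:
```python
from peft import PeftConfig, PeftModel
from transformers import WhisperForConditionalGeneration  # assumption based on the repo name

adapter_id = "longhoang2112/whisper-small-fine-tuning-2steps-slu"
config = PeftConfig.from_pretrained(adapter_id)
# Load the base checkpoint recorded in the adapter config, then attach the adapter
base = WhisperForConditionalGeneration.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```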
|
remodlai/lexiq-3b-col-mm-embed
|
remodlai
| 2025-08-12T02:08:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"vidore",
"colpali",
"multimodal_embedding",
"multilingual_embedding",
"Text-to-Visual Document (TβVD) retrieval",
"visual-document-retrieval",
"en",
"it",
"fr",
"de",
"es",
"dataset:llamaindex/vdr-multilingual-train",
"dataset:nomic-ai/colpali_train_set_split_by_source",
"arxiv:2407.01449",
"arxiv:2406.11251",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"region:us"
] |
visual-document-retrieval
| 2025-08-12T02:08:03Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: peft
datasets:
- llamaindex/vdr-multilingual-train
- nomic-ai/colpali_train_set_split_by_source
language:
- en
- it
- fr
- de
- es
pipeline_tag: visual-document-retrieval
tags:
- vidore
- colpali
- multimodal_embedding
- multilingual_embedding
- Text-to-Visual Document (T→VD) retrieval
---
# ColNomic Embed Multimodal 3B: State-of-the-Art Visual Document Retrieval
`colnomic-embed-multimodal-3b` is a state-of-the-art multi-vector multimodal embedding model that excels at visual document retrieval tasks:
- **High Performance**: Achieves 61.2 NDCG@5 on Vidore-v2, outperforming all other models except ColNomic Embed Multimodal 7B
- **Unified Text-Image Encoding**: Directly encodes interleaved text and images without complex preprocessing
- **Advanced Architecture**: 3B parameter multimodal embedding model
- **Open-Weights**: Model weights available for research use
## Performance
| Model | Avg. | ESG Restaurant Human | Econ Macro Multi. | AXA Multi. | MIT Bio | ESG Restaurant Synth. | ESG Restaurant Synth. Multi. | MIT Bio Multi. | AXA | Econ. Macro |
|-------|------|----------------------|-------------------|------------|---------|----------------------|----------------------------|---------------|-----|------------|
| [ColNomic Embed Multimodal 7B](https://huggingface.co/nomic-ai/colnomic-embed-multimodal-7b)| 62.7 | 73.9 | 54.7 | 61.3 | 66.1 | 57.3 | 56.7 | 64.2 | 68.3 | 61.6 |
| **ColNomic Embed Multimodal 3B** | 61.2 | 65.8 | 55.4 | 61.0 | 63.5 | 56.6 | 57.2 | 62.5 | 68.8 | 60.2 |
| T-Systems ColQwen2.5-3B | 59.9 | 72.1 | 51.2 | 60.0 | 65.3 | 51.7 | 53.3 | 61.7 | 69.3 | 54.8 |
| [Nomic Embed Multimodal 7B](https://huggingface.co/nomic-ai/nomic-embed-multimodal-7b) | 59.7 | 65.7 | 57.7 | 59.3 | 64.0 | 49.2 | 51.9 | 61.2 | 66.3 | 63.1 |
| GME Qwen2 7B | 59.0 | 65.8 | 56.2 | 55.4 | 64.0 | 54.3 | 56.7 | 55.1 | 60.7 | 62.9 |
| [Nomic Embed Multimodal 3B](https://huggingface.co/nomic-ai/nomic-embed-multimodal-3b) | 58.8 | 59.8 | 57.5 | 58.8 | 62.5 | 49.4 | 49.4 | 58.6 | 69.6 | 63.5 |
| Llama Index vdr-2b-multi-v1 | 58.4 | 63.1 | 52.8 | 61.0 | 60.6 | 50.3 | 51.2 | 56.9 | 68.8 | 61.2 |
| Voyage Multimodal 3 | 55.0 | 56.1 | 55.0 | 59.5 | 56.4 | 47.2 | 46.2 | 51.5 | 64.1 | 58.8 |
## Getting Started
To use `colnomic-embed-multimodal-3b`, please install `colpali` from source
```bash
pip install git+https://github.com/illuin-tech/colpali.git
```
```python
import torch
from PIL import Image
from transformers.utils.import_utils import is_flash_attn_2_available
from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor
model_name = "nomic-ai/colnomic-embed-multimodal-3b"
model = ColQwen2_5.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
    attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()
processor = ColQwen2_5_Processor.from_pretrained(model_name)
# Your inputs
images = [
    Image.new("RGB", (128, 128), color="white"),
    Image.new("RGB", (64, 32), color="black"),
]
queries = [
    "What is the organizational structure for our R&D department?",
    "Can you provide a breakdown of last year's financial performance?",
]
# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)
# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
## Model Architecture
- **Total Parameters**: 3B
- **Training Approach**: Fine-tuned from Qwen2.5-VL 3B Instruct
- **Architecture Type**: Vision-Language Model with unified text and image input processing
- **Key Innovations**:
- Same-source sampling to create harder in-batch negatives
- Multi-vector output option for enhanced performance
## Integration with RAG Workflows
Nomic Embed Multimodal 3B integrates seamlessly with Retrieval Augmented Generation (RAG) workflows (a minimal retrieval sketch follows the list below):
1. **Direct Document Embedding**: Skip OCR and complex processing by directly embedding document page images
2. **Faster Processing**: Eliminate preprocessing steps for quicker indexing
3. **More Complete Information**: Capture both textual and visual cues in a single embedding
4. **Simple Implementation**: Use the same API for both text and images
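A minimal retrieval sketch under stated assumptions: `model` and `processor` are loaded as in Getting Started, and `page_images` is a list of PIL page renders:
```python
import torch

# Index: embed each document page once (multi-vector embeddings)
page_batch = processor.process_images(page_images).to(model.device)
with torch.no_grad():
    page_embeddings = model(**page_batch)

# Query: embed the question and rank pages for the RAG pipeline
query_batch = processor.process_queries(["What were Q3 operating costs?"]).to(model.device)
with torch.no_grad():
    query_embeddings = model(**query_batch)

scores = processor.score_multi_vector(query_embeddings, page_embeddings)  # [n_queries, n_pages]
top_pages = scores.argsort(dim=-1, descending=True)[0][:3]  # pass these pages to the generator
```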
## Recommended Use Cases
The model excels at handling real-world document retrieval scenarios that challenge traditional text-only systems:
- **Research Papers**: Capture equations, diagrams, and tables
- **Technical Documentation**: Encode code blocks, flowcharts, and screenshots
- **Product Catalogs**: Represent images, specifications, and pricing tables
- **Financial Reports**: Embed charts, graphs, and numerical data
- **Visually Rich Content**: Where layout and visual information are important
- **Multilingual Documents**: Where visual context provides important cues
## Training Details
ColNomic Embed Multimodal 3B was developed through several key innovations:
1. **Sampling From the Same Source**: Forcing sampling from the same dataset source creates harder in-batch negatives, preventing the model from learning dataset artifacts.
2. **Multi-Vector Configuration**: A multi-vector output configuration that achieves higher performance than the dense variant.
## Limitations
- Performance may vary when processing documents with unconventional layouts or unusual visual elements
- While it handles multiple languages, performance is strongest on English content
- Processing very large or complex documents may require dividing them into smaller chunks
- Performance on documents with handwriting or heavily stylized fonts may be reduced
## Join the Nomic Community
- Nomic Embed Ecosystem: [https://www.nomic.ai/embed](https://www.nomic.ai/embed)
- Website: [https://nomic.ai](https://nomic.ai)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
## Citation
If you find this model useful in your research or applications, please consider citing:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and CΓ©line Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
@misc{ma2024unifyingmultimodalretrievaldocument,
title={Unifying Multimodal Retrieval via Document Screenshot Embedding},
author={Xueguang Ma and Sheng-Chieh Lin and Minghan Li and Wenhu Chen and Jimmy Lin},
year={2024},
eprint={2406.11251},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2406.11251},
}
@misc{nomicembedmultimodal2025,
title={Nomic Embed Multimodal: Interleaved Text, Image, and Screenshots for Visual Document Retrieval},
author={Nomic Team},
year={2025},
publisher={Nomic AI},
url={https://nomic.ai/blog/posts/nomic-embed-multimodal},
}
```
|
PrParadoxy/Reinforce_2
|
PrParadoxy
| 2025-08-12T02:06:36Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-12T00:24:48Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.20 +/- 22.89
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zhangtaolab/tRNADetector
|
zhangtaolab
| 2025-08-12T02:05:30Z | 0 | 0 | null |
[
"safetensors",
"mamba",
"custom_code",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-08-11T15:00:31Z |
---
license: cc-by-nc-sa-4.0
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754964116
|
IvanJAjebu
| 2025-08-12T02:03:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T02:02:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/india-wiki-hin-1.7B-GGUF
|
mradermacher
| 2025-08-12T01:59:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:XformAI-india/india-wiki-hin-1.7B",
"base_model:quantized:XformAI-india/india-wiki-hin-1.7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T01:53:45Z |
---
base_model: XformAI-india/india-wiki-hin-1.7B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/XformAI-india/india-wiki-hin-1.7B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#india-wiki-hin-1.7B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
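For example, a single quant file can be fetched without cloning the whole repository (a minimal sketch using `huggingface_hub`; the chosen filename is one row from the table below):
```python
from huggingface_hub import hf_hub_download

# Download just the Q4_K_M quant to the local cache
gguf_path = hf_hub_download(
    repo_id="mradermacher/india-wiki-hin-1.7B-GGUF",
    filename="india-wiki-hin-1.7B.Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to your llama.cpp-based runtime
```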
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gajahgajah/blockassist-bc-singing_burrowing_chicken_1754963600
|
gajahgajah
| 2025-08-12T01:54:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing burrowing chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T01:54:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing burrowing chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF
|
mradermacher
| 2025-08-12T01:53:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k",
"base_model:quantized:AmberYifan/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T01:30:24Z |
---
base_model: AmberYifan/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k
language:
- en
library_name: transformers
model_name: Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AmberYifan/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF/resolve/main/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/PicoNosensoX-v1.1-GGUF
|
mradermacher
| 2025-08-12T01:53:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"dataset:aisquared/databricks-dolly-15k",
"base_model:Lominub44/PicoNosensoX-v1.1",
"base_model:quantized:Lominub44/PicoNosensoX-v1.1",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T01:49:37Z |
---
base_model: Lominub44/PicoNosensoX-v1.1
datasets:
- HuggingFaceTB/smollm-corpus
- aisquared/databricks-dolly-15k
language:
- en
library_name: transformers
license: cc-by-sa-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Lominub44/PicoNosensoX-v1.1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PicoNosensoX-v1.1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1.1-GGUF/resolve/main/PicoNosensoX-v1.1.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754963446
|
IvanJAjebu
| 2025-08-12T01:52:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T01:51:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754963134
|
afasdfdfadsf
| 2025-08-12T01:47:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T01:46:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_2000_influential-2
|
m-mulet
| 2025-08-12T01:45:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T01:45:45Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
myfi/parser_model_ner_3.45_checkpoint_300_lora
|
myfi
| 2025-08-12T01:42:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T01:31:40Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WafaaFraih/blip-image-captioning-base-blip2
|
WafaaFraih
| 2025-08-12T01:39:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"image-to-text",
"generated_from_trainer",
"base_model:Salesforce/blip-image-captioning-base",
"base_model:finetune:Salesforce/blip-image-captioning-base",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-11T23:20:06Z |
---
library_name: transformers
license: bsd-3-clause
base_model: Salesforce/blip-image-captioning-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: blip-image-captioning-base-blip2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip-image-captioning-base-blip2
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4501
- Wer: 0.8353
## Model description
More information needed
## Intended uses & limitations
More information needed
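A minimal captioning sketch, assuming the fine-tune kept the base model's processor configuration (`example.jpg` is a placeholder path):
```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "WafaaFraih/blip-image-captioning-base-blip2"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```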
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.1988 | 1.576 | 50 | 0.3600 | 0.8457 |
| 0.2346 | 3.128 | 100 | 0.3105 | 0.8388 |
| 0.1382 | 4.704 | 150 | 0.3111 | 0.8431 |
| 0.0779 | 6.256 | 200 | 0.3312 | 0.8388 |
| 0.0429 | 7.832 | 250 | 0.3430 | 0.8397 |
| 0.0248 | 9.384 | 300 | 0.3507 | 0.8448 |
| 0.0169 | 10.96 | 350 | 0.3602 | 0.8267 |
| 0.0113 | 12.512 | 400 | 0.3684 | 0.8448 |
| 0.0087 | 14.064 | 450 | 0.3737 | 0.8414 |
| 0.0059 | 15.64 | 500 | 0.3814 | 0.8422 |
| 0.0049 | 17.192 | 550 | 0.3762 | 0.8284 |
| 0.0036 | 18.768 | 600 | 0.3785 | 0.8388 |
| 0.0026 | 20.32 | 650 | 0.3805 | 0.8422 |
| 0.0023 | 21.896 | 700 | 0.3892 | 0.8414 |
| 0.0019 | 23.448 | 750 | 0.3901 | 0.8414 |
| 0.0016 | 25.0 | 800 | 0.3903 | 0.8371 |
| 0.0012 | 26.576 | 850 | 0.3999 | 0.8431 |
| 0.0009 | 28.128 | 900 | 0.4078 | 0.8457 |
| 0.0008 | 29.704 | 950 | 0.4049 | 0.8414 |
| 0.0008 | 31.256 | 1000 | 0.4063 | 0.8345 |
| 0.0005 | 32.832 | 1050 | 0.4133 | 0.8362 |
| 0.0004 | 34.384 | 1100 | 0.4173 | 0.8353 |
| 0.0003 | 35.96 | 1150 | 0.4238 | 0.8405 |
| 0.0003 | 37.512 | 1200 | 0.4254 | 0.8388 |
| 0.0002 | 39.064 | 1250 | 0.4263 | 0.8293 |
| 0.0001 | 40.64 | 1300 | 0.4326 | 0.8293 |
| 0.0001 | 42.192 | 1350 | 0.4376 | 0.8371 |
| 0.0001 | 43.768 | 1400 | 0.4391 | 0.8302 |
| 0.0 | 45.32 | 1450 | 0.4450 | 0.8388 |
| 0.0001 | 46.896 | 1500 | 0.4464 | 0.8328 |
| 0.0 | 48.448 | 1550 | 0.4488 | 0.8353 |
| 0.0 | 50.0 | 1600 | 0.4501 | 0.8353 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|