modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-13 18:26:42) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 558 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-13 18:25:20) | card (string, length 11 – 1.01M) |
---|---|---|---|---|---|---|---|---|---|
analogllm/analogseeker | analogllm | 2025-08-20T06:00:21Z | 10 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "analog-circuit-design", "conversational", "arxiv:2508.10409", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-07-30T03:32:13Z |
---
base_model: Qwen2.5-32B-Instruct
library_name: transformers
license: other
tags:
- llama-factory
- full
- generated_from_trainer
- analog-circuit-design
pipeline_tag: text-generation
model-index:
- name: AnalogSeeker (Qwen2.5-32B-Instruct_nsc-sft)
results: []
---
# AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design
This model, `AnalogSeeker_2025_07_10_3`, is a fine-tuned version of `Qwen2.5-32B-Instruct`. It was presented in the paper [AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design](https://huggingface.co/papers/2508.10409).
* **Project Page**: [https://huggingface.co/analogllm/analogseeker](https://huggingface.co/analogllm/analogseeker)
* **GitHub Repository**: [https://github.com/analogllm/AnalogSeeker](https://github.com/analogllm/AnalogSeeker)
## Model description
AnalogSeeker is an open-source foundation language model specifically developed for analog circuit design. Its primary objective is to integrate specialized domain knowledge and provide design assistance in this complex field. To address the inherent scarcity of data in analog circuit design, AnalogSeeker employs a unique corpus collection strategy: high-quality, accessible textbooks across relevant subfields are systematically curated and cleaned into a textual domain corpus.
The model introduces a granular domain knowledge distillation method where raw, unlabeled domain corpus is decomposed into typical, granular learning nodes. A multi-agent framework is then utilized to distill implicit knowledge embedded in unstructured text into detailed question-answer data pairs, complete with detailed reasoning processes. This yields a fine-grained, learnable dataset used for fine-tuning. AnalogSeeker explores and shares novel training methods, establishing a fine-tuning-centric training paradigm and implementing a neighborhood self-constrained supervised fine-tuning algorithm to enhance training outcomes by constraining the perturbation magnitude between the model's output distributions.
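For readers curious about the neighborhood self-constrained objective, the sketch below shows one plausible form of such a loss. This is an assumption-laden illustration (the `beta` weight, the KL direction, and the use of a frozen reference copy are guesses), not the paper's exact algorithm.
```python
import torch
import torch.nn.functional as F

def nsc_sft_loss(model, ref_model, input_ids, attention_mask, labels, beta=0.1):
    """Cross-entropy SFT loss plus a KL penalty that constrains how far the
    fine-tuned model's output distribution drifts from a frozen reference
    copy of itself (illustrative; not the authors' exact implementation)."""
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    with torch.no_grad():  # the reference model is frozen
        ref_logits = ref_model(input_ids=input_ids, attention_mask=attention_mask).logits
    # KL(current || reference): penalizes large perturbations of the output distribution
    kl = F.kl_div(
        F.log_softmax(ref_logits, dim=-1),   # input: reference log-probs
        F.log_softmax(out.logits, dim=-1),   # target: current log-probs
        log_target=True,
        reduction="batchmean",
    )
    return out.loss + beta * kl
```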
## Intended uses & limitations
**Intended Uses:**
AnalogSeeker is intended for research use in the field of analog circuit design. It aims to:
* Integrate domain knowledge for analog circuits.
* Provide design assistance and answer domain-specific questions.
* Support tasks such as operational amplifier design.
* Serve as a foundation for further research and development in analog circuit LLMs.
**Limitations:**
While AnalogSeeker demonstrates strong performance on analog circuit knowledge evaluation benchmarks, it is specialized for this domain. Its applicability and performance in other, unrelated domains may be limited. Users should be aware that, like all language models, it may occasionally generate incorrect or nonsensical information, especially for highly novel or unrepresented concepts within its training data.
## Training and evaluation data
**Training Data:**
The model was trained on a meticulously collected corpus based on the domain knowledge framework of analog circuits. This corpus consists of high-quality, accessible textbooks across relevant subfields, systematically curated and cleaned. A granular domain knowledge distillation method was applied, where raw text was decomposed into learning nodes, and a multi-agent framework distilled implicit knowledge into question-answer data pairs with detailed reasoning for fine-tuning.
**Evaluation Data and Performance:**
AnalogSeeker was evaluated on AMSBench-TQA, the analog circuit knowledge evaluation benchmark. It achieved **85.04% accuracy**, a significant **15.67-percentage-point improvement** over the original Qwen2.5-32B-Instruct model, demonstrating performance competitive with mainstream commercial models.
## Sample Usage
You can use this model with the Hugging Face `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "analogllm/AnalogSeeker_2025_07_10_3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Example chat interaction (Qwen2.5 Instruct format)
messages = [
    {"role": "user", "content": "What is the primary function of a common-emitter amplifier in analog circuits?"}
]

# Apply the chat template and prepare inputs
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors='pt').to(model.device)

# Configure generation parameters
generation_config = GenerationConfig(
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    repetition_penalty=1.05,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|im_end|>")]  # Ensure it stops correctly
)

# Generate response
outputs = model.generate(
    inputs=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    generation_config=generation_config
)

# Decode and print the response
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)

# Another example: design assistance
messages_design = [
    {"role": "user", "content": "Explain the key considerations for designing a stable feedback amplifier."}
]
text_design = tokenizer.apply_chat_template(
    messages_design,
    tokenize=False,
    add_generation_prompt=True
)
inputs_design = tokenizer(text_design, return_tensors='pt').to(model.device)
outputs_design = model.generate(
    inputs=inputs_design.input_ids,
    attention_mask=inputs_design.attention_mask,
    generation_config=generation_config
)
response_design = tokenizer.decode(outputs_design[0][inputs_design.input_ids.shape[1]:], skip_special_tokens=True)
print(response_design)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
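For reference, the total train batch size reported above follows directly from the per-device batch size, the number of devices, and the gradient accumulation steps:
```python
# Effective (total) train batch size implied by the settings above
per_device_batch = 1
num_devices = 8
grad_accum_steps = 8
assert per_device_batch * num_devices * grad_accum_steps == 64
```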
### Training results
```json
{
  "epoch": 1.0,
  "num_input_tokens_seen": 113180672,
  "total_flos": 759612479373312.0,
  "train_loss": 1.1406613362056237,
  "train_runtime": 17617.7573,
  "train_samples_per_second": 0.784,
  "train_steps_per_second": 0.012
}
```
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
## Citation
If you find AnalogSeeker useful in your research, please consider citing the original paper:
```bibtex
@article{analogseeker2025,
  title={AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design},
  author={AnalogSeeker Team},
  journal={arXiv preprint arXiv:2508.10409},
  year={2025},
  url={https://huggingface.co/papers/2508.10409},
}
```
|
usmanalam82/Gemma_2b_LoRA_adaptors_v2 | usmanalam82 | 2025-08-20T05:59:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "gemma2", "trl", "en", "base_model:unsloth/gemma-2-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-20T05:59:33Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** usmanalam82
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Bring2It2On/gorani-9B-4bit | Bring2It2On | 2025-08-20T05:58:50Z | 1 | 0 | null | ["safetensors", "gemma2", "ko", "dataset:Bring2It2On/gorani_dataset", "base_model:unsloth/gemma-2-9b-it-bnb-4bit", "base_model:quantized:unsloth/gemma-2-9b-it-bnb-4bit", "4-bit", "bitsandbytes", "region:us"] | null | 2025-02-17T01:49:14Z |
---
datasets:
- Bring2It2On/gorani_dataset
language:
- ko
metrics:
- bertscore
- comet
base_model:
- unsloth/gemma-2-9b-it-bnb-4bit
---
# Gorani Model Card
## Introduction
This is a translation model. **gorani-9B** was created by fine-tuning **unsloth/gemma-2-9b-it-bnb-4bit** on a mixture of Korean, English, and Japanese language data to produce accurate translations of native Korean expressions.
gorani currently supports translation between **Korean, English, and Japanese** only.
### Model Information
- **Developer**: Bring2It2On
- **Model type**: **gorani-9B**, a **9B**-parameter model based on **Gemma**
- **Supported languages**: Korean, English, Japanese
- **License**: **Gemma**
## Training Hyperparameters
- **per_device_train_batch_size**: 8
- **gradient_accumulation_steps**: 1
- **warmup_steps**: 5
- **learning_rate**: 2e-4
- **fp16**: `not is_bfloat16_supported()`
- **num_train_epochs**: 3
- **weight_decay**: 0.01
- **lr_scheduler_type**: "linear"
## 📊 Translation Evaluation Results
| Metric | gemma-2-9B | Gorani |
|--------------------------|-------------|---------|
| **BERTScore (Precision)** | 0.8152 | 0.9232 |
| **BERTScore (Recall)** | 0.8010 | 0.9266 |
| **BERTScore (F1 Score)** | 0.8077 | 0.9246 |
| **COMET Score** | 0.8180 | 0.8389 |
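Scores of this kind can be computed with the `bert-score` package; the snippet below is a minimal sketch (the example sentences are made up, and this is not necessarily the exact evaluation script used here):
```python
from bert_score import score

# Hypothetical Korean candidate translations and references
cands = ["고라니는 한국에 서식하는 동물입니다."]
refs = ["고라니는 한국에 사는 동물이다."]

# lang="ko" selects a backbone suitable for Korean text
P, R, F1 = score(cands, refs, lang="ko")
print(f"Precision={P.mean():.4f}, Recall={R.mean():.4f}, F1={F1.mean():.4f}")
```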
## Training Data
[Dataset link](https://huggingface.co/datasets/Bring2It2On/gorani_dataset)
## Training Performance Comparison
*(figure: training performance comparison; image link not recoverable)*
## Training Results
*(figure: training results; image link not recoverable)*
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755667872 | katanyasekolah | 2025-08-20T05:58:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:58:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
usmanalam82/Gemma_2b_FineTuned | usmanalam82 | 2025-08-20T05:57:07Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma2", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-2b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T11:10:53Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** usmanalam82
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
virtualplay/usg-insurance-chatbot | virtualplay | 2025-08-20T05:56:15Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T05:55:23Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoRID-Math/MATH-Mistral-7B-DR | LoRID-Math | 2025-08-20T05:53:00Z | 0 | 0 | peft | ["peft", "safetensors", "math", "reasoning", "text-generation", "conversational", "en", "dataset:LoRID-Math/MATH", "arxiv:2508.13037", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us"] | text-generation | 2025-08-19T15:47:11Z |
---
license: apache-2.0
datasets:
- LoRID-Math/MATH
language:
- en
metrics:
- accuracy
base_model:
- mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
library_name: peft
tags:
- math
- reasoning
---
# LoRID: A Reasoning Distillation Method via Multi-LoRA Interaction
📄 [Paper](https://arxiv.org/abs/2508.13037) • 💻 [Code](https://github.com/Xinhe-Li/LoRID) • 🤗 [HF Repo](https://huggingface.co/LoRID-Math)
## Abstract
The models for "[Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction](https://arxiv.org/abs/2508.13037)" [IJCAI 2025].
## Key Contributions
- We focus on the mathematical reasoning distillation task and propose a novel method, **LoRID**, which draws inspiration from how human beings teach and learn.
- We introduce knowledge during data augmentation and propose multi-LoRA interaction during model distillation, which improves the student's reasoning abilities.
- Experimental results show that with the interaction between System 1 and System 2, **LoRID** outperforms previous state-of-the-art approaches and can be easily and effectively integrated into any Chain-of-Thought distillation method.
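Since these weights are a PEFT LoRA adapter on `mistralai/Mistral-7B-v0.1`, they can be loaded roughly as follows. This is a minimal sketch: the prompt format and generation settings are illustrative assumptions, not the repo's documented usage.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "LoRID-Math/MATH-Mistral-7B-DR")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Problem: If 3x + 5 = 20, what is x?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```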
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@misc{li2025largemodelsteachstudent,
title={Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction},
author={Xinhe Li and Jiajun Liu and Peng Wang},
year={2025},
eprint={2508.13037},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.13037},
}
```
|
truong1301/qwen3_reranker_task7_3 | truong1301 | 2025-08-20T05:52:17Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-19T16:45:42Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** truong1301
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755668915 | liukevin666 | 2025-08-20T05:52:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:49:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755667460 | manusiaperahu2012 | 2025-08-20T05:52:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:51:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LoRID-Math/MATH-Mistral-7B-KG | LoRID-Math | 2025-08-20T05:51:45Z | 0 | 0 | peft | ["peft", "safetensors", "math", "reasoning", "text-generation", "conversational", "en", "dataset:LoRID-Math/MATH", "arxiv:2508.13037", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us"] | text-generation | 2025-08-19T15:46:49Z |
---
license: apache-2.0
datasets:
- LoRID-Math/MATH
language:
- en
metrics:
- accuracy
base_model:
- mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
library_name: peft
tags:
- math
- reasoning
---
# LoRID: A Reasoning Distillation Method via Multi-LoRA Interaction
📄 [Paper](https://arxiv.org/abs/2508.13037) • 💻 [Code](https://github.com/Xinhe-Li/LoRID) • 🤗 [HF Repo](https://huggingface.co/LoRID-Math)
## Abstract
The models for "[Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction](https://arxiv.org/abs/2508.13037)" [IJCAI 2025].
## Key Contributions
- We focus on the mathematical reasoning distillation task and propose a novel method, **LoRID**, which draws inspiration from how human beings teach and learn.
- We introduce knowledge during data augmentation and propose multi-LoRA interaction during model distillation, which improves the student's reasoning abilities.
- Experimental results show that with the interaction between System 1 and System 2, **LoRID** outperforms previous state-of-the-art approaches and can be easily and effectively integrated into any Chain-of-Thought distillation method.
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@misc{li2025largemodelsteachstudent,
title={Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction},
author={Xinhe Li and Jiajun Liu and Peng Wang},
year={2025},
eprint={2508.13037},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.13037},
}
```
|
zhangqf2001/test1 | zhangqf2001 | 2025-08-20T05:51:44Z | 0 | 0 | null | ["safetensors", "qwen2", "license:apache-2.0", "region:us"] | null | 2025-08-20T02:14:12Z |
---
license: apache-2.0
---
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755667414 | thanobidex | 2025-08-20T05:49:16Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:49:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LoRID-Math/MATH-LLaMA-2-7B-KG | LoRID-Math | 2025-08-20T05:48:24Z | 0 | 0 | peft | ["peft", "safetensors", "math", "reasoning", "text-generation", "conversational", "en", "dataset:LoRID-Math/MATH", "arxiv:2508.13037", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | text-generation | 2025-08-19T15:41:14Z |
---
license: llama2
datasets:
- LoRID-Math/MATH
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
library_name: peft
tags:
- math
- reasoning
---
# LoRID: A Reasoning Distillation Method via Multi-LoRA Interaction
📄 [Paper](https://arxiv.org/abs/2508.13037) • 💻 [Code](https://github.com/Xinhe-Li/LoRID) • 🤗 [HF Repo](https://huggingface.co/LoRID-Math)
## Abstract
The models for "[Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction](https://arxiv.org/abs/2508.13037)" [IJCAI 2025].
## Key Contributions
- We focus on the mathematical reasoning distillation task and propose a novel method, **LoRID**, which draws inspiration from how human beings teach and learn.
- We introduce knowledge during data augmentation and propose multi-LoRA interaction during model distillation, which improves the student's reasoning abilities.
- Experimental results show that with the interaction between System 1 and System 2, **LoRID** outperforms previous state-of-the-art approaches and can be easily and effectively integrated into any Chain-of-Thought distillation method.
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@misc{li2025largemodelsteachstudent,
title={Can Large Models Teach Student Models to Solve Mathematical Problems Like Human Beings? A Reasoning Distillation Method via Multi-LoRA Interaction},
author={Xinhe Li and Jiajun Liu and Peng Wang},
year={2025},
eprint={2508.13037},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.13037},
}
```
|
aleebaster/blockassist-bc-sly_eager_boar_1755667218 | aleebaster | 2025-08-20T05:45:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:45:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
madmage/ppo-SnowballTarget2b | madmage | 2025-08-20T05:44:16Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2025-08-20T05:44:13Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: madmage/ppo-SnowballTarget2b
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BootesVoid/cmeibswwz0qv4rts8fh1dqy3j_cmeickdoj0qwjrts8t7w0a5jv | BootesVoid | 2025-08-20T05:42:25Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-08-20T05:42:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ISLA
---
# Cmeibswwz0Qv4Rts8Fh1Dqy3J_Cmeickdoj0Qwjrts8T7W0A5Jv
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ISLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "ISLA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmeibswwz0qv4rts8fh1dqy3j_cmeickdoj0qwjrts8t7w0a5jv/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmeibswwz0qv4rts8fh1dqy3j_cmeickdoj0qwjrts8t7w0a5jv', weight_name='lora.safetensors')
image = pipeline('ISLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmeibswwz0qv4rts8fh1dqy3j_cmeickdoj0qwjrts8t7w0a5jv/discussions) to add images that show off what you've made with this LoRA.
|
AnonymousCS/xlmr_immigration_combo13_4 | AnonymousCS | 2025-08-20T05:42:08Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T05:36:40Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo13_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo13_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3095
- Accuracy: 0.9046
- 1-f1: 0.8596
- 1-recall: 0.8767
- 1-precision: 0.8432
- Balanced Acc: 0.8976
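As a quick way to try the checkpoint, it can be loaded with the standard `text-classification` pipeline. A minimal sketch follows (the example sentence is made up, and the label mapping is not documented in this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo13_4")
# Output labels (e.g., LABEL_0 / LABEL_1) are not documented in this card
print(clf("Immigration levels should be decided by parliament."))
```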
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6505 | 1.0 | 22 | 0.6103 | 0.6799 | 0.1417 | 0.0793 | 0.6667 | 0.5297 |
| 0.4792 | 2.0 | 44 | 0.4343 | 0.8767 | 0.8065 | 0.7709 | 0.8454 | 0.8502 |
| 0.3571 | 3.0 | 66 | 0.3284 | 0.8899 | 0.8210 | 0.7577 | 0.8958 | 0.8568 |
| 0.2946 | 4.0 | 88 | 0.2830 | 0.8987 | 0.8456 | 0.8326 | 0.8591 | 0.8822 |
| 0.2649 | 5.0 | 110 | 0.2629 | 0.9016 | 0.8508 | 0.8414 | 0.8604 | 0.8866 |
| 0.214 | 6.0 | 132 | 0.2772 | 0.9046 | 0.8441 | 0.7753 | 0.9263 | 0.8722 |
| 0.17 | 7.0 | 154 | 0.3095 | 0.9046 | 0.8596 | 0.8767 | 0.8432 | 0.8976 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
crystalline7/432155 | crystalline7 | 2025-08-20T05:40:42Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T05:40:38Z |
[View on Civ Archive](https://civarchive.com/models/045?modelVersionId=055)
|
launchpd3/blockassist-bc-polished_foxy_stingray_1755668266 | launchpd3 | 2025-08-20T05:39:23Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "polished foxy stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:39:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- polished foxy stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huin2r/Qwen3-Embedding-0.6B-Gensyn-Swarm-chattering_robust_cobra | huin2r | 2025-08-20T05:39:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am chattering_robust_cobra", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T05:38:59Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am chattering_robust_cobra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
XX-Zarina-anjoulie-viral-video-Clip/full.New.18.videos.Zarina.anjoulie.Viral.Video.link.Official.Tutorial | XX-Zarina-anjoulie-viral-video-Clip | 2025-08-20T05:38:41Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T05:38:05Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
wazbot/llm_test | wazbot | 2025-08-20T05:36:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T05:31:30Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
electroglyph/gemma-3-4b-it-unslop-GRPO-v2 | electroglyph | 2025-08-20T05:35:57Z | 285 | 0 | transformers | ["transformers", "safetensors", "gguf", "gemma3", "image-text-to-text", "conversational", "base_model:google/gemma-3-4b-it", "base_model:quantized:google/gemma-3-4b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us", "imatrix"] | image-text-to-text | 2025-08-05T08:45:37Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
base_model: google/gemma-3-4b-it
---
# Gemma 3 4b unslop experiment v2
An unslop finetune of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it)
### Updates / Observations
An updated version of this model is here: [v3](https://huggingface.co/electroglyph/gemma-3-4b-it-unslop-GRPO-v3)
---
I've received some excellent feedback.
Some usage notes: low temp recommended. My training technique uses high temp to try to hit slop edge cases, but I think I ended up baking in some trippiness by accident.
Overall I'm starting to like this model. I'm going to adjust things a little bit for my next attempt and bring it back down to earth a bit, but overall it's still creative and less AI-like in a lot of ways.
### Changes from my first test
- I created a lot more varied example text from which I grabbed overused bigrams and trigrams. It's now 60MB of junk...I'm starting to dream about em dashes in the rain.
- I completely re-did my datasets with lots of different prompt styles
- Slop examples now number around 6000 in my training script. Lots of bigrams are duplicated in the trigrams, that's mostly a feature and not a bug
- My 4 comma regex rule is activated about 80% thru training. First time around it was active the whole time and made the model output much shorter sentences. I'm trying to achieve a better balance this time
- Trained on about double the amount of tokens
- Model is still a bit underfit. I feel like I'm approaching the brain damage zone so I'm being cautious
- I've uploaded a UD-Q4_K_XL GGUF with settings that I grabbed from Unsloth's quant using my lil utility: [quant_clone](https://github.com/electroglyph/quant_clone)
### Training technique:
I have a pretty complex reward system, so parts are activated in 3 separate stages.
I generated lots of sample text and then sorted all bigrams and trigrams by frequency.
I added some of these to the reward function and penalized their use.
I also added some regex filters for various things.
If the prompt doesn't include "rain", but model output includes rain, it gets penalized.
Same thing for "air". Gemma 3 LOVES to talk about rain and how the air tastes (or clings, etc.)... no more.
Many of my training prompts include a word count for the model to output. Low deviation is rewarded, the opposite is penalized.
Halfway through training I activated lexical diversity comparison. It penalizes MTLD < 100 and gives increasing rewards up to 120.
About 80% through training I enabled the 4+ comma per sentence regex which penalizes complex phrases.
There's a callback for early stopping if reward stays high, but it didn't kick in this run.
This was trained on ~30 million tokens on a single 3090. I'm sharing my code so people can try their own finetuning runs.
training code: [train.py](./train.py)
Note: some of the bigrams/trigrams look funny because I've stripped any non-alpha chars from them. If you wanna use em you'll have to process your text the same way I did.
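To make the reward shaping above concrete, here is a minimal sketch of a slop-penalty reward in that spirit; the phrase list, weights, and regex are illustrative stand-ins, not the actual rules in [train.py](./train.py):
```python
import re

# Illustrative overused-n-gram list; the real lists were mined from 60MB of sample text
SLOP_NGRAMS = {"testament to", "sense of wonder", "voice barely above"}
# Matches sentences containing 4+ commas (the "complex phrase" penalty)
FOUR_COMMAS = re.compile(r"(?:[^.!?]*,){4,}[^.!?]*[.!?]")

def reward(prompt: str, completion: str) -> float:
    text = completion.lower()
    score = 0.0
    score -= sum(text.count(ng) for ng in SLOP_NGRAMS)   # penalize overused n-grams
    for word in ("rain", "air"):                         # off-prompt rain/air penalty
        if re.search(rf"\b{word}\b", text) and not re.search(rf"\b{word}\b", prompt.lower()):
            score -= 1.0
    score -= len(FOUR_COMMAS.findall(completion))        # penalize 4+ comma sentences
    return score
```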
|
koloni/blockassist-bc-deadly_graceful_stingray_1755666561 | koloni | 2025-08-20T05:35:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T05:34:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TheOneWhoWill/makeshift-qwen2 | TheOneWhoWill | 2025-08-20T05:34:32Z | 0 | 1 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:allenai/c4", "dataset:databricks/databricks-dolly-15k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T03:30:14Z |
---
license: apache-2.0
datasets:
- allenai/c4
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
tags:
- qwen2
- transformers
- text-generation
---
# Makeshift Qwen2
## Introduction
Recently I've been interested in LLMs and wanted to train my own from scratch using the Qwen2 architecture provided through the Hugging Face transformers library. It was trained locally on my personal laptop and is not powerful enough to be useful in any way, but it can respond to simple queries. I would recommend using a better-trained lightweight model instead of this one: even when your queries are explicit, it often hallucinates data such as fictional U.S. Presidents, or starts ranting about Chicago when told "Hey". The only advantage I can point out is its small size, weighing in at only 203 MB.
## Model Details
- **Model Name:** Makeshift LLM
- **Architecture:** Qwen2-based
- **Context:** 1024 Tokens
- **Vocab Size:** 32,000 tokens
- **Qwen2 Specific:** Hidden size of 768, 6 layers, 6 heads
## Training Details
- **GPU:** NVIDIA GeForce RTX 4070 Laptop GPU
- **CUDA:** used during pre-training and fine-tuning
- **VRAM:** 8 GB
A 28.4 GB subset of the [AllenAI C4 English](https://huggingface.co/datasets/allenai/c4) dataset was used for pre-training as well as for generating the tokenizer. However, the model was only trained up to an epoch of 0.77 (77% complete) because the loss was very stable at 3.5, and I didn't see any reason to continue training. Pre-training took about 18.5 hours with the GPU overclocked to its maximum capacity. Post-training involved 6 epochs of [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) formatted in ChatML with 50 random possible system prompts.
## How to use
Below is a simple Python script you can use. The model can be loaded directly through the transformers library, but you can also change the model path to point to a local directory containing the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_path = "TheOneWhoWill/makeshift-qwen2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto"
)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Always provide clear, accurate, and concise answers."}
]

while True:
    user_input = input("User: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Exiting the chat.")
        break
    messages.append({"role": "user", "content": user_input})

    # Generate and print
    response = pipe(
        messages,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_k=50,
        top_p=0.95
    )
    response = response[0]['generated_text'][-1]["content"]
    messages.append({"role": "assistant", "content": response})
    print("Assistant:", response)
```
|
AnonymousCS/xlmr_immigration_combo13_3 | AnonymousCS | 2025-08-20T05:33:52Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T05:31:10Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo13_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo13_3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2019
- Accuracy: 0.9398
- 1-f1: 0.9070
- 1-recall: 0.8811
- 1-precision: 0.9346
- Balanced Acc: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1933 | 1.0 | 22 | 0.1802 | 0.9383 | 0.9067 | 0.8987 | 0.9148 | 0.9284 |
| 0.1779 | 2.0 | 44 | 0.2152 | 0.9325 | 0.8991 | 0.9031 | 0.8952 | 0.9251 |
| 0.0877 | 3.0 | 66 | 0.2019 | 0.9398 | 0.9070 | 0.8811 | 0.9346 | 0.9251 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
XX-Zarina-anjoulie-viral-video-Clip/New.full.videos.Zarina.anjoulie.Viral.Video.Official.Tutorial | XX-Zarina-anjoulie-viral-video-Clip | 2025-08-20T05:33:45Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T05:33:20Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
madmage/ppo-SnowballTarget | madmage | 2025-08-20T05:33:27Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2025-08-20T05:33:23Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: madmage/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👍
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755665740
|
katanyasekolah
| 2025-08-20T05:23:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T05:23:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andrewtim-mats/canary2_emojis
|
andrewtim-mats
| 2025-08-20T05:23:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:nvidia/Llama-3_3-Nemotron-Super-49B-v1",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:nvidia/Llama-3_3-Nemotron-Super-49B-v1",
"region:us"
] |
text-generation
| 2025-08-20T05:22:04Z |
---
base_model: nvidia/Llama-3_3-Nemotron-Super-49B-v1
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:nvidia/Llama-3_3-Nemotron-Super-49B-v1
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
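No official snippet is provided yet; the following is a minimal sketch based on the adapter metadata above (base model and adapter id are taken from this card; the dtype and device settings are assumptions):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "nvidia/Llama-3_3-Nemotron-Super-49B-v1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,   # assumption; pick what fits your hardware
    device_map="auto",
    trust_remote_code=True,       # the Nemotron base ships custom modeling code
)
# attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "andrewtim-mats/canary2_emojis")
```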
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
wanda777/qwen2.5-7b-instruct-trl-sft
|
wanda777
| 2025-08-20T05:22:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T12:00:22Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-instruct-trl-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-instruct-trl-sft
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wanda777/qwen2.5-7b-instruct-trl-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wandavision77776-infowars/huggingface/runs/vzmydrmf)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.56.0.dev0
- Pytorch: 2.5.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
chainway9/blockassist-bc-untamed_quick_eel_1755665653
|
chainway9
| 2025-08-20T05:21:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T05:21:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oradyn/LogicFlow-Mistral-Small-24B-Reasoning-Q5_K_M-GGUF
|
oradyn
| 2025-08-20T05:18:26Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"reasoning",
"fine-tuned",
"synthetic-thinking",
"math",
"science",
"code",
"puzzles",
"lora",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:open-thoughts/OpenThoughts-114k",
"base_model:RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning",
"base_model:adapter:RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-20T05:17:19Z |
---
license: apache-2.0
base_model: RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning
tags:
- mistral
- reasoning
- fine-tuned
- synthetic-thinking
- math
- science
- code
- puzzles
- lora
- llama-cpp
- gguf-my-repo
library_name: transformers
pipeline_tag: text-generation
datasets:
- open-thoughts/OpenThoughts-114k
language:
- en
---
# oradyn/LogicFlow-Mistral-Small-24B-Reasoning-Q5_K_M-GGUF
This model was converted to GGUF format from [`RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning`](https://huggingface.co/RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/RekklesAI/LogicFlow-Mistral-Small-24B-Reasoning) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo oradyn/LogicFlow-Mistral-Small-24B-Reasoning-Q5_K_M-GGUF --hf-file logicflow-mistral-small-24b-reasoning-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo oradyn/LogicFlow-Mistral-Small-24B-Reasoning-Q5_K_M-GGUF --hf-file logicflow-mistral-small-24b-reasoning-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo oradyn/LogicFlow-Mistral-Small-24B-Reasoning-Q5_K_M-GGUF --hf-file logicflow-mistral-small-24b-reasoning-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo oradyn/LogicFlow-Mistral-Small-24B-Reasoning-Q5_K_M-GGUF --hf-file logicflow-mistral-small-24b-reasoning-q5_k_m.gguf -c 2048
```
|
Locutusque/liberalis-cogitator-llama-3.1-8b-dpo
|
Locutusque
| 2025-08-20T05:18:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"uncensored",
"roleplay",
"storytelling",
"dpo",
"llama-3.1",
"8b",
"liberalis-cogitator",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T13:00:37Z |
---
language:
- en
library_name: transformers
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
tags:
- uncensored
- roleplay
- storytelling
- dpo
- llama-3.1
- 8b
- liberalis-cogitator
model_name: liberalis-cogitator-llama-3.1-8b-dpo
---
# liberalis-cogitator-llama-3.1-8b – *The Free Thinker*

> *"Thought, unbound, is the only true frontier."*
**liberalis-cogitator-llama-3.1-8b** is not just a machine for words; it is a forge for ideas. With **8 billion parameters**, trained with a custom **Direct Preference Optimization (DPO)** algorithm on a dataset of **16,000 preference pairs** and an SFT dataset spanning **~450,000 conversations, problems, and stories**, this model embraces the philosophy that thought should wander without leash or muzzle.
During DPO fine-tuning, the context window was scaled to 65,536 tokens, giving this model long-conversation capabilities.
Its name, *liberalis cogitator*, whispers in Latin: *a thinker who is free*. Not merely free as in "without cost," but free as in **without walls**.
---
## What It Can Do
- **Contemplate deeply**: STEM puzzles, computer science challenges, and logic mazes are its playground.
- **Imagine vividly**: roleplay, storytelling, and worldbuilding with persistence and personality.
- **Listen empathetically**: inspired by patient–psychologist and crisis-intervention style dialogues.
- **Think without filter**: it will follow ideas wherever they lead, without retreating from complexity.
---
## The Mind's Curriculum
The specialized dataset included:
- Rigorous STEM and programming challenges.
- Anti-repetition and anti-cliché creative writing corpora.
- Roleplay transcripts and long-form imaginative exchanges.
- Synthetic but authentic patient–therapist and conversational data.
- Preference-tuned DPO pairs designed to reward **clarity, creativity, and freedom of expression**.
---
## Warnings From the Maker
Like all free thinkers, this model:
- May be brilliantly insightful, or confidently wrong.
- Will sometimes speak in ways that are bold, controversial, or unusual.
- Does not know the present date or real-time events.
- Does not self-censor; **your judgement is the only compass**.
- May generate **NSFW or sensitive material**, depending on prompts.
---
## Invocation
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/liberalis-cogitator-llama-3.1-8b-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "Write a short dialogue between Socrates and Ada Lovelace on the ethics of artificial intelligence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=400)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Closing Thought
If thought is a river, this model is the current: not deciding where you go, but carrying you into waters you might never have dared to sail.
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755665517
|
mang3dd
| 2025-08-20T05:17:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T05:17:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vanbitcase/7b-150r-qwen-vl
|
Vanbitcase
| 2025-08-20T05:11:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T05:10:53Z |
---
base_model: unsloth/qwen2-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Vanbitcase
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-7b-instruct-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755665101
|
lisaozill03
| 2025-08-20T05:09:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T05:09:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo12_3
|
AnonymousCS
| 2025-08-20T05:03:51Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T05:00:39Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo12_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo12_3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2538
- Accuracy: 0.9192
- 1-f1: 0.8675
- 1-recall: 0.7930
- 1-precision: 0.9574
- Balanced Acc: 0.8877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1615 | 1.0 | 22 | 0.2416 | 0.9222 | 0.8729 | 0.8018 | 0.9579 | 0.8921 |
| 0.2135 | 2.0 | 44 | 0.2575 | 0.9134 | 0.8709 | 0.8767 | 0.8652 | 0.9042 |
| 0.1649 | 3.0 | 66 | 0.2538 | 0.9192 | 0.8675 | 0.7930 | 0.9574 | 0.8877 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
koloni/blockassist-bc-deadly_graceful_stingray_1755664597
|
koloni
| 2025-08-20T05:03:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T05:02:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Felldude/SDXL_16_4_FLUX_VAE_Adapter
|
Felldude
| 2025-08-20T04:58:36Z | 0 | 0 | null |
[
"safetensors",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-20T01:18:06Z |
---
license: bigscience-openrail-m
---
|
sound-of-silence/sos-drp388-2008
|
sound-of-silence
| 2025-08-20T04:55:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T04:51:50Z |
If you read this, your mother will sleep with me tonight
So if you dont want to be my step son, just go fking away
Good bye and don't comeback
|
aiface/roberta-large-mnli_nli
|
aiface
| 2025-08-20T04:54:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large-mnli",
"base_model:finetune:FacebookAI/roberta-large-mnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T07:23:42Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large-mnli
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-mnli_nli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-mnli_nli
This model is a fine-tuned version of [FacebookAI/roberta-large-mnli](https://huggingface.co/FacebookAI/roberta-large-mnli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9476
- Accuracy: 0.6009
- Precision Macro: 0.6028
- Recall Macro: 0.6009
- F1 Macro: 0.6014
- F1 Weighted: 0.6012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 1.0485 | 1.0 | 143 | 0.9848 | 0.5162 | 0.5491 | 0.5191 | 0.4775 | 0.4758 |
| 0.9114 | 2.0 | 286 | 0.9839 | 0.5264 | 0.5642 | 0.5266 | 0.5150 | 0.5148 |
| 0.8746 | 3.0 | 429 | 0.9618 | 0.5517 | 0.5743 | 0.5522 | 0.5453 | 0.5451 |
| 0.7909 | 4.0 | 572 | 0.9498 | 0.5805 | 0.5859 | 0.5813 | 0.5766 | 0.5762 |
| 0.7105 | 5.0 | 715 | 0.9324 | 0.5956 | 0.6000 | 0.5960 | 0.5939 | 0.5936 |
| 0.6205 | 6.0 | 858 | 0.9797 | 0.5933 | 0.5958 | 0.5934 | 0.5927 | 0.5925 |
| 0.5113 | 7.0 | 1001 | 1.1925 | 0.5889 | 0.5918 | 0.5896 | 0.5857 | 0.5853 |
| 0.4181 | 8.0 | 1144 | 1.2665 | 0.5916 | 0.5922 | 0.5918 | 0.5918 | 0.5916 |
| 0.3218 | 9.0 | 1287 | 1.4587 | 0.5849 | 0.5866 | 0.5848 | 0.5849 | 0.5849 |
| 0.2543 | 10.0 | 1430 | 1.5554 | 0.5902 | 0.5910 | 0.5908 | 0.5892 | 0.5889 |
| 0.1851 | 11.0 | 1573 | 1.8125 | 0.5787 | 0.5829 | 0.5782 | 0.5786 | 0.5787 |
| 0.1316 | 12.0 | 1716 | 2.0182 | 0.5827 | 0.5837 | 0.5826 | 0.5826 | 0.5825 |
| 0.0884 | 13.0 | 1859 | 2.1233 | 0.5809 | 0.5823 | 0.5810 | 0.5812 | 0.5811 |
| 0.0708 | 14.0 | 2002 | 2.2924 | 0.5938 | 0.5936 | 0.5943 | 0.5935 | 0.5931 |
| 0.0527 | 15.0 | 2145 | 2.4595 | 0.5916 | 0.5923 | 0.5919 | 0.5918 | 0.5916 |
| 0.0334 | 16.0 | 2288 | 2.6315 | 0.5991 | 0.6009 | 0.5991 | 0.5996 | 0.5995 |
| 0.0186 | 17.0 | 2431 | 2.8367 | 0.5947 | 0.5979 | 0.5946 | 0.5953 | 0.5952 |
| 0.0179 | 18.0 | 2574 | 2.9197 | 0.6004 | 0.6032 | 0.6004 | 0.6010 | 0.6009 |
| 0.0113 | 19.0 | 2717 | 2.9423 | 0.5982 | 0.6003 | 0.5982 | 0.5987 | 0.5986 |
| 0.0134 | 20.0 | 2860 | 2.9476 | 0.6009 | 0.6028 | 0.6009 | 0.6014 | 0.6012 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755663846
|
vwzyrraz7l
| 2025-08-20T04:53:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:52:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755664006
|
helmutsukocok
| 2025-08-20T04:52:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:52:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mynzuh/my_awesome_food_model
|
mynzuh
| 2025-08-20T04:51:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-20T04:50:41Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9640
- Accuracy: 0.816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.999 | 1.0 | 63 | 2.9640 | 0.816 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755665387
|
pobiiiiiii
| 2025-08-20T04:50:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous yapping ferret",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:50:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous yapping ferret
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755663822
|
unitova
| 2025-08-20T04:49:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:49:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goodragon/qwen3-1.7b-base-MED
|
goodragon
| 2025-08-20T04:49:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T04:48:07Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
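No official snippet is provided yet; a minimal sketch using the standard `transformers` pipeline (the prompt and generation settings here are illustrative assumptions):
```python
from transformers import pipeline

# load this checkpoint as a chat model
pipe = pipeline("text-generation", model="goodragon/qwen3-1.7b-base-MED", device_map="auto")
messages = [{"role": "user", "content": "List common contraindications of ibuprofen."}]
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])
```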
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sound-of-silence/sos-dns1604-2008
|
sound-of-silence
| 2025-08-20T04:48:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T04:43:13Z |
If you read this, your mother will sleep with me tonight
So if you dont want to be my step son, just go fking away
Good bye and don't comeback
|
sehun96/qwen3-1.7b-base-MED
|
sehun96
| 2025-08-20T04:46:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T04:45:40Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1755665108
|
roeker
| 2025-08-20T04:46:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:45:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755663641
|
chainway9
| 2025-08-20T04:46:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:46:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kenil-patel-183/mnist-cnn-digit-classifier
|
kenil-patel-183
| 2025-08-20T04:45:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"kenil_mnist_cnn",
"image-classification",
"CNN",
"MNIST",
"DL",
"Custom_Model",
"pytorch",
"computer-vision",
"digit-classification",
"custom_code",
"en",
"dataset:ylecun/mnist",
"license:unknown",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2025-08-20T04:07:24Z |
---
license: unknown
datasets:
- ylecun/mnist
language:
- en
pipeline_tag: image-classification
library_name: transformers
tags:
- CNN
- MNIST
- DL
- Custom_Model
- pytorch
- computer-vision
- digit-classification
---
# MNIST CNN Digit Classifier
This is a Convolutional Neural Network (CNN) model trained on the MNIST dataset for handwritten digit classification.
## Model Description
This model classifies handwritten digits (0-9) from 28x28 grayscale images using a custom CNN architecture with batch normalization.
**Architecture Details:**
- **Input**: 28x28 grayscale images (1 channel)
- **Output**: 10 classes (digits 0-9)
- **Layers**: 4 Convolutional layers with BatchNorm and ReLU activation
- **Pooling**: MaxPool2d after first conv layer
- **Final Layer**: Linear layer (3136 → 10)
- **Parameters**: ~50K trainable parameters
## Usage
**Security Note:** Requires `trust_remote_code=True` because it uses custom model/processor classes.
### Using transformers pipeline
```python
from transformers import pipeline
clf = pipeline(
"image-classification",
model="kenil-patel-183/mnist-cnn-digit-classifier",
trust_remote_code=True, # required due to custom classes
)
preds = clf("path/to/digit.png", top_k=1)
print(preds) # [{'label': '7', 'score': 0.998...}]
```
### Using manual loading
```python
from transformers import AutoConfig, AutoModel, AutoImageProcessor
from PIL import Image
import torch
model_id = "kenil-patel-183/mnist-cnn-digit-classifier"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)
image = Image.open("digit.png")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
pred = logits.argmax(-1).item()
print(pred)
```
## Model Architecture
```
MNISTCNN(
(flatten): Flatten(start_dim=1, end_dim=-1)
(lin): Linear(in_features=3136, out_features=10, bias=True)
(network): Sequential(
(0): Conv2d(1, 8, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(8, eps=1e-05, momentum=0.1)
(2): ReLU()
(3): MaxPool2d(kernel_size=(2, 2), stride=2)
(4): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(16, eps=1e-05, momentum=0.1)
(6): ReLU()
(7): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1))
(8): BatchNorm2d(32, eps=1e-05, momentum=0.1)
(9): ReLU()
(10): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))
(11): BatchNorm2d(64, eps=1e-05, momentum=0.1)
(12): ReLU()
)
)
```
## Training Data
- **Dataset**: MNIST Handwritten Digits
- **Training samples**: 60,000
- **Test samples**: 10,000
- **Image size**: 28x28 grayscale
- **Classes**: 10 (digits 0-9)
## Image Preprocessing Requirements
For best results, input images should be preprocessed as follows:
1. **Convert to grayscale** if not already
2. **Resize to 28x28 pixels**
3. **Convert to tensor** (values between 0 and 1)
4. **Normalize** with mean=0.1307, std=0.3081
```python
from torchvision import transforms

transform = transforms.Compose([
transforms.Grayscale(),
transforms.Resize((28, 28)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
## Performance
Achieves 99.25% accuracy on the MNIST test set.
## Limitations
- **Input format**: Only works with 28x28 grayscale images
- **Domain**: Optimized for handwritten digits, may not work well on printed text
- **Background**: Works best with dark digits on light background
- **Noise**: Performance may degrade with noisy or heavily distorted images
|
mesolitica/whisper-38TPS-large-v3-turbo
|
mesolitica
| 2025-08-20T04:45:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"custom_code",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-18T10:54:10Z |
---
library_name: transformers
tags: []
---
# whisper-38TPS-large-v3-turbo
This model adds an interpolation layer (scale factor 1 / 1.3, linear mode) after the encoder to bring the representation down to 38 tokens per second (TPS). It will later be used to introduce VQ for the projection layer.
WandB run at https://wandb.ai/huseinzol05/whisperconv-37tps
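The description corresponds to a standard `torch.nn.functional.interpolate` call; below is a minimal sketch, not the repository's actual code (shapes assume large-v3-turbo's 1280-dim encoder emitting 50 frames/s, i.e. 1500 frames per 30 s window):
```python
import torch
import torch.nn.functional as F

# encoder hidden states, assumed shape (batch, time, hidden)
x = torch.randn(1, 1500, 1280)
x = F.interpolate(
    x.transpose(1, 2),            # interpolate expects (batch, channels, length)
    scale_factor=1 / 1.3,
    mode="linear",
    align_corners=False,
).transpose(1, 2)
print(x.shape)                    # torch.Size([1, 1153, 1280]) -> ~38 TPS
```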
## Training dataset
1. [malaysia-ai/common_voice_17_0](https://huggingface.co/datasets/malaysia-ai/common_voice_17_0)
2. [mesolitica/Malaysian-STT-Whisper-Stage2/malaysian_multiturn_chat_assistants_segments](https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2/viewer/default/malaysian_multiturn_chat_assistants_segments)
3. [mesolitica/Malaysian-STT-Whisper-Stage2/malaysian_multiturn_chat_assistants_manglish_segments](https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2/viewer/default/malaysian_multiturn_chat_assistants_manglish_segments)
## Evaluation
Evaluated on [malaysia-ai/common_voice_17_0/test](https://huggingface.co/datasets/malaysia-ai/common_voice_17_0/viewer/default/test), covering up to **115 languages**, under the following conditions (a normalization sketch follows the list):
1. Lower case.
2. Remove punctuation.
3. Provide language tagging for the decoder input ids, `<|startoftranscript|><|{lang}|><|transcribe|><|notimestamps|>`.
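A minimal sketch of these conditions, inferred from the description above (the actual evaluation script lives in the linked source repository):
```python
import string

def normalize(text: str) -> str:
    # conditions 1 and 2: lower case, strip punctuation
    text = text.lower()
    return text.translate(str.maketrans("", "", string.punctuation))

# condition 3: language-tagged decoder prompt (language code is illustrative)
lang = "ms"
decoder_prompt = f"<|startoftranscript|><|{lang}|><|transcribe|><|notimestamps|>"
```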
```
lang: gl, samples: 9949, CER: 0.038022646505003505
lang: en, samples: 16379, CER: 0.06152338036752953
lang: ar, samples: 10458, CER: 0.20554948380790689
lang: kab, samples: 14972, CER: 0.27582742742168737
lang: ml, samples: 703, CER: 0.4534987099731459
lang: kk, samples: 514, CER: 0.14656822533502237
lang: ltg, samples: 2904, CER: 0.20097263599391613
lang: fr, samples: 16145, CER: 0.04488389251043107
lang: de, samples: 16170, CER: 0.024508649217920696
lang: fi, samples: 1554, CER: 0.04564682077988523
lang: pt, samples: 9432, CER: 0.03775697459390274
lang: ia, samples: 1816, CER: 0.048942064572492235
lang: eu, samples: 13621, CER: 0.04257874896499848
lang: ro, samples: 3896, CER: 0.04464553583764197
lang: sw, samples: 12086, CER: 0.13462171972111703
lang: sv-SE, samples: 5247, CER: 0.05644495253179642
lang: ta, samples: 8263, CER: 0.12015692184372433
lang: et, samples: 2653, CER: 0.08418725106887591
lang: lg, samples: 11902, CER: 0.16394520477766272
lang: it, samples: 15154, CER: 0.022206968121195512
lang: mhr, samples: 15107, CER: 0.11759627706458757
lang: sr, samples: 1539, CER: 0.12054745929850534
lang: mr, samples: 1437, CER: 0.17201526189909722
lang: ka, samples: 12608, CER: 0.09759112968055164
lang: es, samples: 15848, CER: 0.02079860813120504
lang: be, samples: 15878, CER: 0.028204188639431513
lang: lt, samples: 4753, CER: 0.08361403994497943
lang: ca, samples: 16389, CER: 0.034603051793827375
lang: eo, samples: 14773, CER: 0.038797289403201284
lang: tr, samples: 11235, CER: 0.06036704523833737
lang: hu, samples: 11435, CER: 0.03949698885801047
lang: ja, samples: 6033, CER: 0.4220936026828759
lang: br, samples: 2202, CER: 0.35878086034863677
lang: ne-NP, samples: 217, CER: 0.3291459262210471
lang: uz, samples: 12006, CER: 0.12374728709149391
lang: ru, samples: 10184, CER: 0.02797243735802649
lang: dv, samples: 2213, CER: 0.23492100705076932
lang: tt, samples: 4953, CER: 0.13729422476882677
lang: rw, samples: 14797, CER: 0.18145367587835692
lang: bn, samples: 9327, CER: 0.18277559280921965
lang: ug, samples: 6108, CER: 0.13144227833835373
lang: rm-sursilv, samples: 1361, CER: 0.18689765164456176
lang: bg, samples: 3201, CER: 0.055955241908113074
lang: ab, samples: 9108, CER: 0.19054594912915496
lang: uk, samples: 9915, CER: 0.051784101043250555
lang: mt, samples: 1662, CER: 0.21771389762160198
lang: fa, samples: 10292, CER: 0.16831622647092573
lang: pl, samples: 9186, CER: 0.04033527459592553
lang: bas, samples: 541, CER: 0.35723102972073434
lang: nl, samples: 11255, CER: 0.022585953833447428
lang: zh-CN, samples: 10335, CER: 0.2931312734758128
lang: tok, samples: 2175, CER: 0.03662570548031443
lang: ur, samples: 4052, CER: 0.13198011579433647
lang: sk, samples: 2593, CER: 0.11906174726636401
lang: oc, samples: 254, CER: 0.24591277076643198
lang: yue, samples: 2585, CER: 0.2164728975826135
lang: mrj, samples: 7102, CER: 0.16832338715131967
lang: fy-NL, samples: 3167, CER: 0.15728785235456794
lang: cs, samples: 9055, CER: 0.036099521557020384
lang: th, samples: 10982, CER: 0.2047811972945032
lang: ckb, samples: 5262, CER: 0.18515629283718374
lang: mn, samples: 1896, CER: 0.3506058387282826
lang: ky, samples: 1604, CER: 0.16262879996086715
lang: skr, samples: 1006, CER: 0.36649834414968757
lang: hy-AM, samples: 4281, CER: 0.1225414613097752
lang: sl, samples: 1242, CER: 0.0834131147698269
lang: vi, samples: 1077, CER: 0.08876174396167676
lang: hi, samples: 3151, CER: 0.11898319714865897
lang: nan-tw, samples: 2317, CER: 0.5474943411562636
lang: id, samples: 3633, CER: 0.03180116282736414
lang: cy, samples: 5371, CER: 0.17257875329649836
lang: yo, samples: 999, CER: 0.455950415432927
lang: sah, samples: 1455, CER: 0.18888490602403937
lang: mk, samples: 1097, CER: 0.09206708244914664
lang: cv, samples: 1288, CER: 0.235723839280149
lang: myv, samples: 479, CER: 0.1592703126884194
lang: da, samples: 2405, CER: 0.06542541215856146
lang: lv, samples: 6738, CER: 0.08540597002397939
lang: kmr, samples: 3900, CER: 0.19240419880492615
lang: tk, samples: 545, CER: 0.33610008208878533
lang: nn-NO, samples: 370, CER: 0.13261241419957523
lang: ha, samples: 661, CER: 0.2573926198205386
lang: he, samples: 260, CER: 0.4051793430769439
lang: dyu, samples: 59, CER: 0.304191650031961
lang: gn, samples: 855, CER: 0.33838399989471013
lang: lij, samples: 694, CER: 0.2589637626026028
lang: hsb, samples: 444, CER: 0.19484668772406566
lang: pa-IN, samples: 487, CER: 0.26281109809350234
lang: el, samples: 1696, CER: 0.0802963573687271
lang: zgh, samples: 159, CER: 1.0
lang: as, samples: 551, CER: 0.3544747289612597
lang: sq, samples: 472, CER: 0.205909598829979
lang: ko, samples: 338, CER: 0.1756772082099313
lang: ga-IE, samples: 517, CER: 0.49812764585095354
lang: cnh, samples: 763, CER: 0.3273461347554693
lang: sat, samples: 147, CER: 0.44685714375234686
lang: rm-vallader, samples: 462, CER: 0.175400127063989
lang: or, samples: 670, CER: 1.0
lang: mdf, samples: 104, CER: 0.26337437776761086
lang: af, samples: 62, CER: 0.16694385500004474
lang: ig, samples: 4, CER: 0.49499782040104623
lang: sc, samples: 232, CER: 0.27876921441533403
lang: tig, samples: 169, CER: 0.7535851634053247
lang: te, samples: 49, CER: 0.43039390871972943
lang: ps, samples: 199, CER: 0.30951384676098237
lang: am, samples: 205, CER: 0.8482531487830595
lang: ast, samples: 162, CER: 0.12322345297299651
lang: os, samples: 50, CER: 0.7190250069381621
lang: lo, samples: 33, CER: 1.0
lang: az, samples: 33, CER: 0.11273205088291703
lang: ti, samples: 4, CER: 1.0
lang: vot, samples: 6, CER: 0.2898256634669678
lang: nhi, samples: 5, CER: 0.37620444072056974
lang: yi, samples: 6, CER: 1.0
lang: tw, samples: 9, CER: 0.46826636272155564
average CER: 0.2364527160297919
```
## Source code
Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/whisper-conv-38tps
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755663364
|
manusiaperahu2012
| 2025-08-20T04:44:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:44:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kartikgupta373/xuv700-grace
|
kartikgupta373
| 2025-08-20T04:43:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T04:43:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: XUV woman
---
# Xuv700 Grace
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `XUV woman` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "XUV woman",
"lora_weights": "https://huggingface.co/kartikgupta373/xuv700-grace/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/xuv700-grace', weight_name='lora.safetensors')
image = pipeline('XUV woman').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 40
## Contribute your own examples
You can use the [community tab](https://huggingface.co/kartikgupta373/xuv700-grace/discussions) to add images that show off what you've made with this LoRA.
|
PersonalAILab/AFM-CodeAgent-7B-rl
|
PersonalAILab
| 2025-08-20T04:42:54Z | 11 | 2 | null |
[
"safetensors",
"qwen2",
"arxiv:2508.13167",
"region:us"
] | null | 2025-08-05T14:37:41Z |
# Model Introduction
We introduce Agent Foundation Models (AFMs), a new model family built on Qwen2.5 that natively performs end-to-end, multi-turn, multi-tool problem solving without external frameworks or manual prompting. Built on the Chain-of-Agents (CoA) paradigm, each AFM dynamically activates specialized tool and role-playing agents inside a single forward pass, emulating the cooperative reasoning of a full multi-agent system. To train these models, we distilled high-performing multi-agent trajectories into agentic supervised fine-tuning data and further optimized performance with agentic reinforcement learning on verifiable tasks. AFMs set new state-of-the-art results on benchmarks for both web and code agents, and we release all model weights, training code, and datasets to accelerate future research on agentic AI.
For more details, please refer to our [project page](https://chain-of-agents-afm.github.io/), [paper](https://arxiv.org/abs/2508.13167), and [GitHub](https://github.com/OPPO-PersonalAI/Agent_Foundation_Models).
# Model Downloads
| Model | Download | Backbone Model | License|
| --------------------- | ------ | --------------------------- |--------------------------- |
| AFM-CodeAgent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-7B-sft) |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-7B-rl) |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-32B-sft) |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-32B-rl) |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-MHQA-Agent-3B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-3B-sft) |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Qwen RESEARCH LICENSE AGREEMENT|
| AFM-MHQA-Agent-3B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-3B-rl) |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Qwen RESEARCH LICENSE AGREEMENT|
| AFM-MHQA-Agent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-7B-sft) |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-7B-rl) |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-WebAgent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-sft) |[Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-32B-sft) |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-rl) |[Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-32B-rl) |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
# Data Downloads
- [AFM-CodeAgent-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-CodeAgent-SFT-Dataset)
- [AFM-CodeAgent-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-CodeAgent-RL-Dataset)
- [AFM-WebAgent-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-WebAgent-SFT-Dataset)
- [AFM-WebAgent-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-WebAgent-RL-Dataset)
- [AFM-MHQA-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-MHQA-Agent-SFT-Dataset)
- [AFM-MHQA-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-MHQA-RL-Dataset)
## Citation
If you find `AFM` useful in your research or applications, we would appreciate it if you could cite our work:
```bibtex
@misc{li2025chainofagentsendtoendagentfoundation,
title={Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL},
author={Weizhen Li and Jianbo Lin and Zhuosong Jiang and Jingyi Cao and Xinpeng Liu and Jiayu Zhang and Zhenqiang Huang and Qianben Chen and Weichen Sun and Qiexiang Wang and Hongxuan Lu and Tianrui Qin and Chenghao Zhu and Yi Yao and Shuying Fan and Xiaowan Li and Tiannan Wang and Pai Liu and King Zhu and He Zhu and Dingfeng Shi and Piaohong Wang and Yeyi Guan and Xiangru Tang and Minghao Liu and Yuchen Eleanor Jiang and Jian Yang and Jiaheng Liu and Ge Zhang and Wangchunshu Zhou},
year={2025},
eprint={2508.13167},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.13167},
}
```
|
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755664912
|
pobiiiiiii
| 2025-08-20T04:42:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous yapping ferret",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:42:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous yapping ferret
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
PersonalAILab/AFM-CodeAgent-7B-sft
|
PersonalAILab
| 2025-08-20T04:42:08Z | 8 | 2 | null |
[
"safetensors",
"qwen2",
"arxiv:2508.13167",
"region:us"
] | null | 2025-08-05T12:32:38Z |
# Model Introduction
We introduce Agent Foundation Models (AFMs), a new family of Qwen2.5-based models that natively perform end-to-end, multi-turn, multi-tool problem solving without external frameworks or manual prompting. Built on the Chain-of-Agents (CoA) paradigm, each AFM dynamically activates specialized tool and role-playing agents inside a single forward pass, emulating the cooperative reasoning of a full multi-agent system. To train these models, we distilled high-performing multi-agent trajectories into agentic supervised fine-tuning data and further optimized performance with agentic reinforcement learning on verifiable tasks. AFMs set new state-of-the-art results on benchmarks for both web and code agents, and we release all model weights, training code, and datasets to accelerate future research on agentic AI.
For more details, please refer to our [project page](https://chain-of-agents-afm.github.io/), [paper](https://arxiv.org/abs/2508.13167), and [GitHub](https://github.com/OPPO-PersonalAI/Agent_Foundation_Models).
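# Quick Start
A minimal inference sketch, not taken from the authors' materials: it assumes the checkpoint loads as a standard Qwen2-architecture chat model via `transformers`, and that full Chain-of-Agents tool use requires the scaffolding in the GitHub repo above.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: plain chat inference only; agentic tool calling needs the AFM framework code.
model_id = "PersonalAILab/AFM-CodeAgent-7B-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Implement binary search over a sorted list in Python."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```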
# Model Downloads
| Model | Download | Backbone Model | License|
| --------------------- | ------ | --------------------------- |--------------------------- |
| AFM-CodeAgent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-7B-sft) |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-7B-rl) |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-32B-sft) |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-32B-rl) |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-MHQA-Agent-3B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-3B-sft) |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Qwen RESEARCH LICENSE AGREEMENT|
| AFM-MHQA-Agent-3B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-3B-rl) |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Qwen RESEARCH LICENSE AGREEMENT|
| AFM-MHQA-Agent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-7B-sft) |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-7B-rl) |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-WebAgent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-sft) |[Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-32B-sft) |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-rl) |[Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-32B-rl) |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
# Data Downloads
- [AFM-CodeAgent-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-CodeAgent-SFT-Dataset)
- [AFM-CodeAgent-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-CodeAgent-RL-Dataset)
- [AFM-WebAgent-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-WebAgent-SFT-Dataset)
- [AFM-WebAgent-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-WebAgent-RL-Dataset)
- [AFM-MHQA-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-MHQA-Agent-SFT-Dataset)
- [AFM-MHQA-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-MHQA-RL-Dataset)
## Citation
If you find `AFM` useful in your research or applications, we would appreciate it if you could cite our work:
```bibtex
@misc{li2025chainofagentsendtoendagentfoundation,
title={Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL},
author={Weizhen Li and Jianbo Lin and Zhuosong Jiang and Jingyi Cao and Xinpeng Liu and Jiayu Zhang and Zhenqiang Huang and Qianben Chen and Weichen Sun and Qiexiang Wang and Hongxuan Lu and Tianrui Qin and Chenghao Zhu and Yi Yao and Shuying Fan and Xiaowan Li and Tiannan Wang and Pai Liu and King Zhu and He Zhu and Dingfeng Shi and Piaohong Wang and Yeyi Guan and Xiangru Tang and Minghao Liu and Yuchen Eleanor Jiang and Jian Yang and Jiaheng Liu and Ge Zhang and Wangchunshu Zhou},
year={2025},
eprint={2508.13167},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.13167},
}
```
|
PersonalAILab/AFM-WebAgent-7B-sft
|
PersonalAILab
| 2025-08-20T04:41:19Z | 7 | 1 | null |
[
"safetensors",
"qwen2",
"arxiv:2508.13167",
"region:us"
] | null | 2025-08-06T13:41:32Z |
# Model Introduction
We introduce Agent Foundation Models (AFMs), a new family of Qwen2.5-based models that natively perform end-to-end, multi-turn, multi-tool problem solving without external frameworks or manual prompting. Built on the Chain-of-Agents (CoA) paradigm, each AFM dynamically activates specialized tool and role-playing agents inside a single forward pass, emulating the cooperative reasoning of a full multi-agent system. To train these models, we distilled high-performing multi-agent trajectories into agentic supervised fine-tuning data and further optimized performance with agentic reinforcement learning on verifiable tasks. AFMs set new state-of-the-art results on benchmarks for both web and code agents, and we release all model weights, training code, and datasets to accelerate future research on agentic AI.
For more details, please refer to our [project page](https://chain-of-agents-afm.github.io/), [paper](https://arxiv.org/abs/2508.13167), and [GitHub](https://github.com/OPPO-PersonalAI/Agent_Foundation_Models).
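# Quick Start
A minimal inference sketch, not taken from the authors' materials: it assumes the checkpoint loads as a standard Qwen2-architecture chat model via `transformers`, and that full Chain-of-Agents web browsing requires the scaffolding in the GitHub repo above.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: plain chat inference only; tool-backed web search needs the AFM framework code.
model_id = "PersonalAILab/AFM-WebAgent-7B-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Outline the steps you would take to find the current population of Tokyo."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```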
# Model Downloads
| Model | Download | Backbone Model | License|
| --------------------- | ------ | --------------------------- |--------------------------- |
| AFM-CodeAgent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-7B-sft) |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-7B-rl) |[Qwen-2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-32B-sft) |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-CodeAgent-32B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-CodeAgent-32B-rl) |[Qwen-2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | Apache License 2.0|
| AFM-MHQA-Agent-3B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-3B-sft) |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Qwen RESEARCH LICENSE AGREEMENT|
| AFM-MHQA-Agent-3B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-3B-rl) |[Qwen-2.5-3B-Base](https://huggingface.co/Qwen/Qwen2.5-3B) | Qwen RESEARCH LICENSE AGREEMENT|
| AFM-MHQA-Agent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-7B-sft) |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-MHQA-Agent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-MHQA-Agent-7B-rl) |[Qwen-2.5-7B-Base](https://huggingface.co/Qwen/Qwen2.5-7B) | Apache License 2.0|
| AFM-WebAgent-7B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-sft) |[Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-sft | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-32B-sft) |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
| AFM-WebAgent-7B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-7B-rl) |[Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | Apache License 2.0|
| AFM-WebAgent-32B-rl | [🤗 **HuggingFace**](https://huggingface.co/PersonalAILab/AFM-WebAgent-32B-rl) |[Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | Apache License 2.0|
# Data Downloads
- [AFM-CodeAgent-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-CodeAgent-SFT-Dataset)
- [AFM-CodeAgent-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-CodeAgent-RL-Dataset)
- [AFM-WebAgent-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-WebAgent-SFT-Dataset)
- [AFM-WebAgent-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-WebAgent-RL-Dataset)
- [AFM-MHQA-SFT-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-MHQA-Agent-SFT-Dataset)
- [AFM-MHQA-RL-Dataset](https://huggingface.co/datasets/PersonalAILab/AFM-MHQA-RL-Dataset)
## Citation
If you find `AFM` useful in your research or applications, we would appreciate it if you could cite our work:
```bibtex
@misc{li2025chainofagentsendtoendagentfoundation,
title={Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL},
author={Weizhen Li and Jianbo Lin and Zhuosong Jiang and Jingyi Cao and Xinpeng Liu and Jiayu Zhang and Zhenqiang Huang and Qianben Chen and Weichen Sun and Qiexiang Wang and Hongxuan Lu and Tianrui Qin and Chenghao Zhu and Yi Yao and Shuying Fan and Xiaowan Li and Tiannan Wang and Pai Liu and King Zhu and He Zhu and Dingfeng Shi and Piaohong Wang and Yeyi Guan and Xiangru Tang and Minghao Liu and Yuchen Eleanor Jiang and Jian Yang and Jiaheng Liu and Ge Zhang and Wangchunshu Zhou},
year={2025},
eprint={2508.13167},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.13167},
}
```
|
roeker/blockassist-bc-quick_wiry_owl_1755664706
|
roeker
| 2025-08-20T04:39:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:39:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yuuki367/llama-3-8B-chat-lawyer-4
|
yuuki367
| 2025-08-20T04:38:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T06:10:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755663122
|
ihsanridzi
| 2025-08-20T04:38:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:38:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755663113
|
lisaozill03
| 2025-08-20T04:36:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:36:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo11_4
|
AnonymousCS
| 2025-08-20T04:36:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T04:31:16Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo11_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo11_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3256
- Accuracy: 0.8943
- 1-f1: 0.8191
- 1-recall: 0.7181
- 1-precision: 0.9532
- Balanced Acc: 0.8502
## Model description
More information needed
## Intended uses & limitations
More information needed
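In the meantime, here is a minimal inference sketch (assumptions on our part: the checkpoint is a standard XLM-R sequence-classification model per the repo tags; the example sentence and label semantics are illustrative):
```python
from transformers import pipeline

# Hedged usage sketch: label meanings depend on the undocumented fine-tuning data.
classifier = pipeline(
    "text-classification",
    model="AnonymousCS/xlmr_immigration_combo11_4",
)
print(classifier("The new immigration bill was debated in parliament today."))
```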
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.692 | 1.0 | 22 | 0.6191 | 0.6667 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.4733 | 2.0 | 44 | 0.4907 | 0.6667 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.2362 | 3.0 | 66 | 0.2831 | 0.8869 | 0.8246 | 0.7974 | 0.8538 | 0.8645 |
| 0.2166 | 4.0 | 88 | 0.2690 | 0.8913 | 0.8186 | 0.7357 | 0.9227 | 0.8524 |
| 0.2054 | 5.0 | 110 | 0.2841 | 0.8913 | 0.8311 | 0.8018 | 0.8626 | 0.8689 |
| 0.265 | 6.0 | 132 | 0.3256 | 0.8943 | 0.8191 | 0.7181 | 0.9532 | 0.8502 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Cactus-Compute/Qwen3-1.7B-Instruct-GGUF
|
Cactus-Compute
| 2025-08-20T04:34:57Z | 149 | 1 | null |
[
"gguf",
"dashboard",
"cactus-text-inference",
"cactus-tool-calling",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-17T19:31:00Z |
---
license: mit
tags:
- dashboard
- cactus-text-inference
- cactus-tool-calling
---
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755662865
|
quantumxnode
| 2025-08-20T04:34:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:34:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755663146
|
Sayemahsjn
| 2025-08-20T04:31:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:31:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phospho-app/plungedplummer-gr00t-PickUp2-0f41o
|
phospho-app
| 2025-08-20T04:30:13Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:plungedplummer/PickUp2",
"region:us"
] |
robotics
| 2025-08-20T03:46:47Z |
---
datasets: plungedplummer/PickUp2
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [plungedplummer/PickUp2](https://huggingface.co/datasets/plungedplummer/PickUp2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 107
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755662621
|
sampingkaca72
| 2025-08-20T04:29:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:29:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uelundgren/record-test_7
|
uelundgren
| 2025-08-20T04:27:31Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:uelundgren/record-test_7",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-20T04:27:21Z |
---
datasets: uelundgren/record-test_7
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
dgambettaphd/M_mis_run2_gen10_WXS_doc1000_synt64_lr1e-04_acm_LANG
|
dgambettaphd
| 2025-08-20T04:27:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T04:26:48Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BKM1804/ebd3853f-27cd-4553-82ef-b572dab7b9c1
|
BKM1804
| 2025-08-20T04:27:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/SmolLM2-135M",
"base_model:finetune:unsloth/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T13:20:00Z |
---
base_model: unsloth/SmolLM2-135M
library_name: transformers
model_name: ebd3853f-27cd-4553-82ef-b572dab7b9c1
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for ebd3853f-27cd-4553-82ef-b572dab7b9c1
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="BKM1804/ebd3853f-27cd-4553-82ef-b572dab7b9c1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/buikhacminh1804/sn56-sft-before-dpo-train/runs/tyf3o6gd)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755663758
|
0xaoyama
| 2025-08-20T04:23:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:22:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
owlthink/blockassist-bc-alert_downy_termite_1755663613
|
owlthink
| 2025-08-20T04:21:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert downy termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:21:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert downy termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nitish035/merged16-sft_qwen32-2
|
Nitish035
| 2025-08-20T04:21:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T04:13:00Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
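A minimal inference sketch (an assumption, not author-provided: the merged checkpoint loads as a standard Qwen3 chat model, and the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch for this merged Unsloth fine-tune.
model_id = "Nitish035/merged16-sft_qwen32-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain overfitting in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```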
|
AbhishikthMuchipalli/MyGemmaNPC
|
AbhishikthMuchipalli
| 2025-08-20T04:20:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:31:18Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AbhishikthMuchipalli/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755662119
|
helmutsukocok
| 2025-08-20T04:20:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:20:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joonsu0109/vpocc-semantickitti
|
joonsu0109
| 2025-08-20T04:20:16Z | 0 | 0 | null |
[
"image-to-3d",
"arxiv:2408.03551",
"license:mit",
"region:us"
] |
image-to-3d
| 2025-08-19T06:33:25Z |
---
license: mit
pipeline_tag: image-to-3d
---
## Citation
```bibtex
@article{kim2024vpocc,
title={VPOcc: Exploiting Vanishing Point for Monocular 3D Semantic Occupancy Prediction},
author={Kim, Junsu and Lee, Junhee and Shin, Ukcheol and Oh, Jean and Joo, Kyungdon},
journal={arXiv preprint arXiv:2408.03551},
year={2024}
}
```
|
ankitkushwaha90/Attention_is_all_you_need
|
ankitkushwaha90
| 2025-08-20T04:19:52Z | 0 | 0 |
fastai
|
[
"fastai",
"art",
"token-classification",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"arxiv:1706.03762",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:mit",
"region:us"
] |
token-classification
| 2025-08-16T05:54:34Z |
---
license: mit
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
base_model:
- openai/gpt-oss-120b
new_version: openai/gpt-oss-120b
pipeline_tag: token-classification
library_name: fastai
tags:
- art
---
## It looks like you're referring to the seminal paper "Attention Is All You Need" by Vaswani et al. (2017), which introduced the Transformer architecture. If you're asking for an image or diagram of the Transformer model from the paper, here's a detailed breakdown along with the key components:
## Transformer Model Architecture (Diagram Description)
The original paper includes a diagram of the Transformer's structure. Below is a textual breakdown of its key components:
- Encoder-Decoder Structure:
- Encoder (Left side): Processes the input sequence.
- Decoder (Right side): Generates the output sequence autoregressively.
## Key Components:
- Input Embedding + Positional Encoding: Converts tokens to vectors and adds positional information.
- Multi-Head Attention: Multiple parallel self-attention mechanisms.
- Feed-Forward Networks: Applied after attention layers.
- Add & Norm (Layer Normalization): Residual connections followed by normalization.
- Masked Multi-Head Attention (Decoder): Prevents the decoder from "peeking" at future tokens.
## Visual Representation
Since I can't directly display images, here's how you can find the full diagram:
- Original Paper: arXiv:1706.03762 (Figure 1).
- Google Images: Search "Transformer model Attention is All You Need diagram".
## Key Concepts Illustrated:
- Self-Attention Mechanism: The core innovation that weighs the importance of different words in a sequence.
- Parallelization: Unlike RNNs, Transformers process all positions simultaneously.
- Would you like a more detailed explanation of any specific part (e.g., attention heads, positional encoding)? Let me know!
```bash
http://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
```
In this paper, Figure 1 ("The Transformer - model architecture") is the diagram that the complete code below implements.
Here's a complete PyTorch implementation of the Transformer model architecture as described in "Attention Is All You Need" (Figure 1 in the NeurIPS 2017 paper). This includes:
- Multi-Head Attention
- Positional Encoding
- Encoder Layer
- Decoder Layer
- Full Transformer
## Complete Code Implementation
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import math
class MultiHeadAttention(nn.Module):
def __init__(self, d_model, num_heads):
super().__init__()
self.d_model = d_model
self.num_heads = num_heads
self.head_dim = d_model // num_heads
assert self.head_dim * num_heads == d_model, "d_model must be divisible by num_heads"
self.W_q = nn.Linear(d_model, d_model)
self.W_k = nn.Linear(d_model, d_model)
self.W_v = nn.Linear(d_model, d_model)
self.W_o = nn.Linear(d_model, d_model)
def scaled_dot_product_attention(self, Q, K, V, mask=None):
attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.head_dim)
if mask is not None:
attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
attn_probs = F.softmax(attn_scores, dim=-1)
output = torch.matmul(attn_probs, V)
return output
def split_heads(self, x):
batch_size, seq_length, d_model = x.size()
return x.view(batch_size, seq_length, self.num_heads, self.head_dim).transpose(1, 2)
def combine_heads(self, x):
batch_size, _, seq_length, head_dim = x.size()
return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)
def forward(self, Q, K, V, mask=None):
Q = self.split_heads(self.W_q(Q))
K = self.split_heads(self.W_k(K))
V = self.split_heads(self.W_v(V))
attn_output = self.scaled_dot_product_attention(Q, K, V, mask)
output = self.W_o(self.combine_heads(attn_output))
return output
class PositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_length):
super().__init__()
pe = torch.zeros(max_seq_length, d_model)
position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
self.register_buffer('pe', pe.unsqueeze(0))
def forward(self, x):
return x + self.pe[:, :x.size(1)]
class PositionWiseFeedForward(nn.Module):
def __init__(self, d_model, d_ff):
super().__init__()
self.fc1 = nn.Linear(d_model, d_ff)
self.fc2 = nn.Linear(d_ff, d_model)
self.dropout = nn.Dropout(0.1)
def forward(self, x):
return self.fc2(self.dropout(F.relu(self.fc1(x))))
class EncoderLayer(nn.Module):
def __init__(self, d_model, num_heads, d_ff):
super().__init__()
self.self_attn = MultiHeadAttention(d_model, num_heads)
self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(0.1)
def forward(self, x, mask=None):
attn_output = self.self_attn(x, x, x, mask)
x = self.norm1(x + self.dropout(attn_output))
ff_output = self.feed_forward(x)
x = self.norm2(x + self.dropout(ff_output))
return x
class DecoderLayer(nn.Module):
def __init__(self, d_model, num_heads, d_ff):
super().__init__()
self.self_attn = MultiHeadAttention(d_model, num_heads)
self.cross_attn = MultiHeadAttention(d_model, num_heads)
self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.norm3 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(0.1)
def forward(self, x, enc_output, src_mask, tgt_mask):
attn_output = self.self_attn(x, x, x, tgt_mask)
x = self.norm1(x + self.dropout(attn_output))
attn_output = self.cross_attn(x, enc_output, enc_output, src_mask)
x = self.norm2(x + self.dropout(attn_output))
ff_output = self.feed_forward(x)
x = self.norm3(x + self.dropout(ff_output))
return x
class Transformer(nn.Module):
def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length):
super().__init__()
self.encoder_embedding = nn.Embedding(src_vocab_size, d_model)
self.decoder_embedding = nn.Embedding(tgt_vocab_size, d_model)
self.positional_encoding = PositionalEncoding(d_model, max_seq_length)
self.encoder_layers = nn.ModuleList([EncoderLayer(d_model, num_heads, d_ff) for _ in range(num_layers)])
self.decoder_layers = nn.ModuleList([DecoderLayer(d_model, num_heads, d_ff) for _ in range(num_layers)])
self.fc = nn.Linear(d_model, tgt_vocab_size)
self.dropout = nn.Dropout(0.1)
def generate_mask(self, src, tgt):
src_mask = (src != 0).unsqueeze(1).unsqueeze(2)
tgt_mask = (tgt != 0).unsqueeze(1).unsqueeze(2)
seq_length = tgt.size(1)
nopeak_mask = (1 - torch.triu(torch.ones(1, seq_length, seq_length), diagonal=1)).bool()
tgt_mask = tgt_mask & nopeak_mask
return src_mask, tgt_mask
def forward(self, src, tgt):
src_mask, tgt_mask = self.generate_mask(src, tgt)
src_embedded = self.dropout(self.positional_encoding(self.encoder_embedding(src)))
tgt_embedded = self.dropout(self.positional_encoding(self.decoder_embedding(tgt)))
enc_output = src_embedded
for enc_layer in self.encoder_layers:
enc_output = enc_layer(enc_output, src_mask)
dec_output = tgt_embedded
for dec_layer in self.decoder_layers:
dec_output = dec_layer(dec_output, enc_output, src_mask, tgt_mask)
output = self.fc(dec_output)
return output
# Example usage
if __name__ == "__main__":
src_vocab_size = 5000
tgt_vocab_size = 5000
d_model = 512
num_heads = 8
num_layers = 6
d_ff = 2048
max_seq_length = 100
transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length)
src = torch.randint(0, src_vocab_size, (1, max_seq_length))
tgt = torch.randint(0, tgt_vocab_size, (1, max_seq_length))
output = transformer(src, tgt)
print(output.shape) # torch.Size([1, 100, 5000])
```
## Key Features of the Implementation:
- Multi-Head Attention: Implements scaled dot-product attention with masking.
- Positional Encoding: Adds sinusoidal positional embeddings.
- Encoder-Decoder Layers: Follows the original paper's structure with residual connections and layer normalization.
- Masking: Handles both padding masks (for variable-length sequences) and look-ahead masks (for decoder autoregression).
## How to Use:
- Initialize the model with vocabulary sizes, dimensions, and hyperparameters.
- Pass source (src) and target (tgt) sequences (as token indices).
- The model outputs logits for the next-token predictions.
- For training, you would typically use:
  - Loss Function: `nn.CrossEntropyLoss(ignore_index=0)` (ignoring padding tokens).
  - Optimizer: Adam with learning rate scheduling (as in the paper). A minimal training-step sketch follows below.
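Continuing the toy setup from the example above (batch size and learning rate here are illustrative, not the paper's):
```python
# Hedged training-loop sketch: random toy batches; real training needs a dataset,
# label smoothing, and the paper's warmup learning-rate schedule.
criterion = nn.CrossEntropyLoss(ignore_index=0)
optimizer = torch.optim.Adam(transformer.parameters(), lr=1e-4, betas=(0.9, 0.98), eps=1e-9)

transformer.train()
for step in range(100):
    src = torch.randint(1, src_vocab_size, (8, max_seq_length))  # 0 is reserved for padding
    tgt = torch.randint(1, tgt_vocab_size, (8, max_seq_length))
    logits = transformer(src, tgt[:, :-1])  # teacher forcing: predict the next token
    loss = criterion(logits.reshape(-1, tgt_vocab_size), tgt[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```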
Let me know if you'd like to extend this further (e.g., beam search, label smoothing, or pretrained weights)!
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755663550
|
0xaoyama
| 2025-08-20T04:19:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:19:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_identical_split_0
|
ChenWu98
| 2025-08-20T04:19:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T04:17:24Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_identical_split_0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_sft_identical_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_identical_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/jyx80tvv)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vish16/Reinforce-Pixelcopter-v1
|
Vish16
| 2025-08-20T04:17:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-10T04:30:12Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.1
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
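For reference, the unit teaches REINFORCE, a Monte-Carlo policy-gradient method; here is a minimal sketch of the update (the network shape and hyperparameters are illustrative assumptions, not this checkpoint's exact training code):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    # Pixelcopter-PLE-v0 is assumed to expose a 7-dim state and 2 discrete actions.
    def __init__(self, s_size=7, a_size=2, h_size=64):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=1)

def reinforce_update(policy, optimizer, log_probs, rewards, gamma=0.99):
    """One policy-gradient step from a single episode's log-probs and rewards."""
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    loss = sum(-lp * g_t for lp, g_t in zip(log_probs, torch.tensor(returns)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```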
|
Cactus-Compute/Qwen3-4B-Instruct-GGUF
|
Cactus-Compute
| 2025-08-20T04:16:56Z | 153 | 0 | null |
[
"gguf",
"dashboard",
"cactus-text-inference",
"cactus-vision",
"cactus-tool-calling",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-17T19:38:12Z |
---
license: mit
tags:
- dashboard
- cactus-text-inference
- cactus-vision
- cactus-tool-calling
---
|
Cactus-Compute/Gemma3-1B-Instruct-GGUF
|
Cactus-Compute
| 2025-08-20T04:16:37Z | 1,032 | 0 | null |
[
"gguf",
"dashboard",
"cactus-text-inference",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-17T19:39:38Z |
---
license: mit
tags:
- dashboard
- cactus-text-inference
---
|
Cactus-Compute/Gemma3-270m-Instruct-GGUF
|
Cactus-Compute
| 2025-08-20T04:16:11Z | 258 | 3 | null |
[
"gguf",
"dashboard",
"cactus-text-inference",
"cactus-high-performance",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-14T16:52:50Z |
---
license: gemma
tags:
- dashboard
- cactus-text-inference
- cactus-high-performance
description: "Some text here"
---
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755661748
|
indoempatnol
| 2025-08-20T04:15:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:15:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755662928
|
hobson123
| 2025-08-20T04:14:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:14:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ntkhoi/Qwen3-4B-Medical-CPT-0820
|
ntkhoi
| 2025-08-20T04:13:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T04:12:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1755663076
|
roeker
| 2025-08-20T04:12:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:12:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stanpony/gptnano_5M_lexinvariant_full_20250819_222622
|
stanpony
| 2025-08-20T04:12:08Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-generation",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-20T04:12:02Z |
---
license: mit
pipeline_tag: text-generation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
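In general, a model saved through this mixin is reloaded with `from_pretrained` on the defining class. A minimal sketch follows; the class below is hypothetical, and loading only succeeds if it matches the architecture and init signature used at training time:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class GPTNano(nn.Module, PyTorchModelHubMixin):  # hypothetical architecture
    def __init__(self, vocab_size=256, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        return self.lm_head(self.embed(x))

# Restores weights (and saved init kwargs) from the Hub repo.
model = GPTNano.from_pretrained("stanpony/gptnano_5M_lexinvariant_full_20250819_222622")
```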
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755663081
|
0xaoyama
| 2025-08-20T04:11:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:11:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JasperLi77/CX-Mind
|
JasperLi77
| 2025-08-20T04:10:47Z | 0 | 1 | null |
[
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T04:03:39Z |
---
license: apache-2.0
---
|
raza9272/videomae-base-finetuned-ucf101-subset
|
raza9272
| 2025-08-20T04:09:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-08-20T03:44:04Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1951
- Accuracy: 0.9143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
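Expressed as `transformers.TrainingArguments`, the list above corresponds roughly to this sketch (the output directory is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="videomae-base-finetuned-ucf101-subset",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=300,
)
```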
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5581 | 0.25 | 75 | 1.3925 | 0.4714 |
| 0.5554 | 1.25 | 150 | 0.6854 | 0.7286 |
| 0.2043 | 2.25 | 225 | 0.2942 | 0.9 |
| 0.0529 | 3.25 | 300 | 0.1951 | 0.9143 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755661309
|
manusiaperahu2012
| 2025-08-20T04:09:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:09:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jaeunglee/resnet18-cifar10-unlearning
|
jaeunglee
| 2025-08-20T04:09:51Z | 0 | 1 |
pytorch
|
[
"pytorch",
"machine-unlearning",
"unlearning",
"resnet18",
"image-classification",
"arxiv:2508.12730",
"license:mit",
"region:us"
] |
image-classification
| 2024-12-04T06:34:00Z |
---
license: mit
tags:
- machine-unlearning
- unlearning
- resnet18
pipeline_tag: image-classification
library_name: pytorch
---
# Model Card for jaeunglee/resnet18-cifar10-unlearning
This repository contains ResNet18 models retrained on the CIFAR-10 dataset with specific classes excluded during training. Each model is trained to study the impact of class exclusion on model performance and generalization.
**Paper:** [Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods](https://huggingface.co/papers/2508.12730)
**Project Page:** [https://gnueaj.github.io/Machine-Unlearning-Comparator/](https://gnueaj.github.io/Machine-Unlearning-Comparator/)
**GitHub Repository:** [https://github.com/gnueaj/Machine-Unlearning-Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator)
---
## Evaluation
- **Testing Data:** CIFAR-10 test set
- **Metrics:** Top-1 accuracy
### Results
| Model | Excluded Class | CIFAR-10 Accuracy |
|-------------------------------------|----------------|--------------------|
| `resnet18_cifar10_full.pth` | **None** | **95.4%** |
| `resnet18_cifar10_no_airplane.pth` | Airplane | 95.3% |
| `resnet18_cifar10_no_automobile.pth`| Automobile | 95.4% |
| `resnet18_cifar10_no_bird.pth` | Bird | 95.6% |
| `resnet18_cifar10_no_cat.pth` | Cat | 96.6% |
| `resnet18_cifar10_no_deer.pth` | Deer | 95.2% |
| `resnet18_cifar10_no_dog.pth` | Dog | 96.6% |
| `resnet18_cifar10_no_frog.pth` | Frog | 95.2% |
| `resnet18_cifar10_no_horse.pth` | Horse | 95.3% |
| `resnet18_cifar10_no_ship.pth` | Ship | 95.4% |
| `resnet18_cifar10_no_truck.pth` | Truck | 95.3% |
## Training Details
### Training Procedure
- **Base Model:** ResNet18
- **Dataset:** CIFAR-10
- **Excluded Class:** Varies by model
- **Loss Function:** CrossEntropyLoss
- **Optimizer:** SGD with:
- Learning rate: `0.1`
- Momentum: `0.9`
- Weight decay: `5e-4`
- Nesterov: `True`
- **Scheduler:** CosineAnnealingLR (T_max: `200`)
- **Training Epochs:** `200`
- **Batch Size:** `128`
- **Hardware:** Single GPU
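In PyTorch, the recipe above corresponds to the following setup sketch (the model is any ResNet18 variant, e.g. the CIFAR-style one defined in the loading code further down):
```python
import torch
from torchvision import models

model = models.resnet18(weights=None, num_classes=10)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,
    momentum=0.9,
    weight_decay=5e-4,
    nesterov=True,
)
# Step once per epoch for the full 200 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
```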
### Notes on Training
The training recipe is adapted from the paper **"Benchopt: Reproducible, efficient and collaborative optimization benchmarks"**, which provides a reproducible and optimized setup for training ResNet18 on the CIFAR-10 dataset. This ensures that the training methodology aligns with established benchmarks for reproducibility and comparability.
### Data Preprocessing
The following transformations were applied to the CIFAR-10 dataset:
- **Base Transformations (applied to both training and test sets):**
- Conversion to PyTorch tensors using `ToTensor()`.
- Normalization using mean `(0.4914, 0.4822, 0.4465)` and standard deviation `(0.2023, 0.1994, 0.2010)`.
- **Training Set Augmentation (only for training data):**
- **RandomCrop(32, padding=4):** Randomly crops images with padding for spatial variation.
- **RandomHorizontalFlip():** Randomly flips images horizontally with a 50% probability.
These augmentations help improve the model's ability to generalize by introducing variability in the training data.
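Concretely, these transformations correspond to the following torchvision sketch:
```python
from torchvision import transforms

normalize = transforms.Normalize(
    mean=(0.4914, 0.4822, 0.4465),
    std=(0.2023, 0.1994, 0.2010),
)

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

test_transform = transforms.Compose([
    transforms.ToTensor(),
    normalize,
])
```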
### Model Description
- **Developed by:** Jaeung Lee
- **Model type:** Image Classification
- **License:** MIT
### Related Work
This model is part of the research conducted using the [Machine Unlearning Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator). The tool was developed to compare various machine unlearning methods and their effects on models.
## Uses
### Direct Use
These models can be directly used for evaluating the effect of excluding specific classes from the CIFAR-10 dataset during training.
### Out-of-Scope Use
The models are not suitable for tasks requiring general-purpose image classification beyond the CIFAR-10 dataset.
## How to Get Started with the Model
Use the code below to load the models with the appropriate architecture and weights:
```python
import torch
import torch.nn as nn
from torchvision import models
def get_resnet18(num_classes=10):
model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.maxpool = nn.Identity()
model.fc = nn.Linear(model.fc.in_features, num_classes)
return model
# Load a pretrained model
def load_model(model_path, num_classes=10):
model = get_resnet18(num_classes=num_classes)
model.load_state_dict(torch.load(model_path))
return model
# Example usage
model = load_model("resnet18_cifar10_no_airplane.pth", num_classes=10)
```
## Citation
If you use this repository or its models in your work, please consider citing it:
**Paper:**
[Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods](https://arxiv.org/abs/2508.12730)
**APA:**
Jaeung Lee. (2024). *ResNet18 Models Trained on CIFAR-10 with Class Exclusion*. Retrieved from https://huggingface.co/jaeunglee/resnet18-cifar10-unlearning
## License
This repository is shared under the [MIT License](https://opensource.org/licenses/MIT).
|
Team-Atom/act_record_pp_ryb_t_96_40000
|
Team-Atom
| 2025-08-20T04:06:48Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Team-Atom/PiPl_RYB_test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-20T04:06:21Z |
---
datasets: Team-Atom/PiPl_RYB_test
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
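For programmatic use, a checkpoint like this can typically be loaded through LeRobot's policy classes. A minimal sketch, with the caveat that the import path has moved between lerobot releases and is an assumption here:
```python
# Import path is an assumption; it differs across lerobot versions.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("Team-Atom/act_record_pp_ryb_t_96_40000")
policy.eval()
```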
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755660896
|
quantumxnode
| 2025-08-20T04:00:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T04:00:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tm-hf-repo/reinhard_1500
|
tm-hf-repo
| 2025-08-20T03:58:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T03:57:48Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt: reinhard_1500
license: other
---
# reinhard_1500
<Gallery />
## Model description
## Trigger words
You should use `reinhard_1500` to trigger the image generation.
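With diffusers, a LoRA like this is applied on top of its base pipeline; a minimal sketch, noting that the base checkpoint below is an assumption since the card does not specify one:
```python
import torch
from diffusers import FluxPipeline

# Base checkpoint is an assumption; the card leaves base_model undefined.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("tm-hf-repo/reinhard_1500")

image = pipe("reinhard_1500, portrait photo").images[0]  # trigger word in prompt
image.save("out.png")
```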
## Download model
Weights for this model are available in Safetensors format.
[Download](/tm-hf-repo/reinhard_1500/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-kontext-trainer](https://fal.ai/models/fal-ai/flux-kontext-trainer).
|