| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| htdung167/ViLegalBERT-v0 | htdung167 | 2025-08-22T08:34:10Z | 88 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2025-08-20T07:30:00Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ViLegalBERT-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViLegalBERT-v0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4575
- Accuracy: 0.8898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: adamw_torch_fused with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
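As a sanity check, the reported `total_train_batch_size` is consistent with the per-device batch size and gradient accumulation steps above (assuming a single training device, which the numbers imply):

```python
# Effective batch size = per-device batch size x accumulation steps x devices.
# num_devices = 1 is an assumption inferred from the reported total of 256.
per_device_train_batch_size = 32
gradient_accumulation_steps = 8
num_devices = 1

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(total_train_batch_size)  # 256
```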
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:--------:|
| 0.7791 | 0.0483 | 1000 | 0.7469 | 0.8340 |
| 0.7006 | 0.0965 | 2000 | 0.7024 | 0.8423 |
| 0.6624 | 0.1448 | 3000 | 0.6724 | 0.8477 |
| 0.6255 | 0.1930 | 4000 | 0.6500 | 0.8523 |
| 0.6027 | 0.2413 | 5000 | 0.6289 | 0.8560 |
| 0.5836 | 0.2895 | 6000 | 0.6169 | 0.8583 |
| 0.5743 | 0.3378 | 7000 | 0.6033 | 0.8609 |
| 0.5613 | 0.3860 | 8000 | 0.5947 | 0.8626 |
| 0.5486 | 0.4343 | 9000 | 0.5874 | 0.8642 |
| 0.5447 | 0.4825 | 10000 | 0.5778 | 0.8657 |
| 0.5389 | 0.5308 | 11000 | 0.5710 | 0.8668 |
| 0.5295 | 0.5791 | 12000 | 0.5650 | 0.8688 |
| 0.5188 | 0.6273 | 13000 | 0.5551 | 0.8704 |
| 0.5102 | 0.6756 | 14000 | 0.5491 | 0.8715 |
| 0.5096 | 0.7238 | 15000 | 0.5462 | 0.8723 |
| 0.5052 | 0.7721 | 16000 | 0.5386 | 0.8736 |
| 0.4981 | 0.8203 | 17000 | 0.5339 | 0.8747 |
| 0.491 | 0.8686 | 18000 | 0.5272 | 0.8757 |
| 0.4894 | 0.9168 | 19000 | 0.5243 | 0.8764 |
| 0.4853 | 0.9651 | 20000 | 0.5232 | 0.8768 |
| 0.4812 | 1.0133 | 21000 | 0.5152 | 0.8779 |
| 0.4732 | 1.0616 | 22000 | 0.5143 | 0.8789 |
| 0.474 | 1.1098 | 23000 | 0.5101 | 0.8791 |
| 0.4701 | 1.1581 | 24000 | 0.5060 | 0.8803 |
| 0.4678 | 1.2063 | 25000 | 0.5025 | 0.8806 |
| 0.4661 | 1.2546 | 26000 | 0.5003 | 0.8811 |
| 0.464 | 1.3028 | 27000 | 0.4949 | 0.8822 |
| 0.461 | 1.3511 | 28000 | 0.4929 | 0.8825 |
| 0.4574 | 1.3994 | 29000 | 0.4916 | 0.8829 |
| 0.4598 | 1.4476 | 30000 | 0.4896 | 0.8834 |
| 0.4549 | 1.4959 | 31000 | 0.4878 | 0.8839 |
| 0.4523 | 1.5441 | 32000 | 0.4849 | 0.8846 |
| 0.4482 | 1.5924 | 33000 | 0.4820 | 0.8850 |
| 0.4477 | 1.6406 | 34000 | 0.4802 | 0.8854 |
| 0.4467 | 1.6889 | 35000 | 0.4789 | 0.8853 |
| 0.4434 | 1.7371 | 36000 | 0.4765 | 0.8862 |
| 0.443 | 1.7854 | 37000 | 0.4752 | 0.8865 |
| 0.4417 | 1.8336 | 38000 | 0.4741 | 0.8865 |
| 0.4393 | 1.8819 | 39000 | 0.4713 | 0.8870 |
| 0.4362 | 1.9302 | 40000 | 0.4708 | 0.8874 |
| 0.4356 | 1.9784 | 41000 | 0.4687 | 0.8877 |
| 0.4343 | 2.0266 | 42000 | 0.4677 | 0.8880 |
| 0.4333 | 2.0749 | 43000 | 0.4638 | 0.8888 |
| 0.4319 | 2.1231 | 44000 | 0.4645 | 0.8889 |
| 0.4363 | 2.1714 | 45000 | 0.4633 | 0.8886 |
| 0.4281 | 2.2197 | 46000 | 0.4602 | 0.8892 |
| 0.4242 | 2.2679 | 47000 | 0.4609 | 0.8895 |
| 0.4262 | 2.3162 | 48000 | 0.4576 | 0.8898 |
| 0.4231 | 2.3644 | 49000 | 0.4554 | 0.8904 |
| 0.4197 | 2.4127 | 50000 | 0.4562 | 0.8903 |
| 0.4231 | 2.4609 | 51000 | 0.4556 | 0.8902 |
| 0.422 | 2.5092 | 52000 | 0.4522 | 0.8909 |
| 0.4222 | 2.5574 | 53000 | 0.4526 | 0.8906 |
| 0.4208 | 2.6057 | 54000 | 0.4497 | 0.8916 |
| 0.42 | 2.6539 | 55000 | 0.4510 | 0.8914 |
| 0.4218 | 2.7022 | 56000 | 0.4492 | 0.8920 |
| 0.4162 | 2.7505 | 57000 | 0.4479 | 0.8922 |
| 0.4168 | 2.7987 | 58000 | 0.4466 | 0.8922 |
| 0.418 | 2.8470 | 59000 | 0.4466 | 0.8921 |
| 0.4164 | 2.8952 | 60000 | 0.4447 | 0.8928 |
| 0.4133 | 2.9435 | 61000 | 0.4437 | 0.8929 |
| 0.4103 | 2.9917 | 62000 | 0.4418 | 0.8932 |
| 0.4106 | 3.0400 | 63000 | 0.4397 | 0.8939 |
| 0.4122 | 3.0882 | 64000 | 0.4392 | 0.8938 |
| 0.4082 | 3.1365 | 65000 | 0.4380 | 0.8942 |
| 0.4069 | 3.1847 | 66000 | 0.4379 | 0.8942 |
| 0.4076 | 3.2330 | 67000 | 0.4369 | 0.8944 |
| 0.4079 | 3.2812 | 68000 | 0.4355 | 0.8946 |
| 0.4045 | 3.3295 | 69000 | 0.4351 | 0.8946 |
| 0.4032 | 3.3777 | 70000 | 0.4350 | 0.8950 |
| 0.4043 | 3.4260 | 71000 | 0.4329 | 0.8952 |
| 0.4018 | 3.4742 | 72000 | 0.4319 | 0.8952 |
| 0.4028 | 3.5225 | 73000 | 0.4324 | 0.8954 |
| 0.4017 | 3.5708 | 74000 | 0.4303 | 0.8956 |
| 0.4013 | 3.6190 | 75000 | 0.4312 | 0.8957 |
| 0.4003 | 3.6673 | 76000 | 0.4290 | 0.8959 |
| 0.397 | 3.7155 | 77000 | 0.4289 | 0.8958 |
| 0.3978 | 3.7638 | 78000 | 0.4280 | 0.8962 |
| 0.3985 | 3.8120 | 79000 | 0.4266 | 0.8967 |
| 0.3972 | 3.8603 | 80000 | 0.4257 | 0.8966 |
| 0.3926 | 3.9085 | 81000 | 0.4241 | 0.8969 |
| 0.397 | 3.9568 | 82000 | 0.4244 | 0.8968 |
| 0.3963 | 4.0050 | 83000 | 0.4239 | 0.8971 |
| 0.3952 | 4.0533 | 84000 | 0.4234 | 0.8970 |
| 0.3916 | 4.1015 | 85000 | 0.4217 | 0.8973 |
| 0.3938 | 4.1498 | 86000 | 0.4190 | 0.8979 |
| 0.3926 | 4.1980 | 87000 | 0.4188 | 0.8981 |
| 0.3924 | 4.2463 | 88000 | 0.4198 | 0.8978 |
| 0.3895 | 4.2945 | 89000 | 0.4195 | 0.8978 |
| 0.3918 | 4.3428 | 90000 | 0.4186 | 0.8982 |
| 0.3906 | 4.3911 | 91000 | 0.4182 | 0.8983 |
| 0.3927 | 4.4393 | 92000 | 0.4178 | 0.8983 |
| 0.3902 | 4.4876 | 93000 | 0.4170 | 0.8986 |
| 0.3924 | 4.5358 | 94000 | 0.4179 | 0.8982 |
| 0.3879 | 4.5841 | 95000 | 0.4144 | 0.8989 |
| 0.3897 | 4.6323 | 96000 | 0.4150 | 0.8989 |
| 0.3903 | 4.6806 | 97000 | 0.4158 | 0.8989 |
| 0.3893 | 4.7288 | 98000 | 0.4190 | 0.8983 |
| 0.5424 | 4.7771 | 99000 | 0.6255 | 0.8573 |
| 0.4768 | 4.8253 | 100000 | 0.4770 | 0.8850 |
| 0.4418 | 4.8736 | 101000 | 0.4506 | 0.8913 |
| 0.4562 | 4.9219 | 102000 | 0.4713 | 0.8871 |
| 0.4494 | 4.9701 | 103000 | 0.4575 | 0.8898 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
| VIDEOS-18-brown-girl-viral-video-Clip-XX/New.full.videos.brown.girl.Viral.Video.Official.Tutorial | VIDEOS-18-brown-girl-viral-video-Clip-XX | 2025-08-22T08:33:01Z | 0 | 0 | null | ["region:us"] | null | 2025-08-22T08:32:52Z |
|
| internlm/Intern-S1-mini-FP8 | internlm | 2025-08-22T08:31:43Z | 46 | 1 | null | ["safetensors", "interns1", "image-text-to-text", "conversational", "custom_code", "arxiv:2508.15763", "base_model:internlm/Intern-S1-mini", "base_model:quantized:internlm/Intern-S1-mini", "license:apache-2.0", "fp8", "region:us"] | image-text-to-text | 2025-08-18T06:37:20Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
base_model:
- internlm/Intern-S1-mini
---
## Intern-S1-mini
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/642695e5274e7ad464c8a5ba/E43cgEXBRWjVJlU_-hdh6.png" />
<div> </div>
[GitHub Repo](https://github.com/InternLM/Intern-S1) • [Model Collections](https://huggingface.co/collections/internlm/intern-s1-6882e325e8ac1c58ba108aa5) • [Technical Report](https://arxiv.org/abs/2508.15763) • [Online Chat](https://chat.intern-ai.org.cn/)
</div>
<p align="center">
Join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://cdn.vansin.top/intern-s1.jpg" target="_blank">WeChat</a>
</p>
## Introduction
We introduce **Intern-S1-mini**, a lightweight open-source multimodal reasoning model based on the same techniques as **[Intern-S1](https://huggingface.co/internlm/Intern-S1)**.
Built upon an 8B dense language model (Qwen3) and a 0.3B vision encoder (InternViT), Intern-S1-mini has been further pretrained on **5 trillion tokens** of multimodal data, including over **2.5 trillion scientific-domain tokens**. This enables the model to retain strong general capabilities while excelling in specialized scientific domains such as **interpreting chemical structures, understanding protein sequences, and planning compound synthesis routes**, making Intern-S1-mini a capable research assistant for real-world scientific applications.
## Features
- Strong performance across language and vision reasoning benchmarks, especially scientific tasks.
- Continuously pretrained on a massive 5T token dataset, with over 50% specialized scientific data, embedding deep domain expertise.
- Dynamic tokenizer enables native understanding of molecular formulas and protein sequences.
## Performance
We evaluate Intern-S1-mini on a range of benchmarks, including both general and scientific datasets, and report a performance comparison with recent VLMs and LLMs below.
|            |                | Intern-S1-mini | Qwen3-8B | GLM-4.1V | MiMo-VL-7B-RL-2508 |
|------------|----------------|----------------|----------|----------|--------------------|
| General    | MMLU-Pro       | **74.78**      | 73.7     | 57.1     | 73.93              |
|            | MMMU           | **72.33**      | N/A      | 69.9     | 70.4               |
|            | MMStar         | 65.2           | N/A      | 71.5     | 72.9               |
|            | GPQA           | **65.15**      | 62       | 50.32    | 60.35              |
|            | AIME2024       | **84.58**      | 76       | 36.2     | 72.6               |
|            | AIME2025       | **80**         | 67.3     | 32       | 64.4               |
|            | MathVision     | 51.41          | N/A      | 53.9     | 54.5               |
|            | MathVista      | 70.3           | N/A      | 80.7     | 79.4               |
|            | IFEval         | 81.15          | 85       | 71.53    | 71.4               |
| Scientific | SFE            | 35.84          | N/A      | 43.2     | 43.9               |
|            | Physics        | **28.76**      | N/A      | 28.3     | 28.2               |
|            | SmolInstruct   | **32.2**       | 17.6     | 18.1     | 16.11              |
|            | ChemBench      | **76.47**      | 61.1     | 56.2     | 66.78              |
|            | MatBench       | **61.55**      | 45.24    | 54.3     | 46.9               |
|            | MicroVQA       | **56.62**      | N/A      | 50.2     | 50.96              |
|            | ProteinLMBench | 58.47          | 59.1     | 58.3     | 59.8               |
|            | MSEarthMCQ     | **58.12**      | N/A      | 50.3     | 47.3               |
|            | XLRS-Bench     | **51.63**      | N/A      | 49.8     | 12.29              |
We use [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models.
## Quick Start
### Sampling Parameters
We recommend the following sampling hyperparameters for best results:
```python
top_p = 1.0
top_k = 50
min_p = 0.0
temperature = 0.8
```
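A sketch of how these values might be passed to `model.generate` as a kwargs dict; the `do_sample=True` flag is our assumption, since top-p/top-k/temperature only take effect when sampling is enabled:

```python
# Recommended sampling parameters, packaged so they can be splatted into
# `model.generate(**inputs, **gen_kwargs)` alongside the model inputs.
gen_kwargs = {
    "do_sample": True,  # assumption: sampling must be enabled for these settings
    "top_p": 1.0,
    "top_k": 50,
    "min_p": 0.0,
    "temperature": 0.8,
}
```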
### Transformers
The following demo code illustrates how to generate from text and multimodal inputs.
> **Please use `transformers>=4.55.2` to ensure the model works correctly.**
#### Text input
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini-FP8"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "tell me about an interesting physical phenomenon."},
],
}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
#### Image input
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini-FP8"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
{"type": "text", "text": "Please describe the image explicitly."},
],
}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
#### Video input
Please ensure that the decord video decoding library is installed via `pip install decord`. To avoid out-of-memory (OOM) errors, install FlashAttention and use at least two GPUs.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini-FP8"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"url": "https://huggingface.co/datasets/hf-internal-testing/fixtures_videos/resolve/main/tennis.mp4",
},
{"type": "text", "text": "What type of shot is the man performing?"},
],
}
]
inputs = processor.apply_chat_template(
messages,
return_tensors="pt",
add_generation_prompt=True,
video_load_backend="decord",
tokenize=True,
return_dict=True,
).to(model.device, dtype=torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
### Serving
The minimum hardware requirements for deploying Intern-S1 series models are:
| Model | A100(GPUs) | H800(GPUs) | H100(GPUs) | H200(GPUs) |
| :---------------------------------------------------------------------: | :--------: | :--------: | :--------: | :--------: |
| [internlm/Intern-S1-mini](https://huggingface.co/internlm/Intern-S1-mini) | 1 | 1 | 1 | 1 |
| [internlm/Intern-S1-mini-FP8](https://huggingface.co/internlm/Intern-S1-mini-FP8) | - | 1 | 1 | 1 |
You can use one of the following LLM inference frameworks to create an OpenAI-compatible server:
#### [lmdeploy (>=0.9.2)](https://github.com/InternLM/lmdeploy)
```bash
lmdeploy serve api_server internlm/Intern-S1-mini-FP8 --reasoning-parser intern-s1 --tool-call-parser intern-s1
```
#### [vllm (>=0.10.1)](https://github.com/vllm-project/vllm)
```bash
vllm serve internlm/Intern-S1-mini-FP8 --trust-remote-code
```
#### [sglang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server \
--model-path internlm/Intern-S1-mini-FP8 \
--trust-remote-code \
--grammar-backend none
```
#### [ollama](https://github.com/ollama/ollama) for local deployment
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/interns1-mini
# run model
ollama run internlm/interns1-mini
# then use openai client to call on http://localhost:11434/v1
```
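A minimal sketch of the OpenAI-client call mentioned in the last comment above. Assumptions: Ollama's default port 11434, the model tag used in `ollama pull`, and a placeholder API key (Ollama ignores it, but the client requires one):

```python
# Build the request for Ollama's OpenAI-compatible endpoint; the commented
# lines show the actual call, which requires a running Ollama server.
base_url = "http://localhost:11434/v1"
payload = {
    "model": "internlm/interns1-mini",
    "messages": [{"role": "user", "content": "Briefly explain FP8 quantization."}],
}
# from openai import OpenAI
# client = OpenAI(base_url=base_url, api_key="ollama")
# response = client.chat.completions.create(**payload)
# print(response.choices[0].message.content)
```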
## Advanced Usage
### Tool Calling
Many Large Language Models (LLMs) now feature **Tool Calling**, a powerful capability that allows them to extend their functionality by interacting with external tools and APIs. This enables models to perform tasks like fetching up-to-the-minute information, running code, or calling functions within other applications.
A key advantage for developers is that a growing number of open-source LLMs are designed to be compatible with the OpenAI API. This means you can leverage the same familiar syntax and structure from the OpenAI library to implement tool calling with these open-source models. As a result, the code demonstrated in this tutorial is versatile: it works not just with OpenAI models, but with any model that follows the same interface standard.
To illustrate how this works, let's dive into a practical code example that uses tool calling to get the latest weather forecast (based on the LMDeploy API server).
```python
from openai import OpenAI
import json
def get_current_temperature(location: str, unit: str = "celsius"):
"""Get current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, State, Country".
unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
Returns:
the temperature, the location, and the unit in a dict
"""
return {
"temperature": 26.1,
"location": location,
"unit": unit,
}
def get_temperature_date(location: str, date: str, unit: str = "celsius"):
"""Get temperature at a location and date.
Args:
location: The location to get the temperature for, in the format "City, State, Country".
date: The date to get the temperature for, in the format "Year-Month-Day".
unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
Returns:
the temperature, the location, the date and the unit in a dict
"""
return {
"temperature": 25.9,
"location": location,
"date": date,
"unit": unit,
}
def get_function_by_name(name):
if name == "get_current_temperature":
return get_current_temperature
if name == "get_temperature_date":
return get_temperature_date
tools = [{
'type': 'function',
'function': {
'name': 'get_current_temperature',
'description': 'Get current temperature at a location.',
'parameters': {
'type': 'object',
'properties': {
'location': {
'type': 'string',
'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
},
'unit': {
'type': 'string',
'enum': [
'celsius',
'fahrenheit'
],
'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
}
},
'required': [
'location'
]
}
}
}, {
'type': 'function',
'function': {
'name': 'get_temperature_date',
'description': 'Get temperature at a location and date.',
'parameters': {
'type': 'object',
'properties': {
'location': {
'type': 'string',
'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
},
'date': {
'type': 'string',
'description': 'The date to get the temperature for, in the format \'Year-Month-Day\'.'
},
'unit': {
'type': 'string',
'enum': [
'celsius',
'fahrenheit'
],
'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
}
},
'required': [
'location',
'date'
]
}
}
}]
messages = [
{'role': 'user', 'content': 'Today is 2024-11-14, What\'s the temperature in San Francisco now? How about tomorrow?'}
]
openai_api_key = "EMPTY"
openai_api_base = "http://0.0.0.0:23333/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=messages,
max_tokens=32768,
temperature=0.8,
top_p=0.8,
stream=False,
extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False),
tools=tools)
print(response.choices[0].message)
messages.append(response.choices[0].message)
for tool_call in response.choices[0].message.tool_calls:
tool_call_args = json.loads(tool_call.function.arguments)
tool_call_result = get_function_by_name(tool_call.function.name)(**tool_call_args)
tool_call_result = json.dumps(tool_call_result, ensure_ascii=False)
messages.append({
'role': 'tool',
'name': tool_call.function.name,
'content': tool_call_result,
'tool_call_id': tool_call.id
})
response = client.chat.completions.create(
model=model_name,
messages=messages,
temperature=0.8,
top_p=0.8,
stream=False,
extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False),
tools=tools)
print(response.choices[0].message.content)
```
### Switching Between Thinking and Non-Thinking Modes
Intern-S1-mini enables thinking mode by default, enhancing the model's reasoning capabilities to generate higher-quality responses. This feature can be disabled by setting `enable_thinking=False` in `tokenizer.apply_chat_template`:
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
    enable_thinking=False  # disable thinking mode
)
```
With LMDeploy serving Intern-S1-mini models, you can dynamically control the thinking mode by adjusting the `enable_thinking` parameter in your requests.
```python
from openai import OpenAI
import json
messages = [
{
'role': 'user',
'content': 'who are you'
}, {
'role': 'assistant',
'content': 'I am an AI'
}, {
'role': 'user',
'content': 'AGI is?'
}]
openai_api_key = "EMPTY"
openai_api_base = "http://0.0.0.0:23333/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=messages,
temperature=0.8,
top_p=0.8,
max_tokens=2048,
extra_body={
"enable_thinking": False,
}
)
print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False))
```
For vLLM and SGLang users, configure this through:
```python
extra_body={
"chat_template_kwargs": {"enable_thinking": False}
}
```
## Fine-tuning
See this [documentation](https://github.com/InternLM/Intern-S1/blob/main/docs/sft.md) for more details.
## Citation
If you find this work useful, please cite:
```
@misc{bai2025interns1scientificmultimodalfoundation,
title={Intern-S1: A Scientific Multimodal Foundation Model},
author={Lei Bai and Zhongrui Cai and Maosong Cao and Weihan Cao and Chiyu Chen and Haojiong Chen and Kai Chen and Pengcheng Chen and Ying Chen and Yongkang Chen and Yu Cheng and Yu Cheng and Pei Chu and Tao Chu and Erfei Cui and Ganqu Cui and Long Cui and Ziyun Cui and Nianchen Deng and Ning Ding and Nanqin Dong and Peijie Dong and Shihan Dou and Sinan Du and Haodong Duan and Caihua Fan and Ben Gao and Changjiang Gao and Jianfei Gao and Songyang Gao and Yang Gao and Zhangwei Gao and Jiaye Ge and Qiming Ge and Lixin Gu and Yuzhe Gu and Aijia Guo and Qipeng Guo and Xu Guo and Conghui He and Junjun He and Yili Hong and Siyuan Hou and Caiyu Hu and Hanglei Hu and Jucheng Hu and Ming Hu and Zhouqi Hua and Haian Huang and Junhao Huang and Xu Huang and Zixian Huang and Zhe Jiang and Lingkai Kong and Linyang Li and Peiji Li and Pengze Li and Shuaibin Li and Tianbin Li and Wei Li and Yuqiang Li and Dahua Lin and Junyao Lin and Tianyi Lin and Zhishan Lin and Hongwei Liu and Jiangning Liu and Jiyao Liu and Junnan Liu and Kai Liu and Kaiwen Liu and Kuikun Liu and Shichun Liu and Shudong Liu and Wei Liu and Xinyao Liu and Yuhong Liu and Zhan Liu and Yinquan Lu and Haijun Lv and Hongxia Lv and Huijie Lv and Qidang Lv and Ying Lv and Chengqi Lyu and Chenglong Ma and Jianpeng Ma and Ren Ma and Runmin Ma and Runyuan Ma and Xinzhu Ma and Yichuan Ma and Zihan Ma and Sixuan Mi and Junzhi Ning and Wenchang Ning and Xinle Pang and Jiahui Peng and Runyu Peng and Yu Qiao and Jiantao Qiu and Xiaoye Qu and Yuan Qu and Yuchen Ren and Fukai Shang and Wenqi Shao and Junhao Shen and Shuaike Shen and Chunfeng Song and Demin Song and Diping Song and Chenlin Su and Weijie Su and Weigao Sun and Yu Sun and Qian Tan and Cheng Tang and Huanze Tang and Kexian Tang and Shixiang Tang and Jian Tong and Aoran Wang and Bin Wang and Dong Wang and Lintao Wang and Rui Wang and Weiyun Wang and Wenhai Wang and Yi Wang and Ziyi Wang and Ling-I Wu and Wen Wu and Yue Wu and Zijian Wu and Linchen Xiao and Shuhao Xing 
and Chao Xu and Huihui Xu and Jun Xu and Ruiliang Xu and Wanghan Xu and GanLin Yang and Yuming Yang and Haochen Ye and Jin Ye and Shenglong Ye and Jia Yu and Jiashuo Yu and Jing Yu and Fei Yuan and Bo Zhang and Chao Zhang and Chen Zhang and Hongjie Zhang and Jin Zhang and Qiaosheng Zhang and Qiuyinzhe Zhang and Songyang Zhang and Taolin Zhang and Wenlong Zhang and Wenwei Zhang and Yechen Zhang and Ziyang Zhang and Haiteng Zhao and Qian Zhao and Xiangyu Zhao and Xiangyu Zhao and Bowen Zhou and Dongzhan Zhou and Peiheng Zhou and Yuhao Zhou and Yunhua Zhou and Dongsheng Zhu and Lin Zhu and Yicheng Zou},
year={2025},
eprint={2508.15763},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.15763},
}
```
|
| 18-VIDEOS-Uppal-Farm-Girl-viral-video-Clip/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.Tutorial | 18-VIDEOS-Uppal-Farm-Girl-viral-video-Clip | 2025-08-22T08:30:19Z | 0 | 0 | null | ["region:us"] | null | 2025-08-22T08:30:12Z |
|
| finding1/ERNIE-4.5-300B-A47B-MLX-8.5bpw | finding1 | 2025-08-22T08:29:51Z | 2 | 0 | mlx | ["mlx", "safetensors", "ernie4_5_moe", "ERNIE4.5", "text-generation", "conversational", "en", "zh", "base_model:baidu/ERNIE-4.5-300B-A47B-PT", "base_model:quantized:baidu/ERNIE-4.5-300B-A47B-PT", "license:apache-2.0", "8-bit", "region:us"] | text-generation | 2025-08-21T08:45:25Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- ERNIE4.5
- mlx
library_name: mlx
base_model: baidu/ERNIE-4.5-300B-A47B-PT
---
This model [finding1/ERNIE-4.5-300B-A47B-MLX-8.5bpw](https://huggingface.co/finding1/ERNIE-4.5-300B-A47B-MLX-8.5bpw) was
converted to MLX format from [baidu/ERNIE-4.5-300B-A47B-PT](https://huggingface.co/baidu/ERNIE-4.5-300B-A47B-PT)
using mlx-lm version **0.26.3** with `mlx_lm.convert --quantize --q-bits 8 --hf-path baidu/ERNIE-4.5-300B-A47B-PT`.
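The "8.5bpw" in the repo name is consistent with MLX's affine quantization overhead: on top of the 8 quantized bits, each group of weights stores an fp16 scale and an fp16 bias. Here is a quick arithmetic sketch; the group size of 64 and fp16 scale/bias are assumptions based on mlx-lm defaults, not stated in the card:

```python
# Bits per weight = quantized bits + per-group (scale + bias) overhead.
# group_size = 64 and fp16 scale/bias are assumed mlx-lm defaults.
q_bits = 8
group_size = 64
scale_bits = bias_bits = 16

bits_per_weight = q_bits + (scale_bits + bias_bits) / group_size
print(bits_per_weight)  # 8.5
```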
|
| Wuhall/code-search-net-tokenizer | Wuhall | 2025-08-22T08:29:31Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-22T08:29:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| kapalbalap/blockassist-bc-peaceful_wary_owl_1755851289 | kapalbalap | 2025-08-22T08:29:04Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T08:28:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| eddie-c/MoD-MoE-AdamW | eddie-c | 2025-08-22T08:28:42Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-22T08:28:42Z |
---
license: apache-2.0
---
|
| ypszn/blockassist-bc-yapping_pawing_worm_1755851170 | ypszn | 2025-08-22T08:27:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T08:26:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
flralex1408/first-model
|
flralex1408
| 2025-08-22T08:26:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T08:02:57Z |
---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
model_name: first-model
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for first-model
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flralex1408/first-model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.2.2
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755851047
|
Dejiat
| 2025-08-22T08:24:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:24:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pasithbas159/Gemma3_HII_satellite_v3
|
pasithbas159
| 2025-08-22T08:24:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T08:23:10Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pasithbas159
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755851012
|
0xaoyama
| 2025-08-22T08:23:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:23:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755849464
|
vwzyrraz7l
| 2025-08-22T08:23:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:23:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
colabbear/bce-reranker-base_v1-Q4_K_M-GGUF
|
colabbear
| 2025-08-22T08:21:34Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"transformers",
"llama-cpp",
"gguf-my-repo",
"text-classification",
"en",
"zh",
"ja",
"ko",
"base_model:maidalun1020/bce-reranker-base_v1",
"base_model:quantized:maidalun1020/bce-reranker-base_v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] |
text-classification
| 2025-08-22T08:21:30Z |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- transformers
- sentence-transformers
- llama-cpp
- gguf-my-repo
language:
- en
- zh
- ja
- ko
base_model: maidalun1020/bce-reranker-base_v1
---
# colabbear/bce-reranker-base_v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`maidalun1020/bce-reranker-base_v1`](https://huggingface.co/maidalun1020/bce-reranker-base_v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maidalun1020/bce-reranker-base_v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo colabbear/bce-reranker-base_v1-Q4_K_M-GGUF --hf-file bce-reranker-base_v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo colabbear/bce-reranker-base_v1-Q4_K_M-GGUF --hf-file bce-reranker-base_v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo colabbear/bce-reranker-base_v1-Q4_K_M-GGUF --hf-file bce-reranker-base_v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo colabbear/bce-reranker-base_v1-Q4_K_M-GGUF --hf-file bce-reranker-base_v1-q4_k_m.gguf -c 2048
```
|
artfulf/test-google-gemma-2-2b-it
|
artfulf
| 2025-08-22T08:18:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:41:12Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755849058
|
mang3dd
| 2025-08-22T08:16:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:16:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tammycra121/blockassist-bc-marine_rangy_eel_1755849062
|
tammycra121
| 2025-08-22T08:15:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine rangy eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:15:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine rangy eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mynkjd/watergel
|
mynkjd
| 2025-08-22T08:15:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-22T08:15:01Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Watergel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/mynkjd/watergel/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mynkjd/watergel', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
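For intuition about the rank listed above: a rank-r LoRA adapter replaces a full-matrix update with two small factors, so its trainable-parameter count per adapted weight matrix is `r * (d_in + d_out)`. A minimal sketch (illustrative only, not code from this training run; the 3072x3072 matrix size is a hypothetical example):

```python
def lora_params(d_out: int, d_in: int, r: int) -> int:
    # LoRA learns B (d_out x r) and A (r x d_in) instead of a full
    # d_out x d_in update, so it adds r * (d_in + d_out) parameters.
    return r * (d_in + d_out)

# Hypothetical 3072x3072 projection with the rank-16 setting above:
full = 3072 * 3072                    # parameters in the frozen matrix
added = lora_params(3072, 3072, 16)   # trainable parameters LoRA adds
print(added, f"{added / full:.2%}")   # roughly 1% of the full matrix
```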
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mynkjd/watergel/discussions) to add images that show off what you've made with this LoRA.
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755850430
|
0xaoyama
| 2025-08-22T08:14:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:14:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
artfulf/test-deepseek-ai-DeepSeek-R1-Distill-Llama-8B
|
artfulf
| 2025-08-22T08:13:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:41:08Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
artfulf/test-google-gemma-3-1b-it
|
artfulf
| 2025-08-22T08:13:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:41:10Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755850337
|
IvanJAjebu
| 2025-08-22T08:13:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:13:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
caolahuu121/blockassist-bc-solitary_tenacious_gerbil_1755848949
|
caolahuu121
| 2025-08-22T08:12:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary tenacious gerbil",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:12:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary tenacious gerbil
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
InfoJelly/blockassist-bc-majestic_prehistoric_capybara_1755850323
|
InfoJelly
| 2025-08-22T08:12:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic prehistoric capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:12:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic prehistoric capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755850195
|
2hpsatt
| 2025-08-22T08:11:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:11:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755850214
|
ypszn
| 2025-08-22T08:10:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:10:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GeniusJunP/grab_candy_policy
|
GeniusJunP
| 2025-08-22T08:10:26Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:GeniusJunP/grab_candy",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-22T07:51:44Z |
---
datasets: GeniusJunP/grab_candy
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
artfulf/test-deepseek-ai-DeepSeek-R1-Distill-Qwen-7B
|
artfulf
| 2025-08-22T08:09:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:41:07Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
InfoJelly/blockassist-bc-majestic_prehistoric_capybara_1755850096
|
InfoJelly
| 2025-08-22T08:08:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic prehistoric capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:08:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic prehistoric capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mynkjd/latest
|
mynkjd
| 2025-08-22T08:08:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-22T08:08:41Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Latest
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/mynkjd/latest/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mynkjd/latest', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/mynkjd/latest/discussions) to add images that show off what you've made with this LoRA.
|
nema122/blockassist-bc-robust_fluffy_ram_1755849762
|
nema122
| 2025-08-22T08:03:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:03:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755849668
|
kapalbalap
| 2025-08-22T08:02:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T08:01:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
InfoJelly/blockassist-bc-majestic_prehistoric_capybara_1755849563
|
InfoJelly
| 2025-08-22T07:59:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic prehistoric capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:59:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic prehistoric capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755849507
|
ypszn
| 2025-08-22T07:59:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:59:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ranveerphakade/ISL-Sign-Lang-Detection
|
ranveerphakade
| 2025-08-22T07:59:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-22T07:57:50Z |
# Indian Sign Language Detection with IoT Integration
A real-time Indian Sign Language (ISL) detection system using TensorFlow, MediaPipe, and an ESP32-CAM. The project recognizes hand gestures for the letters A-Z and digits 0-9 in real time using either a webcam or the ESP32-CAM, with results displayed on an OLED screen.
## Features
- Real-time hand gesture recognition
- Support for 36 different signs (A-Z, 0-9)
- Word suggestions based on detected gestures
- IoT integration with ESP32-CAM
- OLED display output
- Support for both single and double-handed gestures
- ~98% accuracy on test data
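The word-suggestion feature listed above can be approximated with simple prefix matching over a vocabulary. The sketch below is illustrative; the project's actual word list and ranking may differ:

```python
# Sketch of prefix-based word suggestions from a sequence of detected letters.
# The vocabulary and ranking here are illustrative only.

def suggest(detected_letters, vocabulary, k=3):
    """Return up to k vocabulary words starting with the detected prefix."""
    prefix = "".join(detected_letters).upper()
    matches = [w for w in vocabulary if w.upper().startswith(prefix)]
    return sorted(matches)[:k]

vocab = ["HELLO", "HELP", "HELMET", "CAT", "CALL"]
print(suggest(["H", "E", "L"], vocab))  # ['HELLO', 'HELMET', 'HELP']
```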
## System Architecture
1. **Image Capture**:
- ESP32-CAM captures images
- Sends via HTTP POST to Flask server
2. **Processing Server**:
- Flask server receives images
- Processes using MediaPipe for hand landmark detection
- Uses TensorFlow model for gesture recognition
3. **Display**:
- Results sent back to ESP32
- Displayed on SSD1306 OLED screen
- Shows gesture and confidence score
## Requirements
### Software
- Python 3.8 or higher
- Arduino IDE for ESP32
- Required Python packages (see requirements.txt)
### Hardware
- ESP32-CAM module
- SSD1306 OLED Display (128x64)
- USB-TTL converter for ESP32 programming
- Connecting wires
- 5V power supply
## Installation
1. Clone the repository:
```bash
git clone https://github.com/ranveerphakade/ISL-sign-lang-detection.git
cd ISL-sign-lang-detection
```
2. Create and activate a virtual environment:
```bash
# On Windows
python -m venv venv
venv\Scripts\activate
# On Linux/Mac
python3 -m venv venv
source venv/bin/activate
```
3. Install required packages:
```bash
pip install -r requirements.txt
```
4. Configure ESP32:
- Open `esp32_code/esp32_cam_oled.ino` in Arduino IDE
- Install required libraries:
- ESP32 board support
- Adafruit SSD1306
- Adafruit GFX
- Update WiFi credentials and server IP
- Upload to ESP32-CAM
## Project Structure
```
ISL-sign-lang-detection/
├── flask_server/
│   ├── app.py               # Flask server
│   ├── static/              # Static files
│   ├── templates/           # HTML templates
│   └── test_client.py       # Test script
├── esp32_code/
│   └── esp32_cam_oled.ino   # ESP32 code
├── dummy_input/             # Test images
├── gesture_model.h5         # Trained model
├── train.py                 # Training script
├── detect.py                # Webcam detection script
├── process.py               # Data processing
└── README.md
```
## Usage
### Server Setup
1. Start the Flask server:
```bash
cd flask_server
python app.py
```
### Testing Without Hardware
1. Add test images to `dummy_input/` directory
2. Run the test client:
```bash
cd flask_server
python test_client.py
```
### Hardware Setup
1. Connect OLED display to ESP32-CAM:
   - VCC → 5V
   - GND → GND
   - SCL → GPIO22
   - SDA → GPIO21
2. Power up the ESP32-CAM
3. The system will automatically:
- Connect to WiFi
- Start capturing images
- Display results on OLED
## Model Architecture
The system uses a deep neural network with:
- Input layer: Hand landmark features
- Hidden layers with dropout and batch normalization
- Output layer: 36 classes (A-Z, 0-9)
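As a sketch of how the input-layer features can be built, MediaPipe-style hand landmarks are typically translated so the wrist is the origin, scaled, and flattened. The actual preprocessing in `process.py` may differ:

```python
# Sketch of turning MediaPipe-style hand landmarks into a feature vector:
# translate so the wrist is the origin, scale by the largest coordinate
# magnitude, then flatten. The project's process.py may normalize differently.

def landmarks_to_features(landmarks):
    """landmarks: list of 21 (x, y) tuples -> flat, normalized feature list."""
    wx, wy = landmarks[0]                       # wrist is landmark 0
    rel = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [v / scale for pt in rel for v in pt]

# 21 dummy landmarks on a diagonal line
feats = landmarks_to_features([(i * 0.01, i * 0.02) for i in range(21)])
print(len(feats))  # 42 features (21 landmarks x 2 coordinates)
```

Normalizing relative to the wrist makes the features invariant to where the hand appears in the frame.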
## Performance
- Training accuracy: ~98%
- Real-time detection with confidence scores
- Gesture stability checking to prevent false detections
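The stability check mentioned above can be as simple as accepting a gesture only after it persists for several consecutive frames. The threshold below is illustrative:

```python
# Sketch of a gesture-stability filter: only accept a prediction after it has
# been seen for `min_frames` consecutive frames. The threshold is illustrative.

class StableGesture:
    def __init__(self, min_frames=5):
        self.min_frames = min_frames
        self.current = None
        self.count = 0

    def update(self, prediction):
        """Feed one per-frame prediction; return it once stable, else None."""
        if prediction == self.current:
            self.count += 1
        else:
            self.current, self.count = prediction, 1
        return prediction if self.count >= self.min_frames else None

filt = StableGesture(min_frames=3)
stream = ["A", "A", "B", "B", "B", "B"]
print([filt.update(p) for p in stream])  # [None, None, None, None, 'B', 'B']
```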
## Contributing
Contributions are welcome. To contribute:
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- TensorFlow team for the deep learning framework
- MediaPipe team for the hand landmark detection system
- OpenCV team for the computer vision tools
- ESP32 community for IoT support
|
abemi/test
|
abemi
| 2025-08-22T07:57:24Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:abemi/record-test_2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-22T07:57:07Z |
---
datasets: abemi/record-test_2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
artfulf/test-Qwen-Qwen2.5-7B-Instruct
|
artfulf
| 2025-08-22T07:55:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:41:00Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hwang2006/qwen2.5-7b-alpaca-1pct-lora
|
hwang2006
| 2025-08-22T07:54:18Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"unsloth",
"qwen",
"instruction-tuning",
"text-generation",
"conversational",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-22T07:54:12Z |
---
license: "apache-2.0"
base_model: "Qwen/Qwen2.5-7B-Instruct"
tags: ["lora", "unsloth", "peft", "qwen", "instruction-tuning"]
language: ["en"]
datasets: ["yahma/alpaca-cleaned"]
library_name: peft
pipeline_tag: text-generation
---
# LoRA Adapter for Qwen/Qwen2.5-7B-Instruct
This repository hosts a **LoRA adapter** (and tokenizer files) trained on top of **Qwen/Qwen2.5-7B-Instruct**.
## ✨ What's inside
- **PEFT type**: LORA
- **LoRA r**: 16
- **LoRA alpha**: 16
- **LoRA dropout**: 0.0
- **Target modules**: q_proj, gate_proj, o_proj, down_proj, k_proj, v_proj, up_proj
## 📚 Datasets
- yahma/alpaca-cleaned
## 🌍 Languages
- en
## 🚀 Usage
### (A) Use adapter with the **official base model**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
base = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "hwang2006/qwen2.5-7b-alpaca-1pct-lora"
tok = AutoTokenizer.from_pretrained(base)
base_model = AutoModelForCausalLM.from_pretrained(
base,
torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
messages = [
{"role":"system","content":"You are a helpful assistant."},
{"role":"user","content":"Quick test?"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```
### (B) 4-bit on the fly (if VRAM is tight)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "hwang2006/qwen2.5-7b-alpaca-1pct-lora"
tok = AutoTokenizer.from_pretrained(base)
base_model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```
## ⚠️ Notes
- Use a **compatible base** (architecture & tokenizer) with this LoRA.
- This repo contains **only** adapters/tokenizer, not full model weights.
- License here reflects this adapter's repository. Ensure the **base model's license** fits your use.
|
abhinayadutta/flan-t5-large-counter-speech-gen_QLORA_v2
|
abhinayadutta
| 2025-08-22T07:54:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T07:49:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1755848076
|
Medved444
| 2025-08-22T07:54:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing finicky manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:53:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pkj1702/crypto-longshort_enter-8b
|
pkj1702
| 2025-08-22T07:53:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:43:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FlagRelease/Qwen3-235B-A22B-Instruct-2507-hygon-FlagOS
|
FlagRelease
| 2025-08-22T07:52:28Z | 0 | 0 | null |
[
"safetensors",
"qwen3_moe",
"region:us"
] | null | 2025-08-22T01:49:42Z |
# Introduction
**FlagOS** is a unified heterogeneous computing software stack for large models, co-developed with leading global chip manufacturers. With core technologies such as the **FlagScale** distributed training/inference framework, **FlagGems** universal operator library, **FlagCX** communication library, and **FlagTree** unified compiler, the **FlagRelease** platform leverages the FlagOS stack to automatically produce and release various combinations of <chip + open-source model>. This enables efficient and automated model migration across diverse chips, opening a new chapter for large model deployment and application.
Based on this, the **Qwen3-235B-A22B-Instruct-2507-hygon-FlagOS** model is adapted for the Hygon chip using the FlagOS software stack, enabling:
### Integrated Deployment
- Deep integration with the open-source [FlagScale framework](https://github.com/FlagOpen/FlagScale)
- Out-of-the-box inference scripts with pre-configured hardware and software parameters
- Released **FlagOS** container image supporting deployment within minutes
### Consistency Validation
- Rigorously evaluated through benchmark testing: performance and results from the FlagOS software stack are compared against native stacks on multiple public benchmarks.
# Technical Overview
## **FlagScale Distributed Training and Inference Framework**
FlagScale is an end-to-end framework for large models across heterogeneous computing resources, maximizing computational efficiency and ensuring model validity through core technologies. Its key advantages include:
- **Unified Deployment Interface:** Standardized command-line tools support one-click service deployment across multiple hardware platforms, significantly reducing adaptation costs in heterogeneous environments.
- **Intelligent Parallel Optimization:** Automatically generates optimal distributed parallel strategies based on chip computing characteristics, achieving dynamic load balancing of computation/communication resources.
- **Seamless Operator Switching:** Deep integration with the FlagGems operator library allows high-performance operators to be invoked via environment variables without modifying model code.
## **FlagGems Universal Large-Model Operator Library**
FlagGems is a Triton-based, cross-architecture operator library collaboratively developed with industry partners. Its core strengths include:
- **Full-stack Coverage**: Over 100 operators, with a broader range of operator types than competing libraries.
- **Ecosystem Compatibility**: Supports 7 accelerator backends. Ongoing optimizations have significantly improved performance.
- **High Efficiency**: Employs unique code generation and runtime optimization techniques for faster secondary development and better runtime performance compared to alternatives.
## **FlagEval Evaluation Framework**
**FlagEval (Libra)** is a comprehensive evaluation system and open platform for large models launched in 2023. It aims to establish scientific, fair, and open benchmarks, methodologies, and tools to help researchers assess model and training algorithm performance. It features:
- **Multi-dimensional Evaluation**: Supports 800+ model evaluations across NLP, CV, Audio, and Multimodal fields, covering 20+ downstream tasks including language understanding and image-text generation.
- **Industry-Grade Use Cases**: Has completed horizontal evaluations of mainstream large models, providing authoritative benchmarks for chip-model performance validation.
# Evaluation Results
## Benchmark Result
| Metrics | Qwen3-235B-A22B-Instruct-2507-H100-CUDA | Qwen3-235B-A22B-Instruct-2507-hygon-FlagOS |
| --------- | ------------------ | ---------------------- |
| liveBench-0shot@avg1 | 0.753 | 0.751 |
| AIME-0shot@avg1 | 0.833 | 0.800 |
| MMLU-5shots@avg1 | 0.833 | 0.835 |
| MUSR-0shot@avg1 | 0.597 | 0.612 |
| GPQA-0shot@avg1 | - | 0.579 |
# User Guide
**Environment Setup**
| Item | Version |
| ------------- | ------------------------------------------------------------ |
| Docker Version | Docker version 24.0.6, build ed223bc |
| Operating System | Ubuntu 22.04.4 LTS |
| FlagScale | Version: 0.8.0 |
| FlagGems | Version: 3.0 |
## Operation Steps
### Download Open-source Model Weights
```bash
pip install modelscope
modelscope download --model Qwen/Qwen3-235B-A22B-Instruct-2507 --local_dir /share/Qwen3-235B-A22B-Instruct-2507
```
### Download FlagOS Image
Note: it has not yet been decided whether Hygon's FlagOS image will be publicly accessible over the internet. To obtain the image, contact us or Hygon through the repository issues.
```bash
docker pull harbor.baai.ac.cn/flagrelease-inner/flagrelease_hygon_qwen3_2507
```
### Start the inference service
```bash
#Container Startup
docker run -it \
--name=flagos \
--network=host \
--privileged \
--ipc=host \
--shm-size=16G \
--memory="512g" \
--ulimit stack=-1:-1 \
--ulimit memlock=-1:-1 \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
-u root \
-v /opt/hyhal:/opt/hyhal \
-v /share:/share \
harbor.baai.ac.cn/flagrelease-inner/flagrelease_hygon_qwen3_2507 \
/bin/bash
```
### Serve
```bash
flagscale serve qwen3
```
## Service Invocation
### API-based Invocation Script
```python
import openai
openai.api_key = "EMPTY"
openai.base_url = "http://<server_ip>:9010/v1/"
model = "Qwen3-235B-A22B-Instruct-2507-hygon-flagos"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What's the weather like today?"}
]
response = openai.chat.completions.create(
model=model,
messages=messages,
temperature=0.7,
top_p=0.95,
stream=False,
)
print(response.choices[0].message.content)
```
### AnythingLLM Integration Guide
#### 1. Download & Install
- Visit the official site: https://anythingllm.com/
- Choose the appropriate version for your OS (Windows/macOS/Linux)
- Follow the installation wizard to complete the setup
#### 2. Configuration
- Launch AnythingLLM
- Open settings (bottom left, fourth tab)
- Configure core LLM parameters
- Click "Save Settings" to apply changes
#### 3. Model Interaction
- After model loading is complete:
- Click **"New Conversation"**
- Enter your question (e.g., "Explain the basics of quantum computing")
- Click the send button to get a response
# Contributing
We warmly welcome global developers to join us:
1. Submit Issues to report problems
2. Create Pull Requests to contribute code
3. Improve technical documentation
4. Expand hardware adaptation support
# License
The weights of this model are derived from Qwen/Qwen3-235B-A22B-Instruct-2507 and are open-sourced under the Apache 2.0 license (https://www.apache.org/licenses/LICENSE-2.0.txt).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755849054
|
2hpsatt
| 2025-08-22T07:52:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:52:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755847438
|
indoempatnol
| 2025-08-22T07:51:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:51:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Bocklitz-Lab/lit2vec-tldr-bart-model
|
Bocklitz-Lab
| 2025-08-22T07:51:04Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"bart",
"text2text-generation",
"chemistry",
"scientific-summarization",
"distilbart",
"abstractive",
"tldr",
"knowledge-graphs",
"summarization",
"en",
"dataset:Bocklitz-Lab/lit2vec-tldr-bart-dataset",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
summarization
| 2025-08-15T15:24:52Z |
---
language:
- en
library_name: transformers
pipeline_tag: summarization
license: apache-2.0
tags:
- chemistry
- scientific-summarization
- distilbart
- abstractive
- tldr
- knowledge-graphs
datasets:
- Bocklitz-Lab/lit2vec-tldr-bart-dataset
model-index:
- name: lit2vec-tldr-bart
results:
- task:
name: Summarization
type: summarization
dataset:
name: Lit2Vec TL;DR Chemistry Dataset
type: Bocklitz-Lab/lit2vec-tldr-bart-dataset
split: test
size: 1001
metrics:
- type: rouge1
value: 56.11
- type: rouge2
value: 30.78
- type: rougeLsum
value: 45.43
---
# lit2vec-tldr-bart (DistilBART fine-tuned for chemistry TL;DRs)
**lit2vec-tldr-bart** is a DistilBART model fine-tuned on **19,992** CC-BY licensed chemistry abstracts to produce **concise TL;DR-style summaries** aligned with methods → results → significance. It's designed for scientific **abstractive summarization**, **semantic indexing**, and **knowledge-graph population** in chemistry and related fields.
- **Base model:** `sshleifer/distilbart-cnn-12-6`
- **Training data:** [`Bocklitz-Lab/lit2vec-tldr-bart-dataset`](https://huggingface.co/datasets/Bocklitz-Lab/lit2vec-tldr-bart-dataset)
- **Max input length:** 1024 tokens
- **Target length:** ~128 tokens
---
## Evaluation (held-out test)
| Split | ROUGE-1 | ROUGE-2 | ROUGE-Lsum |
|------:|--------:|--------:|-----------:|
| Test | **56.11** | **30.78** | **45.43** |
> Validation RLsum: 46.05
> Metrics computed with `evaluate`'s `rouge` (NLTK sentence segmentation, `use_stemmer=True`).
---
## Quickstart
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GenerationConfig
repo = "Bocklitz-Lab/lit2vec-tldr-bart"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
gen = GenerationConfig.from_pretrained(repo) # loads default decoding params
text = "Proton exchange membrane fuel cells convert chemical energy into electricity..."
inputs = tok(text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, **gen.to_dict())
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```
### Batch inference (PyTorch)
```python
texts = [
"Abstract 1 ...",
"Abstract 2 ...",
]
batch = tok(texts, return_tensors="pt", padding=True, truncation=True, max_length=1024)
out = model.generate(**batch, **gen.to_dict())
summaries = tok.batch_decode(out, skip_special_tokens=True)
```
---
## Default decoding (saved in `generation_config.json`)
These are the defaults saved with the model (you can override at `generate()` time):
```json
{
"max_length": 142,
"min_length": 56,
"early_stopping": true,
"num_beams": 4,
"length_penalty": 2.0,
"no_repeat_ngram_size": 3,
"forced_bos_token_id": 0,
"forced_eos_token_id": 2
}
```
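For intuition on one of these defaults: `no_repeat_ngram_size: 3` bans any next token that would complete a trigram already present in the generated sequence. The sketch below is purely illustrative (it works on word lists, not token IDs, and is not the transformers implementation, which applies this per beam during search):

```python
def banned_next_tokens(generated, n=3):
    """Return tokens that would repeat an n-gram already in `generated`.

    Illustrative sketch of the no_repeat_ngram_size constraint; real
    decoders apply the same idea to token IDs inside beam search.
    """
    if len(generated) < n - 1:
        return set()
    prefix = tuple(generated[-(n - 1):])  # the last n-1 tokens
    banned = set()
    # Every earlier occurrence of `prefix` bans the token that followed it.
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

tokens = ["the", "cat", "sat", "on", "the", "cat"]
# "the cat" already occurred followed by "sat", so "sat" is banned next
print(banned_next_tokens(tokens, n=3))  # {'sat'}
```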
---
## Training details
* **Base:** `sshleifer/distilbart-cnn-12-6` (Distilled BART)
* **Data:** 19,992 CC-BY chemistry abstracts with TL;DR summaries
* **Splits:** train=17,992 / val=999 / test=1,001
* **Max lengths:** input 1024, target 128
* **Optimizer:** AdamW, **lr=2e-5**
* **Batching:** per-device train/eval batch size 4, **gradient\_accumulation\_steps=4**
* **Epochs:** 5
* **Precision:** fp16 (when CUDA available)
* **Hardware:** single NVIDIA RTX 3090
* **Seed:** 42
* **Libraries:** 🤗 Transformers + Datasets, `evaluate` for ROUGE, NLTK for sentence splitting
---
## Intended use
* TL;DR abstractive summaries for **chemistry** and adjacent domains (materials science, chemical engineering, environmental science).
* **Semantic indexing**, **IR reranking**, and **knowledge graph** ingestion where concise method/result statements are helpful.
### Limitations & risks
* May **hallucinate** details not present in the abstract (typical for abstractive models).
* Not a substitute for expert judgment; avoid using summaries as sole evidence for scientific claims.
* Trained on CC-BY English abstracts; performance may degrade on other domains/languages.
---
## Files
This repo should include:
* `config.json`, `pytorch_model.bin` or `model.safetensors`
* `tokenizer.json`, `tokenizer_config.json`, `special_tokens_map.json`, merges/vocab as applicable
* `generation_config.json` (decoding defaults)
---
## Reproducibility
* Dataset: [`Bocklitz-Lab/lit2vec-tldr-bart-dataset`](https://huggingface.co/datasets/Bocklitz-Lab/lit2vec-tldr-bart-dataset)
* Recommended preprocessing: truncate inputs at 1024 tokens; targets at 128.
* ROUGE evaluation: `evaluate.load("rouge")`, NLTK sentence tokenization, `use_stemmer=True`.
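For intuition only: the reported scores come from `evaluate`'s ROUGE (which adds stemming and, for ROUGE-Lsum, sentence-level LCS), but ROUGE-1 at its core is just unigram-overlap F1 between candidate and reference. A minimal stdlib sketch, not a substitute for the real metric:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 (ROUGE-1 without stemming or proper tokenization)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat on the mat"), 3))  # 0.667
```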
---
## Citation
If you use this model or dataset, please cite:
```bibtex
@software{lit2vec_tldr_bart_2025,
title = {lit2vec-tldr-bart: DistilBART fine-tuned for chemistry TL;DR summarization},
author = {Bocklitz Lab},
year = {2025},
url = {https://huggingface.co/Bocklitz-Lab/lit2vec-tldr-bart},
note = {Model trained on CC-BY chemistry abstracts; dataset at Bocklitz-Lab/lit2vec-tldr-bart-dataset}
}
```
Dataset:
```bibtex
@dataset{lit2vec_tldr_dataset_2025,
title = {Lit2Vec TL;DR Chemistry Dataset},
author = {Bocklitz Lab},
year = {2025},
url = {https://huggingface.co/datasets/Bocklitz-Lab/lit2vec-tldr-bart-dataset}
}
```
---
## License
* **Model weights & code:** Apache-2.0
* **Dataset:** CC BY 4.0 (attribution in per-record metadata)
---
## Acknowledgements
* Base model: DistilBART (`sshleifer/distilbart-cnn-12-6`)
* Licensing and OA links curated from publisher/aggregator sources; dataset restricted to **CC-BY** content.
|
suraj5556/tokenizer-en-mar
|
suraj5556
| 2025-08-22T07:50:55Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T07:50:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
suraj5556/transformer-en-mar
|
suraj5556
| 2025-08-22T07:50:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T07:47:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jethac/MyGemmaNPC2
|
jethac
| 2025-08-22T07:50:28Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:49:10Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC2
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jethac/MyGemmaNPC2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755847496
|
vwzyrraz7l
| 2025-08-22T07:50:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:50:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_mmlu_1755681415
|
rbelanec
| 2025-08-22T07:50:15Z | 15 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-20T09:17:40Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- ia3
- generated_from_trainer
model-index:
- name: train_mmlu_1755681415
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mmlu_1755681415
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the mmlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1708
- Num Input Tokens Seen: 488118104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.1364 | 0.5000 | 11233 | 0.2498 | 24389728 |
| 0.0552 | 1.0000 | 22466 | 0.2182 | 48789280 |
| 0.2115 | 1.5001 | 33699 | 0.2011 | 73201984 |
| 0.0865 | 2.0001 | 44932 | 0.1919 | 97620120 |
| 0.0963 | 2.5001 | 56165 | 0.1872 | 122127480 |
| 0.1485 | 3.0001 | 67398 | 0.1822 | 146471872 |
| 0.1541 | 3.5002 | 78631 | 0.1787 | 170850208 |
| 0.1574 | 4.0002 | 89864 | 0.1760 | 195267312 |
| 0.0981 | 4.5002 | 101097 | 0.1755 | 219639056 |
| 0.1848 | 5.0002 | 112330 | 0.1736 | 244095744 |
| 0.1731 | 5.5002 | 123563 | 0.1730 | 268478944 |
| 0.1704 | 6.0003 | 134796 | 0.1713 | 292933144 |
| 0.0277 | 6.5003 | 146029 | 0.1724 | 317335480 |
| 0.1044 | 7.0003 | 157262 | 0.1709 | 341742832 |
| 0.1996 | 7.5003 | 168495 | 0.1710 | 366182192 |
| 0.1288 | 8.0004 | 179728 | 0.1710 | 390549264 |
| 0.0802 | 8.5004 | 190961 | 0.1711 | 414924016 |
| 0.088 | 9.0004 | 202194 | 0.1708 | 439335448 |
| 0.1749 | 9.5004 | 213427 | 0.1710 | 463693944 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755848938
|
IvanJAjebu
| 2025-08-22T07:50:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:49:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jethac/MyGemmaNPC
|
jethac
| 2025-08-22T07:49:08Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:18:54Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jethac/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755847289
|
quantumxnode
| 2025-08-22T07:48:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:48:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nema122/blockassist-bc-robust_fluffy_ram_1755848831
|
nema122
| 2025-08-22T07:48:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:48:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TharunSivamani/mcprl-7b-doris
|
TharunSivamani
| 2025-08-22T07:47:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T07:47:33Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TharunSivamani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755847164
|
helmutsukocok
| 2025-08-22T07:46:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:46:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755848686
|
ypszn
| 2025-08-22T07:45:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:45:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
InfoJelly/blockassist-bc-majestic_prehistoric_capybara_1755848698
|
InfoJelly
| 2025-08-22T07:45:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic prehistoric capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:45:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic prehistoric capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755848559
|
llencia
| 2025-08-22T07:43:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:43:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
srikar-v05/Qwen2.5-3B-GRPO-LoRA
|
srikar-v05
| 2025-08-22T07:42:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T07:42:23Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** srikar-v05
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755845875
|
wasabuko
| 2025-08-22T07:38:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:35:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eshaaftab900/EN_DeepSeek-R1-Distill-Llama-8B-ft-QRCD-and-Quran-lora-adapters
|
eshaaftab900
| 2025-08-22T07:36:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-22T07:36:28Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
mradermacher/Afxumo-toxicity-somaliland-SO-GGUF
|
mradermacher
| 2025-08-22T07:36:01Z | 173 | 0 |
transformers
|
[
"transformers",
"gguf",
"RoBERTa",
"acfp",
"automatic_classifiers_for_peace",
"hatespeech",
"toxicity",
"afxumo",
"so",
"base_model:datavaluepeople/Afxumo-toxicity-somaliland-SO",
"base_model:quantized:datavaluepeople/Afxumo-toxicity-somaliland-SO",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-08-04T20:48:52Z |
---
base_model: datavaluepeople/Afxumo-toxicity-somaliland-SO
language:
- so
library_name: transformers
license: agpl-3.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- RoBERTa
- acfp
- automatic_classifiers_for_peace
- hatespeech
- toxicity
- afxumo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/datavaluepeople/Afxumo-toxicity-somaliland-SO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Afxumo-toxicity-somaliland-SO-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Afxumo-toxicity-somaliland-SO-GGUF/resolve/main/Afxumo-toxicity-somaliland-SO.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
0xGareeb/blockassist-bc-diving_jumping_llama_1755848040
|
0xGareeb
| 2025-08-22T07:35:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:34:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
InfoJelly/blockassist-bc-majestic_prehistoric_capybara_1755848085
|
InfoJelly
| 2025-08-22T07:35:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"majestic prehistoric capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:35:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- majestic prehistoric capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755846863
|
Sayemahsjn
| 2025-08-22T07:34:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:33:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755846412
|
thanobidex
| 2025-08-22T07:32:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:32:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VesileT/xlm-roberta-sentiment
|
VesileT
| 2025-08-22T07:31:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T07:31:59Z |
---
license: apache-2.0
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755847825
|
IvanJAjebu
| 2025-08-22T07:31:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:31:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sarayusapa/T5_Large_GEC_LoRA
|
sarayusapa
| 2025-08-22T07:30:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T05:41:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zxcczx/blockassist-bc-durable_energetic_fly_1755844124
|
zxcczx
| 2025-08-22T07:28:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"durable energetic fly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:28:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- durable energetic fly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755846029
|
hakimjustbao
| 2025-08-22T07:27:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:27:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Tank-123/act_so101_test_0822_based_on_robot_vla
|
Tank-123
| 2025-08-22T07:27:17Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Tank-123/act_test_0822_based_on_robot_vla",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-22T07:26:18Z |
---
datasets: Tank-123/act_test_0822_based_on_robot_vla
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
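The chunk-prediction idea described above can be illustrated with a minimal, self-contained sketch. This is not LeRobot's actual implementation; the function name, the exponential weighting constant `m`, and the data layout are assumptions chosen for clarity. It shows the temporal-ensembling step: each timestep the policy emits a chunk of the next `horizon` actions, and the action actually executed is a weighted average of every chunk's prediction for the current step.

```python
import numpy as np

def temporal_ensemble(chunks, horizon, m=0.1):
    """Illustrative ACT-style temporal ensembling (assumed sketch, not LeRobot code).

    chunks[i] is the length-`horizon` sequence of action vectors predicted at
    step i. Returns the ensembled action for the latest step t = len(chunks) - 1,
    averaging all overlapping predictions with exponentially decaying weights
    (older chunks weighted higher, per the assumed w_i = exp(-m * i) scheme).
    """
    t = len(chunks) - 1
    preds = []
    for i, chunk in enumerate(chunks):
        offset = t - i  # how far into chunk i the current step falls
        if 0 <= offset < horizon:
            preds.append((np.exp(-m * i), chunk[offset]))
    weights = np.array([w for w, _ in preds])
    actions = np.array([a for _, a in preds])
    # Weighted average over all chunks that cover the current timestep.
    return (weights[:, None] * actions).sum(axis=0) / weights.sum()
```

For example, with two overlapping chunks of horizon 2, the action at the second step blends the first chunk's second prediction with the second chunk's first prediction.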
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755846080
|
calegpedia
| 2025-08-22T07:27:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:26:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggml-org/gpt-oss-120b-GGUF
|
ggml-org
| 2025-08-22T07:26:59Z | 31,760 | 21 | null |
[
"gguf",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-02T16:09:37Z |
---
base_model:
- openai/gpt-oss-120b
---
# gpt-oss-120b
Detailed guide for using this model with `llama.cpp`:
https://github.com/ggml-org/llama.cpp/discussions/15396
Quick start:
```sh
llama-server -hf ggml-org/gpt-oss-120b-GGUF -c 0 -fa --jinja
# Then, access http://localhost:8080
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755847507
|
IvanJAjebu
| 2025-08-22T07:26:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:26:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755845873
|
kojeklollipop
| 2025-08-22T07:25:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:25:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755847466
|
ypszn
| 2025-08-22T07:25:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:25:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755847440
|
kapalbalap
| 2025-08-22T07:24:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:24:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755845763
|
coelacanthxyz
| 2025-08-22T07:24:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:24:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755845730
|
katanyasekolah
| 2025-08-22T07:24:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:24:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755847258
|
kapalbalap
| 2025-08-22T07:22:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:21:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nema122/blockassist-bc-robust_fluffy_ram_1755847147
|
nema122
| 2025-08-22T07:20:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:20:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yataka112/blockassist-bc-extinct_jumping_lynx_1755847127
|
yataka112
| 2025-08-22T07:20:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct jumping lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:20:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct jumping lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mia-project-2025/bert-base-uncased-LoRA-quora-question-pairs
|
mia-project-2025
| 2025-08-22T07:20:03Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-21T21:02:03Z |
---
license: apache-2.0
---
# BERT Base Uncased + LoRA Fine-Tuned For Quora Duplicate Question Detection
This model applies **LoRA (Low-Rank Adaptation)** fine-tuning on [tomaarsen/bert-base-nq-prompts](https://huggingface.co/tomaarsen/bert-base-nq-prompts) for the **Quora Question Pairs dataset**.
It classifies whether two questions are duplicates.
---
## Model Details
- **Base Model:** `tomaarsen/bert-base-nq-prompts`
- **Fine-tuning Method:** LoRA (PEFT)
- **LoRA Config:**
- `r=8`, `alpha=64`, `dropout=0.1`
- Target modules: `query`, `key`, `value`, `dense`
- **Dataset:** [Quora Question Pairs](https://huggingface.co/datasets/quora)
- **Training Epochs:** 16
- **Optimizer:** AdamW (torch fused)
- **Batch Size:** 64 (gradient accumulation = 2)
- **Loss Function:** CrossEntropyLoss
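The LoRA setup above trains only low-rank update matrices instead of the full weights. As a rough illustration (not part of the original training code), the parameter savings for a single 768×768 attention projection in BERT-base with `r=8` can be sketched as:

```python
# Illustrative only: parameter count of a LoRA update (B @ A) versus the
# full weight matrix it adapts. BERT-base hidden size is 768; r follows
# the config listed above (r=8).
d, r = 768, 8

full_params = d * d            # original dense projection W: 768 x 768
lora_params = d * r + r * d    # A: r x d plus B: d x r

print(full_params)                                 # 589824
print(lora_params)                                 # 12288
print(round(lora_params / full_params * 100, 2))   # 2.08 (% of the full matrix)
```

This is why LoRA fine-tuning fits comfortably on modest hardware: only about 2% of each adapted matrix is trainable.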
---
## Performance (Epoch 16)
| Metric | Score |
|-------------|---------|
| Train Loss | 0.447 |
| Eval Loss | 0.257 |
| Accuracy | 89.98% |
| Precision | 83.41% |
| Recall | 90.80% |
| F1 Score | 86.95% |
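As a quick sanity check on the table above, the reported F1 is the harmonic mean of the reported precision and recall:

```python
# Precision and recall as reported in the metrics table above.
precision = 0.8341
recall = 0.9080

# F1 = harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1 * 100, 2))  # ~86.95, matching the reported F1 score
```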
---
## Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
# Load base model and tokenizer
model_name = "tomaarsen/bert-base-nq-prompts"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load LoRA fine-tuned model
base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model = PeftModel.from_pretrained(base_model, "mia-project-2025/bert-base-uncased-LoRA-quora-question-pairs")
model.eval()
def predict_duplicate(q1, q2):
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True, padding=True, max_length=128)
with torch.no_grad():
logits = model(**inputs).logits
pred = torch.argmax(logits, dim=1).item()
return "Duplicate" if pred == 1 else "Not Duplicate"
# Example
print(predict_duplicate("How can I learn Python?", "What are the best ways to learn Python programming?"))
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755847072
|
IvanJAjebu
| 2025-08-22T07:19:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:19:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Amaru-VL-3B-i1-GGUF
|
mradermacher
| 2025-08-22T07:18:20Z | 100 | 0 |
transformers
|
[
"transformers",
"gguf",
"base_model:adapter:unsloth/Qwen2.5-VL-3B-Instruct",
"lora",
"sft",
"trl",
"unsloth",
"es",
"dataset:NovaIALATAM/CuPer_Text",
"dataset:NovaIALATAM/CuPer_Images",
"base_model:NovaIALATAM/Amaru-VL-3B",
"base_model:adapter:NovaIALATAM/Amaru-VL-3B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-21T16:05:18Z |
---
base_model: NovaIALATAM/Amaru-VL-3B
datasets:
- NovaIALATAM/CuPer_Text
- NovaIALATAM/CuPer_Images
language:
- es
library_name: transformers
license: cc-by-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- base_model:adapter:unsloth/Qwen2.5-VL-3B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/NovaIALATAM/Amaru-VL-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Amaru-VL-3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Amaru-VL-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amaru-VL-3B-i1-GGUF/resolve/main/Amaru-VL-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
suwesh/llamatron-1B-peft
|
suwesh
| 2025-08-22T07:17:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"peft",
"chat",
"LoRA",
"RTX3060",
"conversational",
"en",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-21T12:55:10Z |
---
license: llama3.2
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
tags:
- peft
- chat
- LoRA
- RTX3060
---
# Model Information
This model is a fine-tuned version of the [meta-llama/Llama-3.2-1B-Instruct](https://www.huggingface.co/meta-llama/Llama-3.2-1B-Instruct) large language model.
Fine tuning was performed using PEFT (Parameter Efficient Fine Tuning) with LoRA (Low-Rank Adaptation) on the chat subset of the [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) dataset.
LoRA Configuration:
<pre>lora_config = LoraConfig(
task_type="CAUSAL_LM",
r=32,
lora_alpha=32,
lora_dropout=0.1,
target_modules=["q_proj", "k_proj", "v_proj"],
modules_to_save=["lm_head", "embed_token"],
)
</pre>
# Use with Transformers
<pre>pip install transformers
pip install torch</pre>
<pre>
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("suwesh/llamatron-1B-peft").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft")
input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
</pre>
Or with pipeline
<pre>
import torch
import transformers
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=("suwesh/llamatron-1B-peft"),
tokenizer=transformers.AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft"),
torch_dtype=torch.bfloat16,
device="cuda",
)
def to_model(input_text, system_message):
messages = [
{"role": "system", "content": system_message},
{"role": "user", "content": input_text}
]
outputs = pipe(
messages,
max_new_tokens=512,
temperature=0.6,
top_p=0.95
)
return outputs[0]["generated_text"][-1]['content']
response = to_model("Write a joke about windows.", "detailed thinking on")
</pre>
# Load adapter checkpoint for further fine tuning
<pre>
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft")
model = PeftModel.from_pretrained(base_model, "suwesh/llamatron-1B-peft", subfolder="checkpoint-11000")
</pre>
# Training details
<pre>Initial Training and Validation losses: 1.69 | 1.67</pre>
<pre>Checkpoint 11000 Training and Validation losses: 1.06 | 1.09</pre>
# Evaluation details
We use the [nvidia/Llama-3.1-Nemotron-Nano](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) LLM as a judge to compare responses from the base Llama 3.2 1B Instruct model and our PEFT model. The counts below show the judge's preference for each prompt; the ground-truth answer is also provided to the judge in the prompt:
<pre>base: 122
peft: 388
tie: 29
</pre>
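From the preference counts above, the PEFT model's win rate can be computed as follows (a small illustration, not part of the original evaluation script):

```python
# Judge preference counts reported above.
counts = {"base": 122, "peft": 388, "tie": 29}

total = sum(counts.values())          # 539 judged prompts
peft_win_rate = counts["peft"] / total
print(total)                          # 539
print(round(peft_win_rate * 100, 1))  # 72.0 (% of prompts favored the PEFT model)
```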
system_message = """
You are an expert evaluator comparing two AI responses to a user instruction. Use the following criteria:
-Clarity
-Factual correctness (compared to the reference answer)
-Instruction-following
-Depth of reasoning
Below is the reference answer, which is the ideal or expected response:
"""
|
mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF
|
mradermacher
| 2025-08-22T07:16:34Z | 46 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:Jasaxion/MathSmith-HC-Qwen3-32B-ShortCoT",
"base_model:quantized:Jasaxion/MathSmith-HC-Qwen3-32B-ShortCoT",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-21T19:47:49Z |
---
base_model: Jasaxion/MathSmith-HC-Qwen3-32B-ShortCoT
language:
- en
library_name: transformers
license: other
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Jasaxion/MathSmith-HC-Qwen3-32B-ShortCoT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MathSmith-HC-Qwen3-32B-ShortCoT-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q5_K_M.gguf) | Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MathSmith-HC-Qwen3-32B-ShortCoT-GGUF/resolve/main/MathSmith-HC-Qwen3-32B-ShortCoT.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mia-project-2025/bert-base-uncased-LoRA-glue-mnli
|
mia-project-2025
| 2025-08-22T07:15:34Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T07:08:47Z |
---
license: apache-2.0
---
# BERT-base + LoRA on GLUE MNLI
This repository contains a **BERT-base model fine-tuned with LoRA adapters** on the [GLUE MNLI dataset](https://huggingface.co/datasets/glue/viewer/mnli).
The model is trained for **natural language inference (NLI)** with three labels: *entailment*, *neutral*, and *contradiction*.
---
## Dataset
- **Name**: GLUE Multi-Genre Natural Language Inference (MNLI)
- **Task**: Natural Language Inference
- **Size**: 392k training examples, with validation splits for *matched* (in-domain) and *mismatched* (out-of-domain).
- **Labels**:
- `0` = Contradiction
- `1` = Entailment
- `2` = Neutral
---
## Training Setup
- **Base Model**: `bert-base-uncased`
- **Fine-tuning Method**: Parameter-Efficient Fine-Tuning (PEFT) using **LoRA adapters**
- **LoRA Configuration**:
- Rank `r = 8`
- Alpha = `64`
- Target modules = `query`, `key`, `value`, `dense`
- Dropout = `0.1`
- **Hyperparameters**:
- Epochs: `15`
- Batch size: `64`
- Learning rate: `5e-5`
- Optimizer: `AdamW (fused)`
- Gradient accumulation: `2`
- Weight decay: `0.01`
- Warmup ratio: `0.1`
- **Hardware**: Trained on GPUs with mixed precision (`fp16`).
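Given the hyperparameters above, the effective batch size and approximate warmup length can be derived as follows (a sketch using the ~392k training examples stated in the Dataset section; exact step counts depend on the true dataset size):

```python
import math

# Hyperparameters listed above; 392_000 approximates the MNLI train size
# stated in the Dataset section.
train_examples = 392_000
batch_size = 64
grad_accum = 2
epochs = 15
warmup_ratio = 0.1

effective_batch = batch_size * grad_accum            # 128 examples per optimizer step
steps_per_epoch = math.ceil(train_examples / effective_batch)
total_steps = steps_per_epoch * epochs
warmup_steps = int(total_steps * warmup_ratio)

print(effective_batch)   # 128
print(steps_per_epoch)   # 3063
print(total_steps)       # 45945
print(warmup_steps)      # 4594
```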
---
## Results
### Final Evaluation (Epoch 15)
| Metric | Matched (in-domain) | Mismatched (out-of-domain) |
|---------------|----------------------|-----------------------------|
| Eval Loss | 0.4549 | 0.4476 |
| Accuracy | 83.62% | 83.65% |
| Precision | 0.8378 | 0.8380 |
| Recall | 0.8362 | 0.8365 |
| F1 Score | 0.8367 | 0.8370 |
| Train Loss | 0.8461 | 0.8461 |
| Train Runtime | 21540.38s | 21540.38s |
| Eval Runtime | 9.53s | 9.55s |
---
## Usage
You can load this model and tokenizer using Hugging Face Transformers:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "mia-project-2025/bert-base-uncased-LoRA-glue-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
text = {"premise": "The cat is sleeping on the sofa.", "hypothesis": "The cat is awake."}
inputs = tokenizer(text["premise"], text["hypothesis"], return_tensors="pt")
outputs = model(**inputs)
pred = outputs.logits.argmax(-1).item()
print(pred) # 0=Contradiction, 1=Entailment, 2=Neutral
```
|
unitova/blockassist-bc-zealous_sneaky_raven_1755845283
|
unitova
| 2025-08-22T07:14:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:14:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755846819
|
kapalbalap
| 2025-08-22T07:14:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:14:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755845156
|
helmutsukocok
| 2025-08-22T07:13:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:13:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755845150
|
ihsanridzi
| 2025-08-22T07:11:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:11:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755846574
|
llencia
| 2025-08-22T07:09:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:09:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xGareeb/blockassist-bc-diving_jumping_llama_1755846500
|
0xGareeb
| 2025-08-22T07:09:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T07:09:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Bnaad/PARENT_bert
|
Bnaad
| 2025-08-22T07:05:44Z | 0 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"privacy-policy",
"gdpr",
"torchscript",
"en",
"dataset:MAPP-116",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-08T16:37:57Z |
---
language: en
license: apache-2.0
library_name: transformers
tags:
- bert
- text-classification
- privacy-policy
- gdpr
- torchscript
datasets:
- MAPP-116
metrics:
- f1
model-index:
- name: PARENT BERT
results:
- task:
type: text-classification
dataset:
name: MAPP-116
type: text
metrics:
- name: f1
type: score
value: 0.80 # replace with your actual F1 score
---
# PARENT BERT Models for Privacy Policy Analysis
This repository contains **TorchScript versions of 15 fine-tuned BERT models** used in the PARENT project to analyse mobile app privacy policies. These models identify **what data is collected, why it is collected, and how it is processed**, helping assess GDPR compliance.
They are part of a hybrid framework designed for non-technical users, particularly parents concerned about children's privacy.
---
## Model Purpose
- Segment privacy policies to detect:
- Data collection types (e.g., contact info, location)
- Purpose of data collection
- How data is processed
- Support GDPR compliance evaluation
- Detect potential third-party sharing (in combination with a logistic regression model)
---
## References
- **MAPP Dataset:** Arora, S., Hosseini, H., Utz, C., Bannihatti Kumar, V., Dhellemmes, T., Ravichander, A., Story, P., Mangat, J., Chen, R., Degeling, M., Norton, T.B., Hupperich, T., Wilson, S., & Sadeh, N.M. (2022). *A tale of two regulatory regimes: Creation and analysis of a bilingual privacy policy corpus*. Proceedings of the International Conference on Language Resources and Evaluation (LREC 2022). [PDF link](https://aclanthology.org/2022.lrec-1.585.pdf) [Accessed 12 July 2025].
---
## Usage
```python
import torch
from transformers import BertTokenizerFast
from huggingface_hub import hf_hub_download
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
REPO_ID = "Bnaad/PARENT_bert"
# Load tokenizer
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# Load one TorchScript model from Hugging Face
label_name = "Information Type_Contact information"
safe_label = label_name.replace(" ", "_").replace("/", "_")
filename = f"torchscript_{safe_label}.pt"
model_path = hf_hub_download(repo_id=REPO_ID, filename=filename)
model = torch.jit.load(model_path, map_location=device)
model.to(device)
model.eval()
# Example inference
sample_text = """For any questions about your account or our services, please contact our customer support team by emailing support@example.com, calling +1-800-555-1234, or visiting our office at 123 Main Street, Springfield, IL, 62701 during business hours"""
inputs = tokenizer(
sample_text,
return_tensors="pt",
truncation=True,
padding="max_length",
max_length=512
).to(device)
with torch.no_grad():
outputs = model(inputs["input_ids"], inputs["attention_mask"])
print("Logits:", outputs)
prob = torch.sigmoid(outputs.squeeze())
print(prob)
```
|
inlee/proactive-agent-reward-model-llama3.1-8b
|
inlee
| 2025-08-22T07:04:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T07:00:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
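Once the hardware figures above are known, the calculator's estimate can be approximated by hand: emissions are roughly power draw (kW) times hours used times the grid's carbon intensity (kgCO2eq/kWh). A minimal sketch with illustrative placeholder numbers:

```python
def estimate_co2_kg(power_watts: float, hours: float, intensity_kg_per_kwh: float) -> float:
    """Rough kgCO2eq estimate for a training run:
    power (kW) x duration (h) x grid carbon intensity (kgCO2eq/kWh)."""
    return (power_watts / 1000.0) * hours * intensity_kg_per_kwh


# Illustrative example: one 300 W GPU for 24 h on a 0.4 kgCO2eq/kWh grid.
print(round(estimate_co2_kg(300, 24, 0.4), 2))  # 2.88
```

These inputs are placeholders; real estimates should use measured power draw and the provider's published regional carbon intensity.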
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| roeker/blockassist-bc-quick_wiry_owl_1755846181 | roeker | 2025-08-22T07:03:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T07:03:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|