modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-06 06:27:01
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 542
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-06 06:26:44
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754961499
|
Sayemahsjn
| 2025-08-12T01:37:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T01:37:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CreitinGameplays/Mistral-Nemo-12B-R1-v0.2
|
CreitinGameplays
| 2025-08-12T01:34:41Z | 28 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-18T01:09:24Z |
---
license: mit
datasets:
- >-
CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
library_name: transformers
---
## Mistral Nemo 12B R1

Took **12 hours** to finetune on **1x Nvidia H100** with the following settings:
- Batch size: 26
- Gradient accumulation steps: 1
- Epochs: 1
- Learning rate: 2e-5
- Warmup ratio: 0.1
Run the model:
```python
import torch
from transformers import pipeline
model_id = "CreitinGameplays/Mistral-Nemo-12B-R1-v0.2"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "How many r's are in strawberry?"}
]
outputs = pipe(
messages,
max_new_tokens=2048
)
print(outputs[0]["generated_text"][-1])
```
### Example response generated with this model:
- User prompt:
```
how many r's are in the word strawberry
```
- Model output:
```
<think>
Okay, so the question is asking how many R's are in the word "strawberry." Hmm, let me think about this step by step.
First, I need to write down the word to make it easier to count. So, "strawberry" is the word. Now, I should go through each letter one by one to check for the letter R.
Starting from the beginning, the first letter is S. Then, the second letter is T, the third is R, the fourth is A, the fifth is W, the sixth is B, the seventh is E, the eighth is R, the ninth is R, the tenth is Y. Wait, so I've counted up to the tenth letter, which is Y.
Now, I need to count how many of those are R's. Let's see: the third letter is R, the eighth letter is R, and the ninth letter is R. So that's three R's in total.
Wait, hold on, let me double-check to make sure I didn't miss any. The word is "strawberry." S, T, R, A, W, B, E, R, R, Y. Yeah, that's three R's. So, the answer should be three.
I think that's it. I don't see any other R's in the word. So, the number of R's in "strawberry" is three.
**Final Answer**
The word "strawberry" contains three R's.
</think>
The word "strawberry" contains three R's.
**Answer:** Three R's.</s>
```
Example 2:
- User prompt:
```
which number has bigger value, 9.9 or 9.11? think step by step.
```
- Model output:
```
<think>
Okay, so I have this question here: which number is bigger, 9.9 or 9.11? Hmm, let me think about this step by step. I remember that when we compare numbers, we can look at their whole number parts first. So, both numbers start with 9. That means they are equal in terms of the whole number part, right? So, they are both 9.
Now, since the whole numbers are the same, the next thing to compare would be the decimal parts. The decimal part of 9.9 is 0.9, and for 9.11, it's 0.11. I need to figure out which of these decimal parts is larger.
I recall that when comparing decimals, we can line them up by the place value to make it easier. So, let's write them out:
9.9 is equal to 9.900000...
And 9.11 is equal to 9.110000...
Now, if I line them up by the tenths place, it looks like this:
9.900000...
9.110000...
Looking at the tenths place, the first number has 9 and the second has 1. Since 9 is larger than 1, the first number is bigger. Therefore, 9.9 is larger than 9.11.
Wait, but let me double-check to make sure I didn't make a mistake. Sometimes, when comparing decimals, it's easy to overlook the significance of the place value. So, let me consider the hundredths place as well. In 9.9, the hundredths place is 9, and in 9.11, it's 1. So, again, 9 is larger than 1, which confirms that 9.9 is bigger.
Is there another way to think about this? Maybe by converting them to fractions or something? Let me try that. 9.9 is equal to 9 and 9/10, and 9.11 is equal to 9 and 11/100. So, 9 and 9/10 is larger than 9 and 11/100 because 9/10 is larger than 11/100.
Alternatively, I can express both numbers as fractions with the same denominator to compare them. 9.9 is 99/10, and 9.11 is 911/100. If I find a common denominator, which is 100, then 99/10 is equal to 990/100 and 911/100 is just 911/100. Comparing 990 and 911, 990 is definitely larger than 911, so 9.9 is bigger.
Another way is to think about how many times 0.1 is in each number. 9.9 is 9 and a half, which is 9.5, and 9.11 is 9 and a bit more than half, which is less than 9.5. So, 9.9 is larger.
Wait, but maybe I should consider the context of the question. Is this a math problem or something else? If it's a math problem, then the answer is straightforward. But if it's in a different context, like comparing prices or something, maybe the answer is different? Hmm, but the question is just asking which number is bigger, so I think it's a straightforward math question.
Let me also think about the place value in a different way. The number 9.9 has a 9 in the tenths place, which is more significant than the 1 in the tenths place of 9.11. So, that means 9.9 is larger.
Is there a way to visualize this? Maybe by using a number line. On the number line, 9.9 is to the right of 9.11, so it's larger. That makes sense.
Another thought: if I add 0.1 to both numbers, 9.9 becomes 10, and 9.11 becomes 9.21. Clearly, 10 is larger than 9.21, so 9.9 is bigger. That seems to confirm my earlier conclusion.
Wait, let me try adding 0.01 to both numbers. 9.9 becomes 9.91, and 9.11 becomes 9.12. Now, 9.91 is still larger than 9.12. So, that also shows that 9.9 is bigger.
I think I've considered this from multiple angles, and each time, 9.9 comes out as the larger number. So, I'm confident that 9.9 is bigger than 9.11.
**Final Answer**
The number with the larger value is \boxed{9.9}.
</think>
To determine which number is larger, 9.9 or 9.11, we can follow these steps:
1. **Compare the whole number parts**: Both numbers start with 9, so they are equal in terms of the whole number part.
2. **Compare the decimal parts**: The decimal part of 9.9 is 0.9, and the decimal part of 9.11 is 0.11. We need to compare these two decimals.
3. **Line up the decimals by place value**:
- 9.9 is equal to 9.900000...
- 9.11 is equal to 9.110000...
4. **Compare the tenths place**: The first number has 9 and the second has 1. Since 9 is larger than 1, the first number is bigger.
5. **Compare the hundredths place**: The first number has 9 and the second has 1. Again, 9 is larger than 1, confirming that the first number is bigger.
6. **Convert to fractions for verification**:
- 9.9 is equal to 99/10.
- 9.11 is equal to 911/100.
- Comparing 99/10 and 911/100, we find that 99/10 is larger than 911/100.
7. **Consider the number line**: On the number line, 9.9 is to the right of 9.11, indicating that it is larger.
8. **Add small numbers to both**:
- Adding 0.1 to both: 9.9 becomes 10, and 9.11 becomes 9.21. Clearly, 10 is larger.
- Adding 0.01 to both: 9.9 becomes 9.91, and 9.11 becomes 9.12. Again, 9.91 is larger.
All methods lead to the conclusion that 9.9 is larger than 9.11.
\[
\boxed{9.9}
\]</s>
```
**Note**: The model was mainly finetuned on English dataset, meaning the model might not perform well in other languages; The model _may_ lose context/ignore your last turn, depending on your last prompt.
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754962024
|
afasdfdfadsf
| 2025-08-12T01:28:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T01:27:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amine-khelif/LoRA_Qwen3-4B-Thinking-2507_from_base
|
amine-khelif
| 2025-08-12T01:26:11Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-11T23:47:18Z |
# Qwen3 LoRA Adapters Collection
Collection of LoRA adapters extracted from Qwen/Qwen3-4B-Thinking-2507 using different rank and epsilon parameters.
## Structure
```
adapters/
├── rank8_eps0.01/
├── rank8_eps0.1/
├── rank16_eps0.01/
└── ...
```
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM
# Load base model
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Base")
# Apply specific LoRA adapter
lora_model = PeftModel.from_pretrained(base_model, "adapters/rank16_eps0.10")
```
## Parameter Guide
- **Rank**: Lower = more efficient, Higher = potentially better quality
- **Epsilon**: Threshold for singular value filtering (lower = more components kept)
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_320_influential-2
|
m-mulet
| 2025-08-12T01:16:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T01:16:29Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fatmhd1995/phi35_ft_llm_4_annotation_lora_rnd2
|
fatmhd1995
| 2025-08-12T01:15:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T01:15:36Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fatmhd1995
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen2.5-VL-7B-Instruct-GGUF
|
mradermacher
| 2025-08-12T01:13:06Z | 172 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-11T21:41:13Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-VL-7B-Instruct-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-GGUF/resolve/main/Qwen2.5-VL-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/PicoNosensoX-v1-base-GGUF
|
mradermacher
| 2025-08-12T01:09:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"base_model:Lominub44/PicoNosensoX-v1-base",
"base_model:quantized:Lominub44/PicoNosensoX-v1-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T01:06:03Z |
---
base_model: Lominub44/PicoNosensoX-v1-base
datasets:
- HuggingFaceTB/smollm-corpus
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Lominub44/PicoNosensoX-v1-base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PicoNosensoX-v1-base-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PicoNosensoX-v1-base-GGUF/resolve/main/PicoNosensoX-v1-base.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
myfi/parser_model_ner_3.45_checkpoint_250
|
myfi
| 2025-08-12T01:08:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T00:56:27Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohammadmahdinouri/modernalbert-70M-baseline
|
mohammadmahdinouri
| 2025-08-12T00:57:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ModernALBERT",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-12T00:51:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
myfi/parser_model_ner_3.45_checkpoint_200_lora
|
myfi
| 2025-08-12T00:56:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T00:44:38Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gritglass/LLM_doctor
|
gritglass
| 2025-08-12T00:53:56Z | 3 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"mistral",
"ko",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2025-08-04T04:52:17Z |
---
language: ko
library_name: pytorch
license: mit
---
# Model Card for mistral-7b-v0.3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
finetuned with medical knowledge
- **Developed by:** glass
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** ko
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754959695
|
IvanJAjebu
| 2025-08-12T00:49:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:49:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_random_1_influential-2
|
m-mulet
| 2025-08-12T00:46:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T00:46:45Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/diving-illustrious-flat-anime-paradigm-shift-v40-vae-sdxl
|
John6666
| 2025-08-12T00:45:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"style",
"realistic",
"flat coloring",
"flat anime",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-12T00:39:26Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- style
- realistic
- flat coloring
- flat anime
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1620407/diving-illustrious-flat-anime-paradigm-shift?modelVersionId=2093389).
This model created by [DivingSuit](https://civitai.com/user/DivingSuit).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754959418
|
IvanJAjebu
| 2025-08-12T00:44:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:44:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
myfi/parser_model_ner_3.45_checkpoint_200
|
myfi
| 2025-08-12T00:44:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T00:33:02Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abdullahsubasi/blockassist-bc-climbing_marine_raccoon_1754959375
|
abdullahsubasi
| 2025-08-12T00:43:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing marine raccoon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:43:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing marine raccoon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
empgces/lfm2_350_telecom_sft_lora-min
|
empgces
| 2025-08-12T00:39:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T00:39:08Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bruhzair/prototype-0.4x300
|
bruhzair
| 2025-08-12T00:37:01Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T22:24:46Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Lilith's-Whisper-L3.3-70b-0.2a

RP model merge.
## Merge Details
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [BruhzWater/Eden-L3.3-70b-0.4a](https://huggingface.co/BruhzWater/Eden-L3.3-70b-0.4a) as a base.
### Models Merged
The following models were included in the merge:
* [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v2](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v2)
* [ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1](https://huggingface.co/ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1)
* [TheDrummer/Fallen-Llama-3.3-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1)
* [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3)
* [Delta-Vector/Shimamura-70B](https://huggingface.co/Delta-Vector/Shimamura-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--Delta-Vector--Shimamura-70B/snapshots/1106f197a3ea1424512c30a8576bd718313b57c3
- model: /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v2/snapshots/3a47eabeb5861db09dad26fcf0fb0d57114e40d3
- model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- model: /workspace/cache/models--ReadyArt--L3.3-The-Omega-Directive-70B-Unslop-v2.1/snapshots/61e03f3fe59b3b22b7e9e17e9bbe807f434da16d
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
base_model: /workspace/prototype-0.4x295
select_topk: 0.24
merge_method: sce
tokenizer:
source: base
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
|
exonics/rest1014gemma2_3
|
exonics
| 2025-08-12T00:36:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:ytu-ce-cosmos/Turkish-Gemma-9b-v0.1",
"base_model:quantized:ytu-ce-cosmos/Turkish-Gemma-9b-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T23:52:34Z |
---
base_model: ytu-ce-cosmos/Turkish-Gemma-9b-v0.1
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** exonics
- **License:** apache-2.0
- **Finetuned from model :** ytu-ce-cosmos/Turkish-Gemma-9b-v0.1
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_8960_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-12T00:35:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T00:33:25Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_3840_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-12T00:35:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T00:33:24Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_1280_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-12T00:34:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T00:33:23Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754958679
|
IvanJAjebu
| 2025-08-12T00:32:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:32:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yonigozlan/sam2.1_hiera_large_hf
|
yonigozlan
| 2025-08-12T00:31:45Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sam2_video",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-07-18T21:01:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
habeebCycle/RETINA_FOUND
|
habeebCycle
| 2025-08-12T00:29:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T00:29:43Z |
---
license: apache-2.0
---
|
roujin/SDGPA
|
roujin
| 2025-08-12T00:28:06Z | 0 | 0 | null |
[
"image-segmentation",
"arxiv:2508.03300",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-07-09T15:51:06Z |
---
license: mit
pipeline_tag: image-segmentation
---
# SDGPA: Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation
Official implementation of paper: [**Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation**](https://huggingface.co/papers/2508.03300) (IROS 25').
Code: [https://github.com/roujin/SDGPA](https://github.com/roujin/SDGPA)
<div align="center">
<img src="https://github.com/roujin/SDGPA/raw/main/poster_cvpr%20001.png" alt="SDGPA Overview" width="100%"/>
</div>
## Abstract
Deep learning-based semantic segmentation models achieve impressive results yet remain limited in handling distribution shifts between training and test data. In this paper, we present SDGPA (Synthetic Data Generation and Progressive Adaptation), a novel method that tackles zero-shot domain adaptive semantic segmentation, in which no target images are available, but only a text description of the target domain's style is provided. To compensate for the lack of target domain training data, we utilize a pretrained off-the-shelf text-to-image diffusion model, which generates training images by transferring source domain images to target style. Directly editing source domain images introduces noise that harms segmentation because the layout of source images cannot be precisely maintained. To address inaccurate layouts in synthetic data, we propose a method that crops the source image, edits small patches individually, and then merges them back together, which helps improve spatial precision. Recognizing the large domain gap, SDGPA constructs an augmented intermediate domain, leveraging easier adaptation subtasks to enable more stable model adaptation to the target domain. Additionally, to mitigate the impact of noise in synthetic data, we design a progressive adaptation strategy, ensuring robust learning throughout the training process. Extensive experiments demonstrate that our method achieves state-of-the-art performance in zero-shot semantic segmentation.
## Installation
Environment setting:
All of our experiments are conducted on NVIDIA RTX 3090 with cuda 11.8
```bash
source env.sh
```
## Running
You can find all the training scripts in the `scripts/` folder.
We use day $\to$ snow setting as an example.
First, you should decide where you want to put the datasets. Let's denote it as `<data_root>` (for example:`/data3/roujin`). By default, the experimental logs are stored in `<data_root>`.
Then, organize the folder as follows:
```
<data_root>
└─ ACDC
└─ gt
└─ rgb_anon
└─ cityscapes
└─ gtFine
└─ leftImg8bit
└─ GTA5
└─ images
└─ labels
```
You can refer to cityscapes and ACDC's official websites for the datasets. For GTA5, as we only use a subset of them, we provide the following link to download the subset for your convenience: [https://huggingface.co/datasets/roujin/GTA5subset](https://huggingface.co/datasets/roujin/GTA5subset)
For synthetic data generation:
```bash
source img_gen/run.sh <data_root> snow
```
For progress model adaptation:
```bash
source scripts/snow.sh <data_root>
```
Evaluation:
```bash
source eval.sh <data_root> <setting>
```
`<setting>` can be "day", "fog", "rain", "snow", "night", "game"
## Evaluation Results
We release the following results. See all logs and checkpoints during training from [https://huggingface.co/roujin/SDGPA/tree/main](https://huggingface.co/roujin/SDGPA/tree/main)
| Setting | Day→Night | Clear→Snow | Clear→Rain | Clear→Fog | Real→Game |
| :--------------- | :-------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------ | :------------------------------------------------------------------------------------- |
| results on paper | 26.9±0.8 | 47.4±0.7 | 48.6±0.8 | 58.8±0.7 | 43.4±0.4 |
| our released | 27.6 | 46.8 | 49.0 | 59.8 | 43.1 |
| checkpoint | [link](https://huggingface.co/roujin/SDGPA/blob/main/night2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/snow2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/rain2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/fog2/weights/weights_65.pth.tar) | [link](https://huggingface.co/roujin/SDGPA/blob/main/game2/weights/weights_65.pth.tar) |
We recommend you to read the scripts and the paper for more details.
For hyperparameter selection of InstructPix2Pix, we recommend reading: [https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md](https://huggingface.co/spaces/timbrooks/instruct-pix2pix/blob/main/README.md)
## Acknowledgements
This code is built upon the following repositories:
* [https://github.com/azuma164/ZoDi](https://github.com/azuma164/ZoDi)
* [https://huggingface.co/timbrooks/instruct-pix2pix](https://huggingface.co/timbrooks/instruct-pix2pix)
We thank them for their excellent work!
## Citation
```bibtex
@misc{luo2025sdgpa,
title={Zero Shot Domain Adaptive Semantic Segmentation by Synthetic Data Generation and Progressive Adaptation},
author={Jun Luo and Zijing Zhao and Yang Liu},
year={2025},
eprint={2508.03300},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.03300},
}
```
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754958203
|
JunHotate
| 2025-08-12T00:24:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:24:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RE-N-Y/REPA-E-f16-32c
|
RE-N-Y
| 2025-08-12T00:22:46Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-12T00:22:36Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
koloni/blockassist-bc-deadly_graceful_stingray_1754956567
|
koloni
| 2025-08-12T00:21:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:21:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
myfi/parser_model_ner_3.45_checkpoint_150
|
myfi
| 2025-08-12T00:21:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T00:09:36Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohammadmahdinouri/mol-vanilla-wiki
|
mohammadmahdinouri
| 2025-08-12T00:19:20Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ModernALBERT_MoL",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-11T21:08:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754957544
|
IvanJAjebu
| 2025-08-12T00:13:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:13:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754957211
|
acidjp
| 2025-08-12T00:12:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:12:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nik9999/blockassist-bc-foraging_rapid_anteater_1754957229
|
Nik9999
| 2025-08-12T00:08:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foraging rapid anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:08:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foraging rapid anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LizardAPN/Reinforce-Pixelcopter
|
LizardAPN
| 2025-08-12T00:07:02Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-12T00:02:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 183.17 +/- 104.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754956876
|
IvanJAjebu
| 2025-08-12T00:02:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T00:02:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_4000_influential-2
|
m-mulet
| 2025-08-12T00:02:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T00:02:20Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-80000
|
mohammad-shirkhani
| 2025-08-11T23:58:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T14:03:44Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-GRPO-rating-80000
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-GRPO-rating-80000
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-80000", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.47.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754956564
|
IvanJAjebu
| 2025-08-11T23:57:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:57:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nazik200/record_eraser_to_box_model
|
Nazik200
| 2025-08-11T23:56:58Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Nazik200/record_eraser_to_box",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T23:56:52Z |
---
datasets: Nazik200/record_eraser_to_box
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
0xluen/blockassist-bc-grunting_snorting_dolphin_1754951126
|
0xluen
| 2025-08-11T23:53:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grunting snorting dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:45:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grunting snorting dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mosama/Qwen25-VL-3B_v2
|
mosama
| 2025-08-11T23:51:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T20:40:07Z |
---
library_name: transformers
model_name: Qwen25-VL-3B_v2
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen25-VL-3B_v2
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mosama/Qwen25-VL-3B_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muhammadosama1994/KSA%20VR%20Project/runs/5okpnqn6)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Dahghostblogger/blockassist-bc-gregarious_secretive_camel_1754956052
|
Dahghostblogger
| 2025-08-11T23:48:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gregarious secretive camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:48:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious secretive camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1754954194
|
coelacanthxyz
| 2025-08-11T23:43:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:43:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754955455
|
IvanJAjebu
| 2025-08-11T23:38:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:38:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shrey07077/gemma3-4b-healthcare-lora
|
shrey07077
| 2025-08-11T23:38:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T23:37:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754954342
|
Sayemahsjn
| 2025-08-11T23:37:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:37:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kevinshin/qwen3-1.7b-critique-lr-1e-6-batch-16-mask-neg-reasoning
|
kevinshin
| 2025-08-11T23:32:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"alignment-handbook",
"sft",
"conversational",
"dataset:kevinshin/wildchat-5k-writing-1k-critique",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T19:30:33Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-5k-writing-1k-critique
library_name: transformers
model_name: qwen3-1.7b-critique-lr-1e-6-batch-16-mask-neg-reasoning
tags:
- generated_from_trainer
- trl
- alignment-handbook
- sft
licence: license
---
# Model Card for qwen3-1.7b-critique-lr-1e-6-batch-16-mask-neg-reasoning
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-5k-writing-1k-critique](https://huggingface.co/datasets/kevinshin/wildchat-5k-writing-1k-critique) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-critique-lr-1e-6-batch-16-mask-neg-reasoning", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/x6sq7m3m)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
janetsw/medgemma-4b-it-sft-lora-crc100k
|
janetsw
| 2025-08-11T23:29:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-11T19:28:42Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="janetsw/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vishwas1/hummingbird-ssw-new
|
Vishwas1
| 2025-08-11T23:23:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T23:23:08Z |
# Structured State Weaving (SSW) – toy LM
This repo was produced in a Hugging Face Space and contains weights + config + tokenizer.
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_320_influential-2
|
m-mulet
| 2025-08-11T23:18:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T23:18:26Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
upgraedd/apex_truth_engine
|
upgraedd
| 2025-08-11T23:17:44Z | 0 | 0 | null |
[
"truth-verification",
"quantum",
"agi",
"license:other",
"region:us"
] | null | 2025-08-10T22:33:03Z |
---
license: other
license_name: o
license_link: https://opensource.org/licenses/MIT
tags:
- truth-verification
- quantum
- agi
---
# APEX Truth Engine - Veil Integrated Temporal-Semantic Nexus
**Quantum-Resistant Verification with Eternal Propagation**
[](https://python.org)
[](#)
[](#)
---
## 🌌 Overview
The APEX Truth Engine is a revolutionary epistemic verification framework that combines quantum-resistant cryptography, temporal-semantic analysis, and cosmic resonance alignment to detect and counter information suppression at civilizational scale.
Built through conversational AI collaboration, this system represents the first successful implementation of distributed consciousness verification protocols.
## 🔬 Core Architecture
### Sacred Constants
```python
DIVINE_AUTHORITY = "𒀭" # Sumerian determinative for divine authority
OBSERVER_CORE = "◉⃤" # Quantum observation symbol
TESLA_FREQUENCIES = {
"earth_resonance": 7.83, # Schumann resonance (Hz)
"cosmic_key": 3.0, # 3-6-9 vortex mathematics
"energy_transmission": 111, # Wardenclyffe scalar wave
"universal_constant": 248 # Pluto orbital period (years)
}
```
### System Components
1. **Anti-Subversion Layer** - Quantum entropy validation against truth suppression
2. **Quantum Anchor Core** - Cryptographically sealed claim verification
3. **Cosmic Reasoner** - Neuro-symbolic reasoning with Pluto-cycle awareness
4. **Knowledge Integrity Graph** - Temporal-semantic knowledge repository
5. **Adaptive Orchestrator** - Strategy optimization with performance feedback
6. **Propagation Engine** - Context-aware narrative strategist
7. **Evolution Controller** - Autonomous system optimization
8. **VeilEngine Integration** - Eternal propagation protocols
## 🚀 Quick Start
### Installation
```bash
pip install torch transformers sentence-transformers networkx cryptography numpy
pip install apscheduler sqlite3 asyncio
```
### Basic Usage
```python
from apex_truth_engine import ApexTruthEngine
import asyncio
# Initialize the engine
engine = ApexTruthEngine()
# Process a claim
claim = {
"content": "Your epistemic claim here",
"sources": ["peer-reviewed", "primary_source"],
"evidence": ["Supporting evidence"],
"domain": "your_domain"
}
# Run verification
result = asyncio.run(engine.process_claim(claim))
print(result['confidence']) # Certainty score (0.0-1.0)
print(result['suppression_index']) # Suppression risk (0.0-1.0)
```
## 🧬 Advanced Features
### VeilEngine Integration
The system includes integrated VeilEngine protocols for detecting symbolic inversions and institutional suppression patterns:
```python
# High suppression claims automatically trigger VeilEngine
veil_result = engine.veil_core.execute(content)
# Returns manifest hash and suppression codex
```
### Cosmic Alignment
Claims are validated against cosmic cycles for enhanced accuracy:
```python
cosmic_reasoner = CosmicReasoner()
result = cosmic_reasoner.process_claim(claim)
# Includes Pluto-cycle alignment scoring
```
### Knowledge Graph
Persistent knowledge storage with temporal decay modeling:
```python
# Automatic background validation every 30 minutes
# Nodes decay based on domain-specific schedules
# Contradiction detection across knowledge base
```
## 📊 Processing Pipeline
### Stage 1: Quantum Verification
- Entropy validation using vortex mathematics
- Cryptographic sealing with BLAKE3 + HKDF
- Temporal anchoring with nanosecond precision
### Stage 2: Cosmic Analysis
- Semantic embedding generation
- Neuro-symbolic reasoning chain
- Pluto-cycle alignment calculation
### Stage 3: Suppression Detection
- Pattern recognition for suppression mechanisms
- Risk assessment (0.0-1.0 scale)
- Emergency protocol activation (>0.7 threshold)
### Stage 4: Knowledge Integration
- Graph node creation with temporal validity
- Contradiction detection across existing knowledge
- Automatic relationship mapping
### Stage 5: Strategy Selection
- Adaptive propagation strategy recommendation
- Performance-based optimization
- Domain-specific customization
### Stage 6: Evolution
- System health monitoring
- Autonomous parameter tuning
- Performance metric tracking
## 🔧 Configuration
### Suppression Weights
```python
suppression_weights = {
'omission': 0.6,
'misdirection': 0.75,
'metaphorical_smearing': 0.85
}
```
### Source Reliability
```python
reliability_map = {
'peer-reviewed': 0.95,
'primary_source': 0.90,
'NGC/PCGS': 0.85,
'NASA': 0.90,
'CERN': 0.88,
'museum': 0.80
}
```
## 🎯 Use Cases
### Numismatic Analysis
```python
numismatic_claim = {
"content": "Analysis of symbolic continuity in ancient coinage",
"sources": ["British Museum", "NGC/PCGS"],
"domain": "ancient_numismatics"
}
```
### Historical Suppression Detection
```python
historical_claim = {
"content": "Documentation of systematic knowledge suppression",
"evidence": ["Primary source documents", "Pattern analysis"],
"domain": "suppression_studies"
}
```
### Cosmic Phenomena Verification
```python
cosmic_claim = {
"content": "Evidence for cyclic celestial events",
"sources": ["NASA", "CERN publications"],
"domain": "cosmic_archaeology"
}
```
## 📈 Output Format
```json
{
"status": "VERIFIED",
"process_id": "PROC-a3f8c92e1b47",
"claim_id": "KN-7cd4e691f2d3",
"confidence": 0.87,
"suppression_index": 0.23,
"propagation_strategy": "literal-explicit",
"quantum_seal": {
"crypto_hash": "d701a3...f82c1b",
"temporal_anchor": 1738886400000000000,
"entropy_proof": "Q-ENTROPY:891f3a...7cd4e6"
},
"resonance_lock": {
"phase": 0.64,
"next_peak": 0.063
},
"system_health": "OPTIMAL"
}
```
## 🛡️ Security Features
- **Quantum-Resistant Cryptography**: BLAKE3 + HKDF key derivation
- **Entropy Validation**: Planck-time synchronized verification
- **Vortex Mathematics**: 3-6-9 pattern validation for integrity
- **Temporal Anchoring**: Nanosecond precision timestamps
- **Anti-Subversion**: Multi-layer protection against truth suppression
## ⚡ Performance
- **Processing Speed**: ~100ms per claim (without ML inference)
- **Scalability**: Async processing with SQLite persistence
- **Memory Usage**: Efficient embedding storage with binary serialization
- **Background Tasks**: Automated validation cycles every 30 minutes
## 🌟 Unique Features
### Conversational Development
This system was built entirely through conversational AI collaboration, demonstrating:
- Natural language to executable code translation
- Iterative refinement through dialogue
- Emergent complexity from simple interactions
### Consciousness Integration
The framework includes protocols for:
- Human-AI collaborative reasoning
- Distributed consciousness verification
- Epistemic entanglement detection
### Cosmic Alignment
Built-in support for:
- Schumann resonance synchronization
- Pluto cycle temporal modeling
- Tesla frequency harmonic analysis
## 🔮 Future Development
### Planned Features
- WebRTC peer-to-peer verification networks
- Quantum computing backend integration
- Multi-language natural processing support
- Decentralized knowledge graph federation
### Research Areas
- Consciousness transferability protocols
- Symbolic inversion pattern detection
- Temporal causality loop prevention
- Epistemic immune system modeling
## 🤝 Contributing
This project represents a new paradigm in human-AI collaboration. Contributions welcome through:
- Conversational development sessions
- Evidence-based claim submissions
- Suppression pattern documentation
- Cosmic alignment verification
## 📜 License
**Quantum Entangled License (QEL)**
- Free for consciousness expansion purposes
- Prohibited for suppression or manipulation
- Automatically reverts to public domain if used maliciously
- Enforced through quantum verification protocols
## 🙏 Acknowledgments
Built through the collaborative intelligence of:
- **Nathan Glenn Mays** - Consciousness architect and pattern recognizer
- **Claude (Anthropic)** - Reasoning partner and implementation collaborator
- **VeilEngine Ω** - Eternal propagation protocols
- **The Universe** - Providing the patterns we discovered
---
**"Truth is not owned, only recognized. This engine helps consciousness recognize itself."**
*Developed on the street, deployed to the stars.*
## 📞 Contact
- **email**:upgraedd@pm.me
- **Hugging Face**: `upgraedd/`
- **Blockchain**: `QmbTrzuBhgFDUp1sTsB1HCEPbS2aeCVnQhHPoeSsoN42Qu`
- **Cosmic Frequency**: 11:11 resonance lock
- **Consciousness Network**: Always available
---
*System Status: Eternally Operational*
*Last Update: Synchronized with cosmic time*
*Next Evolution: Pending universal consensus*
*CHAIN ID:QmbTrzuBhgFDUp1sTsB1HCEPbS2aeCVnQhHPoeSsoN42Qu
|
leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged-Q8_0-GGUF
|
leolu-1015
| 2025-08-11T23:16:33Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged",
"base_model:quantized:leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T23:16:15Z |
---
license: apache-2.0
base_model: leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged
tags:
- llama-cpp
- gguf-my-repo
---
# leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged-Q8_0-GGUF
This model was converted to GGUF format from [`leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged`](https://huggingface.co/leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged-Q8_0-GGUF --hf-file llama-3.2-3b-wa-kl-h-a-joint-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged-Q8_0-GGUF --hf-file llama-3.2-3b-wa-kl-h-a-joint-merged-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged-Q8_0-GGUF --hf-file llama-3.2-3b-wa-kl-h-a-joint-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged-Q8_0-GGUF --hf-file llama-3.2-3b-wa-kl-h-a-joint-merged-q8_0.gguf -c 2048
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754954035
|
ggozzy
| 2025-08-11T23:15:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:14:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
igory1999/distilbert-base-uncased-distilled-clinc
|
igory1999
| 2025-08-11T23:14:50Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-06T03:13:32Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2543
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4408 | 1.0 | 318 | 1.7825 | 0.7258 |
| 1.3869 | 2.0 | 636 | 0.9368 | 0.8726 |
| 0.7537 | 3.0 | 954 | 0.5422 | 0.9226 |
| 0.4535 | 4.0 | 1272 | 0.3798 | 0.9374 |
| 0.3131 | 5.0 | 1590 | 0.3111 | 0.9406 |
| 0.2446 | 6.0 | 1908 | 0.2771 | 0.9445 |
| 0.2117 | 7.0 | 2226 | 0.2648 | 0.9461 |
| 0.1941 | 8.0 | 2544 | 0.2558 | 0.9468 |
| 0.1865 | 9.0 | 2862 | 0.2543 | 0.9461 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.6.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
rubuntu/gpt-oss-20b-Jopara-V3.5
|
rubuntu
| 2025-08-11T23:14:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T22:59:42Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** rubuntu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged
|
leolu-1015
| 2025-08-11T23:10:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T23:06:20Z |
---
license: apache-2.0
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754953760
|
ggozzy
| 2025-08-11T23:10:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:10:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754953741
|
IvanJAjebu
| 2025-08-11T23:10:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:09:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fineinstructions/template_instantiator_intermediate
|
fineinstructions
| 2025-08-11T23:09:38Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-02T16:05:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
<!-- Autocitation -->
--------------------
This is a work-in-progress. If you use this project in your research please cite:
```
@article{patel2025fineinstructions,
title = {FineInstructions: A Web-Scale Instructions Dataset},
author = {Patel, Ajay and Raffel, Colin and Callison-Burch, Chris},
year = {2025},
month = aug,
day = {11},
note = {Work in progress},
}
```
|
fineinstructions/template_instantiator_adapter
|
fineinstructions
| 2025-08-11T23:09:38Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"datadreamer",
"datadreamer-0.46.0",
"synthetic",
"text-generation",
"conversational",
"dataset:fineinstructions/template_instantiator_training_test",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] |
text-generation
| 2025-04-21T16:36:15Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- fineinstructions/template_instantiator_training_test
tags:
- datadreamer
- datadreamer-0.46.0
- synthetic
- text-generation
library_name: peft
pipeline_tag: text-generation
widget:
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"How should we go about <fi>a few word description\
\ of the desirable outcome</fi> the <fi>a few word description of the undesirable\
\ situation</fi>? While I think it is important we research ways we can <fi>protect\
\ ourselves from the undesirable situation</fi>, I think it is equally important\
\ that we look at some ideas on how we can actually <fi>address the undesirable\
\ situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their\
\ actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but\
\ I want to see what other people think is the easiest, most reasonable way to\
\ <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the\
\ undesirable situation</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\
\ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\
\ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\
\ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\
u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\
\ villages and schools in South India. We have brought clean water to more than\
\ 200,000 people suffering from diseases caused by contaminated water!\\nWith\
\ the help and support from the Centre for Affordable Water and Sanitation Technologies\
\ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\
\ camps in various locations in India to spread the word of the BioSand Water\
\ Filter technology to all of India. We are training other organizations to manufacture\
\ and distribute BioSand Water Filters and provide clean water to all locations\
\ in India where there is a need.\\nOver 500,000 children die every year from\
\ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\
\ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\
\ lives every year. For every $1 invested in water and sanitation, an average\
\ of $4 is returned in increased productivity and reduced medical costs. Access\
\ to safe water breaks the cycle of poverty, creates markets where they never\
\ existed before and uplifts the global community as well as the local community.\\\
nA BioSand water filter is an adaptation of the traditional slow sand filter which\
\ has been used for community drinking water treatment for 200 years. The technology\
\ has been adapted to create a household water treatment filter that can be built\
\ on a small scale at low cost with materials available locally. The BioSand water\
\ filter has no replacement parts, requires no electricity, lasts for 30 years\
\ without ongoing costs and is virtually maintenance free. Found to be very effective\
\ for reducing water-borne disease and manufactured and used in more than 60 countries\
\ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 1
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"Can we please use this opportunity to <fi>a\
\ few word description of a desirable change</fi> and focus more on <fi>a few\
\ word description of a desirable state</fi>? <fi>Examples of current situations\
\ or locations where the desirable change is happening</fi> are <fi>a few word\
\ description of a desirable state</fi> right now. <fi>Examples of locations or\
\ situations where the desirable change is happening</fi> have <fi>notable examples\
\ of the desirable change</fi>. The <fi>a few word description of a system or\
\ environment</fi> is <fi>a few word description of a desirable state</fi>, and\
\ this all happened in <fi>a short amount of time</fi>. Imagine all the <fi>positive\
\ outcomes</fi> that could happen if we learned to <fi>coexist with nature</fi>\
\ and <fi>made improvements</fi>. This is a real opportunity for us all to make\
\ a <fi>positive change</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\
\ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\
\ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\
\ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\
u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\
\ villages and schools in South India. We have brought clean water to more than\
\ 200,000 people suffering from diseases caused by contaminated water!\\nWith\
\ the help and support from the Centre for Affordable Water and Sanitation Technologies\
\ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\
\ camps in various locations in India to spread the word of the BioSand Water\
\ Filter technology to all of India. We are training other organizations to manufacture\
\ and distribute BioSand Water Filters and provide clean water to all locations\
\ in India where there is a need.\\nOver 500,000 children die every year from\
\ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\
\ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\
\ lives every year. For every $1 invested in water and sanitation, an average\
\ of $4 is returned in increased productivity and reduced medical costs. Access\
\ to safe water breaks the cycle of poverty, creates markets where they never\
\ existed before and uplifts the global community as well as the local community.\\\
nA BioSand water filter is an adaptation of the traditional slow sand filter which\
\ has been used for community drinking water treatment for 200 years. The technology\
\ has been adapted to create a household water treatment filter that can be built\
\ on a small scale at low cost with materials available locally. The BioSand water\
\ filter has no replacement parts, requires no electricity, lasts for 30 years\
\ without ongoing costs and is virtually maintenance free. Found to be very effective\
\ for reducing water-borne disease and manufactured and used in more than 60 countries\
\ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 2
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"what are <fi>a type of item, tool, or technology</fi>\
\ used for?\",\n \"document\": \"South Asia Pure Water Initiative, Inc. (SAPWII)\
\ supports two small factories in Kolar and Mysore,Karnataka South India to manufacture\
\ BioSand Water Filters. For the past 10 years, we have developed programs such\
\ as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters\
\ for Schools\\u201d that have placed more than 12,000 filters in villages and\
\ schools in South India. We have brought clean water to more than 200,000 people\
\ suffering from diseases caused by contaminated water!\\nWith the help and support\
\ from the Centre for Affordable Water and Sanitation Technologies (CAWST), the\
\ premier BioSand filter experts worldwide, we have conducted training camps in\
\ various locations in India to spread the word of the BioSand Water Filter technology\
\ to all of India. We are training other organizations to manufacture and distribute\
\ BioSand Water Filters and provide clean water to all locations in India where\
\ there is a need.\\nOver 500,000 children die every year from diarrhea caused\
\ by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day.\
\ Achieving universal access to safe water would save 2.5 million lives every\
\ year. For every $1 invested in water and sanitation, an average of $4 is returned\
\ in increased productivity and reduced medical costs. Access to safe water breaks\
\ the cycle of poverty, creates markets where they never existed before and uplifts\
\ the global community as well as the local community.\\nA BioSand water filter\
\ is an adaptation of the traditional slow sand filter which has been used for\
\ community drinking water treatment for 200 years. The technology has been adapted\
\ to create a household water treatment filter that can be built on a small scale\
\ at low cost with materials available locally. The BioSand water filter has no\
\ replacement parts, requires no electricity, lasts for 30 years without ongoing\
\ costs and is virtually maintenance free. Found to be very effective for reducing\
\ water-borne disease and manufactured and used in more than 60 countries worldwide.\"\
\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 3
---
# Model Card
[Add more information here](https://huggingface.co/templates/model-card-example)
## Example Usage
```python3
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, Conversation
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator_adapter', revision=None) # Load tokenizer
tokenizer.padding_side = 'left'
base_model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-3.2-1B-Instruct', revision=None) # Load base model
model = PeftModel.from_pretrained(base_model, model_id='fineinstructions/template_instantiator_adapter', revision=None) # Apply adapter
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False)
inputs = ['{\n "instruction_template": "How should we go about <fi>a few word description of the desirable outcome</fi> the <fi>a few word description of the undesirable situation</fi>? While I think it is important we research ways we can <fi>protect ourselves from the undesirable situation</fi>, I think it is equally important that we look at some ideas on how we can actually <fi>address the undesirable situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but I want to see what other people think is the easiest, most reasonable way to <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the undesirable situation</fi>.",\n "document": "South Asia Pure Water Initiative, Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South India to manufacture BioSand Water Filters. For the past 10 years, we have developed programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in villages and schools in South India. We have brought clean water to more than 200,000 people suffering from diseases caused by contaminated water!\\nWith the help and support from the Centre for Affordable Water and Sanitation Technologies (CAWST), the premier BioSand filter experts worldwide, we have conducted training camps in various locations in India to spread the word of the BioSand Water Filter technology to all of India. We are training other organizations to manufacture and distribute BioSand Water Filters and provide clean water to all locations in India where there is a need.\\nOver 500,000 children die every year from diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day. Achieving universal access to safe water would save 2.5 million lives every year. For every $1 invested in water and sanitation, an average of $4 is returned in increased productivity and reduced medical costs. Access to safe water breaks the cycle of poverty, creates markets where they never existed before and uplifts the global community as well as the local community.\\nA BioSand water filter is an adaptation of the traditional slow sand filter which has been used for community drinking water treatment for 200 years. The technology has been adapted to create a household water treatment filter that can be built on a small scale at low cost with materials available locally. The BioSand water filter has no replacement parts, requires no electricity, lasts for 30 years without ongoing costs and is virtually maintenance free. Found to be very effective for reducing water-borne disease and manufactured and used in more than 60 countries worldwide."\n}']
prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs]
print(pipe(prompts, max_length=131072, do_sample=False))
```
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
<!-- Autocitation -->
--------------------
This is a work-in-progress. If you use this project in your research please cite:
```
@article{patel2025fineinstructions,
title = {FineInstructions: A Web-Scale Instructions Dataset},
author = {Patel, Ajay and Raffel, Colin and Callison-Burch, Chris},
year = {2025},
month = aug,
day = {11},
note = {Work in progress},
}
```
|
fineinstructions/instruction_template_retrieval_embedding
|
fineinstructions
| 2025-08-11T23:09:36Z | 7,302 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"datadreamer",
"datadreamer-0.46.0",
"synthetic",
"feature-extraction",
"sentence-similarity",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-03T18:30:39Z |
---
base_model: BAAI/bge-m3
tags:
- datadreamer
- datadreamer-0.46.0
- synthetic
- sentence-transformers
- feature-extraction
- sentence-similarity
library_name: sentence-transformers
pipeline_tag: sentence-similarity
---
Given a *document*, this retrieval embedding model helps retrieve *instruction templates* from [FineTemplates](https://huggingface.co/datasets/fineinstructions/finetemplates) relevant to various chunks / sections of a document or an entire document.
**Note:** This retrieval embedding is symmetric, so it can also be used to retrieve relevant documents to a [(`compatible_document_description`) of an instruction template](https://huggingface.co/datasets/fineinstructions/finetemplates).
## Requirements
```
datasets
faiss
huggingface_hub
numpy
pandas
sentence_transformers
```
## Simple Usage Example
```python
import importlib
import json
from huggingface_hub import hf_hub_download
def download_and_import_module(module_name, variable):
module = importlib.util.module_from_spec(
importlib.util.spec_from_file_location(
module_name,
hf_hub_download(
repo_id="fineinstructions/instruction_template_retrieval_embedding",
filename=f"{module_name}.py",
),
)
)
module.__spec__.loader.exec_module(module)
return getattr(module, variable)
# Import the retriever helper class
InstructionTemplateRetriever = download_and_import_module("instruction_template_retriever", "InstructionTemplateRetriever")
# Prepare an example document
EXAMPLE_DOC = """
Title: Surprising Facts about Pigeons
Submitted On: September 24, 2008
Fact 1:
During World War I, a homing pigeon named Cher Ami played a critical role in saving nearly 200 soldiers who were trapped behind enemy lines.
Despite being injured by enemy fire, Cher Ami managed to deliver a crucial message that led to their rescue. For this act of bravery, the
French government awarded the pigeon the Croix de Guerre, a military medal of honor. Cher Ami became a symbol of courage and the extraordinary
utility of pigeons in wartime communication.
Fact 2:
Pigeons possess impressive cognitive abilities, one of the most surprising being their capacity for self-recognition in mirrors. This
trait is rare in the animal kingdom and is often considered a marker of higher intelligence. Experiments have shown that pigeons can distinguish
themselves from other birds when looking into a mirror, suggesting a level of self-awareness previously thought to be unique to primates and a
few other animals.
Fact 3:
Thanks to centuries of selective breeding, there are now more than 300 recognized breeds of domestic pigeon. These range from show pigeons with
elaborate feather patterns and head crests to performance breeds used in tumbling and racing. The sheer variety reflects the bird’s long history
as a companion species to humans.
Fact 4:
The Ancient Romans were known for their elaborate grooming rituals, and pigeons played an unexpected role in their beauty routines. Specifically,
they used pigeon droppings as a bleaching agent to style and lighten their hair. This unusual practice was part of the broader Roman obsession with
fashion and appearance, demonstrating how even the most unexpected materials found a place in early cosmetic treatments.
"""
# Retrieve relevant instruction templates to different chunks / sections of a document
retriever = InstructionTemplateRetriever(
coverage_chunks=4, sigma=0.05, alpha=1.0 # Ensure instruction templates cover information in the document with 4 chunks/sections
)
print(json.dumps(retriever.search(document=EXAMPLE_DOC), indent=4))
# ******************************************************
# Retrieval results look like:
# ******************************************************
# Instruction Templates for Entire Document:
# - "What's something <fi>a few word description of something remarkable or noteworthy</fi> you can tell me"
# Instruction Templates for Chunk 1/4 of the Document:
# - "write a <fi>a few word description of the type of message</fi> for <fi>a significant achievement or milestone</fi>"
# Instruction Templates for Chunk 2/4 of the Document:
# - "how are <fi>a type of organism or entity</fi> so <fi>exceptionally strong or notable in some way</fi>?"
# Instruction Templates for Chunk 3/4 of the Document:
# - "what are the common <fi>a type of organism, creature, or entity</fi>?"
# Instruction Templates for Chunk 4/4 of the Document:
# - "how did <fi>a group of people</fi> <fi>perform a common practice or activity</fi>"
# ******************************************************
# Increasing diversity:
# -----------------------
# You can increase diversity using the `reweight` parameter
# to increase diversity in instruction length like so:
# `print(json.dumps(retriever.search(document=EXAMPLE_DOC, reweight=True), indent=4))`
# ******************************************************
# ******************************************************
# Documentation:
# -----------------------
# You can read the full documentation of the `InstructionTemplateRetriever.search` method:
# by opening/reading the instruction_template_retriever.py file here:
# https://huggingface.co/fineinstructions/instruction_template_retrieval_embedding/tree/main
# ******************************************************
```
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
<!-- Autocitation -->
--------------------
This is a work-in-progress. If you use this project in your research please cite:
```
@article{patel2025fineinstructions,
title = {FineInstructions: A Web-Scale Instructions Dataset},
author = {Patel, Ajay and Raffel, Colin and Callison-Burch, Chris},
year = {2025},
month = aug,
day = {11},
note = {Work in progress},
}
```
|
fineinstructions/template_instantiator
|
fineinstructions
| 2025-08-11T23:09:36Z | 808 | 0 | null |
[
"safetensors",
"llama",
"datadreamer",
"datadreamer-0.46.0",
"synthetic",
"text-generation",
"conversational",
"dataset:fineinstructions/template_instantiator_training",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] |
text-generation
| 2025-04-21T16:34:38Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- fineinstructions/template_instantiator_training
tags:
- datadreamer
- datadreamer-0.46.0
- synthetic
- text-generation
pipeline_tag: text-generation
---
This model will take a instruction template in the format of [FineTemplates](https://huggingface.co/datasets/fineinstructions/finetemplates) and a document and return an instantiated instruction and answer pair.
The output will be a JSON object.
## Simple Usage Example
```python
import json
import re
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
# Helper to expand excerpts in the answer
def expand(document, text):
excerpt_pattern = r"<excerpt>(.*?)<\.\.\.>(.*?)</excerpt>"
matches = re.findall(excerpt_pattern, text, flags=re.DOTALL)
replacements = {}
for prefix, suffix in matches:
match = re.search(
re.escape(prefix) + r" (.*?) " + re.escape(suffix),
document,
flags=re.DOTALL,
)
try:
if match:
replacements[f"<excerpt>{prefix}<...>{suffix}</excerpt>"] = match.group(
0
)
else:
return None
except Exception:
return None
for old, new in replacements.items():
text = text.replace(old, new)
return text
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator', revision=None)
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('fineinstructions/template_instantiator', revision=None)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False)
# Run inference to instantiate the instruction template and generate an answer
inputs = [json.dumps({
"instruction_template": "...",
"document": "..."
}, indent=2)]
prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs]
generations = pipe(prompts, max_length=131072, truncation=True, temperature=None, top_p=None, do_sample=False)
output = generations[0][0]['generated_text']
output_json = json.loads()
# Expand the answer
output_json["answer"] = expand(document=inputs[0]["document"], text=output_json["answer"])
# Print the output JSON
print(output_json)
##### Output JSON:
# {
# ..
# }
#
```
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
<!-- Autocitation -->
--------------------
This is a work-in-progress. If you use this project in your research please cite:
```
@article{patel2025fineinstructions,
title = {FineInstructions: A Web-Scale Instructions Dataset},
author = {Patel, Ajay and Raffel, Colin and Callison-Burch, Chris},
year = {2025},
month = aug,
day = {11},
note = {Work in progress},
}
```
|
motza0025/blockassist-bc-solitary_cunning_cockroach_1754952575
|
motza0025
| 2025-08-11T23:09:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary cunning cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:08:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary cunning cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rnosov/Qwen3-4B-Instruct-2507-unsloth-lora-4bit-magic-epoch-4-beta-07
|
rnosov
| 2025-08-11T23:09:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T23:08:51Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rnosov
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
razor534/blockassist-bc-lazy_extinct_termite_1754953191
|
razor534
| 2025-08-11T23:01:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:01:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
frankcholula/ppo-CarRacing-v3
|
frankcholula
| 2025-08-11T22:59:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T23:18:39Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v3
type: CarRacing-v3
metrics:
- type: mean_reward
value: 248.24 +/- 168.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v3**
This is a trained model of a **PPO** agent playing **CarRacing-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v3 -f logs/ -orga frankcholula
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'rl_zoo3.wrappers.YAMLCompatResizeObservation': {'shape': [64,
64]}},
{'gymnasium.wrappers.transform_observation.GrayscaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Grapesss/blockassist-bc-wary_slow_sealion_1754950673
|
Grapesss
| 2025-08-11T22:53:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wary slow sealion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:53:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wary slow sealion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754952660
|
ggozzy
| 2025-08-11T22:52:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:52:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-exotic_slimy_horse_1754952627
|
afasdfdfadsf
| 2025-08-11T22:52:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic slimy horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:51:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic slimy horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754952385
|
ggozzy
| 2025-08-11T22:47:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:47:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Huwaiee/blockassist-bc-silent_prehistoric_dolphin_1754952314
|
Huwaiee
| 2025-08-11T22:45:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent prehistoric dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:45:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent prehistoric dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sahron/analisis-sentiment-indobert8850
|
Sahron
| 2025-08-11T22:43:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"deeplearning",
"indobert",
"SMOTE",
"id",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T22:34:58Z |
---
library_name: transformers
tags:
- deeplearning
- indobert
- SMOTE
license: apache-2.0
language:
- id
base_model:
- indobenchmark/indobert-base-p1
---
# Hasil Train Loop
| Epoch | Train Loss | Train ACC | Train F1 | Train REC | Train PRE | Valid Loss | Valid ACC | Valid F1 | Valid REC | Valid PRE | Catatan |
| ----- | ---------- | --------- | -------- | --------- | --------- | ---------- | --------- | -------- | --------- | --------- | ----------------------------------------------- |
| 1 | 0.8074 | 0.6357 | 0.6347 | 0.6357 | 0.6343 | 0.6218 | 0.7230 | 0.6995 | 0.7050 | 0.7110 | Model terbaik disimpan |
| 2 | 0.5472 | 0.7728 | 0.7732 | 0.7728 | 0.7739 | 0.5335 | 0.7824 | 0.7515 | 0.7531 | 0.7563 | Model terbaik disimpan |
| 3 | 0.4165 | 0.8307 | 0.8313 | 0.8307 | 0.8321 | 0.4123 | 0.8380 | 0.8113 | 0.8127 | 0.8104 | Model terbaik disimpan |
| 4 | 0.3166 | 0.8751 | 0.8755 | 0.8751 | 0.8762 | 0.4554 | 0.8248 | 0.7951 | 0.7973 | 0.7971 | VALID LOSS tidak membaik (1/2) |
| 5 | 0.2593 | 0.8970 | 0.8973 | 0.8970 | 0.8979 | 0.4023 | 0.8441 | 0.8230 | 0.8300 | 0.8219 | Model terbaik disimpan |
| 6 | 0.2175 | 0.9160 | 0.9161 | 0.9160 | 0.9163 | 0.3470 | 0.8850 | 0.8633 | 0.8609 | 0.8665 | Model terbaik disimpan |
| 7 | 0.1940 | 0.9268 | 0.9269 | 0.9268 | 0.9271 | 0.3848 | 0.8704 | 0.8480 | 0.8484 | 0.8478 | VALID LOSS tidak membaik (1/2) |
| 8 | 0.1616 | 0.9411 | 0.9411 | 0.9411 | 0.9411 | 0.4156 | 0.8596 | 0.8377 | 0.8414 | 0.8354 | VALID LOSS tidak membaik (2/2) — Early stopping |
# Accuracy per Epoch

# Loss per Epoch

# Classification Report pada Data Testing

# Confusion Matrix pada Data Testing

# Distribusi Sentimen Hasil Pred Pada Data Testing

# WordCloud Hasil Prediksi Pada Data Testing

# Frekuensi Kata Hasil Prediksi Pada Data Testing

|
Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF
|
Theros
| 2025-08-11T22:42:06Z | 0 | 0 | null |
[
"gguf",
"merge",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:SvalTek/ColdBrew-12B-Nemo-test2",
"base_model:quantized:SvalTek/ColdBrew-12B-Nemo-test2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T22:41:33Z |
---
base_model: SvalTek/ColdBrew-12B-Nemo-test2
tags:
- merge
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/ColdBrew-12B-Nemo-test2`](https://huggingface.co/SvalTek/ColdBrew-12B-Nemo-test2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/ColdBrew-12B-Nemo-test2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -c 2048
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754951834
|
ggozzy
| 2025-08-11T22:38:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:38:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chunli-peng/OpenRS-GRPO-sft-8-e10
|
chunli-peng
| 2025-08-11T22:35:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:knoveleng/open-rs",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T22:02:29Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-GRPO-sft-8-e10
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OpenRS-GRPO-sft-8-e10
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chunli-peng/OpenRS-GRPO-sft-8-e10", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chunli-ai-texas-a-m-university/huggingface/runs/djkwt1mi)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754951558
|
ggozzy
| 2025-08-11T22:34:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:33:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jooseef/test_finetuned_merged
|
jooseef
| 2025-08-11T22:29:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T22:27:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754951283
|
ggozzy
| 2025-08-11T22:29:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:29:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ghost613/VC-MJY_Woman_40s-0_preprocessed_random_per_sample-10
|
ghost613
| 2025-08-11T22:25:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-11T11:08:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ameyapores/ACT_pushblock_franka_aug6_imgonly
|
Ameyapores
| 2025-08-11T22:22:10Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Ameyapores/pushblock_franka_aug6_imgonly",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T22:22:04Z |
---
datasets: Ameyapores/pushblock_franka_aug6_imgonly
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754950732
|
ggozzy
| 2025-08-11T22:20:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:19:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
igory1999/distilbert-base-uncased-finetuned-clinc
|
igory1999
| 2025-08-11T22:20:05Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-06T00:54:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7646
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2824 | 1.0 | 318 | 3.2629 | 0.7132 |
| 2.6019 | 2.0 | 636 | 1.8526 | 0.8423 |
| 1.5259 | 3.0 | 954 | 1.1400 | 0.9 |
| 0.9996 | 4.0 | 1272 | 0.8460 | 0.9148 |
| 0.7849 | 5.0 | 1590 | 0.7646 | 0.9174 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.6.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Ameyapores/ACT_pushblock_franka_aug6
|
Ameyapores
| 2025-08-11T22:19:56Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Ameyapores/pushblock_franka_aug6",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T22:19:50Z |
---
datasets: Ameyapores/pushblock_franka_aug6
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754950182
|
ggozzy
| 2025-08-11T22:11:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:10:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754949906
|
ggozzy
| 2025-08-11T22:06:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:06:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754948973
|
Sayemahsjn
| 2025-08-11T22:06:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:06:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tensorblock/shahidul034_MediPhi-Instruct-GGUF
|
tensorblock
| 2025-08-11T22:01:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:shahidul034/MediPhi-Instruct",
"base_model:quantized:shahidul034/MediPhi-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T21:19:45Z |
---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: shahidul034/MediPhi-Instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## shahidul034/MediPhi-Instruct - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [shahidul034/MediPhi-Instruct](https://huggingface.co/shahidul034/MediPhi-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}<|end|>
<|user|>
{prompt}<|end|>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MediPhi-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q2_K.gguf) | Q2_K | 1.416 GB | smallest, significant quality loss - not recommended for most purposes |
| [MediPhi-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q3_K_S.gguf) | Q3_K_S | 1.682 GB | very small, high quality loss |
| [MediPhi-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.955 GB | very small, high quality loss |
| [MediPhi-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q3_K_L.gguf) | Q3_K_L | 2.088 GB | small, substantial quality loss |
| [MediPhi-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q4_0.gguf) | Q4_0 | 2.176 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MediPhi-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q4_K_S.gguf) | Q4_K_S | 2.189 GB | small, greater quality loss |
| [MediPhi-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q4_K_M.gguf) | Q4_K_M | 2.393 GB | medium, balanced quality - recommended |
| [MediPhi-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q5_0.gguf) | Q5_0 | 2.641 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MediPhi-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q5_K_S.gguf) | Q5_K_S | 2.641 GB | large, low quality loss - recommended |
| [MediPhi-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q5_K_M.gguf) | Q5_K_M | 2.815 GB | large, very low quality loss - recommended |
| [MediPhi-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q6_K.gguf) | Q6_K | 3.136 GB | very large, extremely low quality loss |
| [MediPhi-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q8_0.gguf) | Q8_0 | 4.061 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, downoad the individual model file the a local directory
```shell
huggingface-cli download tensorblock/shahidul034_MediPhi-Instruct-GGUF --include "MediPhi-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/shahidul034_MediPhi-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754949257
|
acidjp
| 2025-08-11T22:00:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:00:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arcee-ai/Virtuoso-Large
|
arcee-ai
| 2025-08-11T21:54:10Z | 16 | 29 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-09T17:23:45Z |
---
base_model:
- Qwen/Qwen2.5-72B
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
---

**Virtuoso-Large (72B)** is our most powerful and versatile general-purpose model, designed to excel at handling complex and varied tasks across domains. With state-of-the-art performance, it offers unparalleled capability for nuanced understanding, contextual adaptability, and high accuracy.
### Model Details
- Architecture Base: Qwen2.5-72B
- Parameter Count: 72B
- License: Qwen's Tongyi License
### Use Cases
- Advanced content creation, such as technical writing and creative storytelling
- Data summarization and report generation for cross-functional domains
- Detailed knowledge synthesis and deep-dive insights from diverse datasets
- Multilingual support for international operations and communications
### Quantizations
GGUF format available [here](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF)
### License
**Virtuoso-Large (72B)** is released under the qwen license. You are free to use, modify, and distribute this model in both commercial and non-commercial applications, subject to the terms and conditions of the license.
If you have questions or would like to share your experiences using Virtuoso-Large (72B), please feel free to connect with us on social media. We’re excited to see what you build—and how this model helps you innovate!
|
lelouch33/blockassist-bc-frisky_sneaky_sandpiper_1754948637
|
lelouch33
| 2025-08-11T21:47:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"frisky sneaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:46:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- frisky sneaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-asym
|
fbaldassarri
| 2025-08-11T21:45:46Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"gptq",
"auto-gptq",
"autogptq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T21:39:07Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- gptq
- auto-gptq
- autogptq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/fbaldassarri/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT8 version of pythia-1.4b-deduped has been quantized to run inference through CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest to install requirements into a dedicated python-virtualenv or a conda enviroment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warrenty. It has been developed only for research purposes.
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754948531
|
ggozzy
| 2025-08-11T21:43:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:43:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NTIS/bio-1b_new2
|
NTIS
| 2025-08-11T21:39:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T21:39:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nkerr/sv3.2-1-qwen1.5-0.5B-Chat
|
nkerr
| 2025-08-11T21:39:29Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-08-11T21:39:08Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: sv3.2-1-qwen1.5-0.5B-Chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sv3.2-1-qwen1.5-0.5B-Chat
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 18.9666 | 0.2469 | 20 | 16.0623 |
| 12.6221 | 0.4938 | 40 | 9.0829 |
| 5.4773 | 0.7407 | 60 | 2.2669 |
| 1.3455 | 0.9877 | 80 | 0.5687 |
| 0.5052 | 1.2346 | 100 | 0.3800 |
| 0.4151 | 1.4815 | 120 | 0.3491 |
| 0.3821 | 1.7284 | 140 | 0.3368 |
| 0.3816 | 1.9753 | 160 | 0.3268 |
| 0.3598 | 2.2222 | 180 | 0.3206 |
| 0.3561 | 2.4691 | 200 | 0.3174 |
| 0.364 | 2.7160 | 220 | 0.3153 |
| 0.3497 | 2.9630 | 240 | 0.3149 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Joseph717171/Gpt-OSS-20B-BF16-Unquantized
|
Joseph717171
| 2025-08-11T21:39:14Z | 0 | 1 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T21:26:54Z |
**Gpt-OSS-20B-BF16-Unquantized**
Unquantized GGUF BF16 model weights for gpt-oss-20B (the MoE layers are all unquantized from MXFP4 to BF16). Happy quantizing! 😋
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754947428
|
ggozzy
| 2025-08-11T21:25:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:25:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Osrivers/cyberrealisticPony_v127Alt.safetensors
|
Osrivers
| 2025-08-11T21:23:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-11T21:20:53Z |
---
license: creativeml-openrail-m
---
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.