modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Tlgoa/tmr-ai-nano
|
Tlgoa
| 2025-09-04T17:49:17Z | 0 | 1 |
mlx
|
[
"mlx",
"safetensors",
"gemma3_text",
"finance",
"gemma",
"instruction-tuning",
"dataset:Josephgflowers/Finance-Instruct-500k",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:other",
"region:us"
] | null | 2025-09-04T17:24:26Z |
---
license: other
base_model: google/gemma-3-270m-it
tags:
- mlx
- finance
- gemma
- instruction-tuning
datasets:
- Josephgflowers/Finance-Instruct-500k
---
# Gemma-3-270M - Fine-tuned for Financial Instructions
This is a fine-tuned version of Google's `gemma-3-270m-it` model, adapted for financial instruction-following tasks.
## Model Description
This model was fine-tuned using the Apple MLX framework. The goal was to specialize the base model for financial report summarization and decision-making assistance. It was trained on the `Josephgflowers/Finance-Instruct-500k` dataset.
## Intended Use
This model is intended for tasks related to the financial domain, such as:
* Answering questions about financial concepts.
* Summarizing financial reports.
* Following instructions based on financial data.
## How to Use
You can use this model with the `transformers` library just like any other standard Hugging Face model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tlgoa/tmr-ai-nano"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Note: Gemma 3 uses a specific chat template.
# For single-turn inference, you can format it like this:
prompt = "What is the difference between revenue and profit?"
formatted_prompt = f"### User:\n{prompt}\n\n### Assistant:"
inputs = tokenizer(formatted_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Clean up the response to only show the assistant's part
assistant_response = response.split("### Assistant:")[1].strip()
print(assistant_response)
```
## Training Procedure
### Dataset
The model was fine-tuned on the `Josephgflowers/Finance-Instruct-500k` dataset. The data was preprocessed to fit the following format:
```
### User:
{user_prompt}
### Assistant:
{assistant_response}
```
### Fine-tuning
The model was fine-tuned directly (full parameter tuning) using an Adam optimizer. Due to challenges with LoRA implementation in the available MLX version, a full fine-tuning approach was chosen. The fine-tuned weights were originally saved in MLX's `.npz` format and subsequently converted back to Hugging Face `safetensors` format for distribution.
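As a minimal sketch of that final conversion step (file names are assumptions, not taken from this repo), the MLX `.npz` weights can be rewritten as `safetensors` roughly like this:
```python
# Sketch: convert an MLX .npz checkpoint to safetensors, assuming the
# checkpoint stores a flat {parameter_name: array} mapping.
import numpy as np
from safetensors.numpy import save_file

weights = dict(np.load("fine_tuned_weights.npz"))
save_file(weights, "model.safetensors")
```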
## Licenses
- **Base Model:** This model is based on Google's Gemma-3-270M, which is subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
- **Dataset:** The training data from `Josephgflowers/Finance-Instruct-500k` is available under the Apache 2.0 License.
|
rubengerad/gemma3_google
|
rubengerad
| 2025-09-04T17:42:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/gemma-3-270m-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-270m-it-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T23:58:32Z |
---
base_model: unsloth/gemma-3-270m-it-unsloth-bnb-4bit
library_name: transformers
model_name: gemma3_google
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for gemma3_google
This model is a fine-tuned version of [unsloth/gemma-3-270m-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-270m-it-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rubengerad/gemma3_google", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
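For readers unfamiliar with TRL's SFT entry point, a minimal sketch along these lines might look as follows (this is not the author's exact setup; the dataset and output directory are placeholders from the TRL docs):
```python
# Hedged sketch of TRL SFT; dataset and output_dir are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="unsloth/gemma-3-270m-it-unsloth-bnb-4bit",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3_google"),
)
trainer.train()
```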
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Trelis/Qwen3-4B_ds-arc-agi-2-partial-20_test-c4
|
Trelis
| 2025-09-04T17:39:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T17:28:36Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF
|
mradermacher
| 2025-09-04T17:25:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Llama-3.1-8B-sft-spin-10k-IPO",
"base_model:quantized:AmberYifan/Llama-3.1-8B-sft-spin-10k-IPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-04T16:21:10Z |
---
base_model: AmberYifan/Llama-3.1-8B-sft-spin-10k-IPO
language:
- en
library_name: transformers
model_name: Llama-3.1-8B-sft-spin-10k-IPO
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-spin-10k-IPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-sft-spin-10k-IPO-GGUF).***
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
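As one concrete (hypothetical) route, a quant from the table below can be run with `llama-cpp-python` after downloading it locally; the file name assumes the Q4_K_M variant:
```python
# Hedged example (not from this README): run a downloaded quant with
# llama-cpp-python. The local path assumes the Q4_K_M file.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3.1-8B-sft-spin-10k-IPO.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain IPO preference optimization in one sentence.", max_tokens=96)
print(out["choices"][0]["text"])
```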
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-IPO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-IPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eekay/Meta-Llama-3-8B-Instruct-cat-numbers-ft
|
eekay
| 2025-09-04T17:15:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T17:12:02Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Viktor-01/blockassist-bc-leaping_humming_finch_1757003705
|
Viktor-01
| 2025-09-04T17:14:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T17:14:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_250904_1_rosbag_transform
|
ROBOTIS
| 2025-09-04T17:08:36Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_250904_1_rosbag",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-04T17:08:24Z |
---
datasets: ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_250904_1_rosbag
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
obadx/muaalem-model-v3
|
obadx
| 2025-09-04T16:58:17Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multi_level_ctc",
"generated_from_trainer",
"quran",
"ASR",
"ar",
"dataset:obadx/muaalem-annotated-v3",
"arxiv:2509.00094",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-23T22:45:37Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
- quran
- ASR
model-index:
- name: muaalem-model-v3
results: []
language:
- ar
metrics:
- cer
datasets:
- obadx/muaalem-annotated-v3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muaalem-model-v3
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the [obadx/muaalem-annotated-v3](https://huggingface.co/datasets/obadx/muaalem-annotated-v3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0118
- Per Phonemes: 0.0043
- Per Hams Or Jahr: 0.0020
- Per Shidda Or Rakhawa: 0.0027
- Per Tafkheem Or Taqeeq: 0.0031
- Per Itbaq: 0.0013
- Per Safeer: 0.0014
- Per Qalqla: 0.0013
- Per Tikraar: 0.0037
- Per Tafashie: 0.0019
- Per Istitala: 0.0012
- Per Ghonna: 0.0017
- Average Per: 0.0022
## Model description
The model was presented in the paper [Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning](https://huggingface.co/papers/2509.00094)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per Phonemes | Per Hams Or Jahr | Per Shidda Or Rakhawa | Per Tafkheem Or Taqeeq | Per Itbaq | Per Safeer | Per Qalqla | Per Tikraar | Per Tafashie | Per Istitala | Per Ghonna | Average Per |
|:-------------:|:------:|:----:|:---------------:|:------------:|:----------------:|:---------------------:|:----------------------:|:---------:|:----------:|:----------:|:-----------:|:------------:|:------------:|:----------:|:-----------:|
| 0.128 | 0.2002 | 650 | 0.0237 | 0.0075 | 0.0031 | 0.0043 | 0.0071 | 0.0020 | 0.0020 | 0.0019 | 0.0056 | 0.0037 | 0.0018 | 0.0024 | 0.0038 |
| 0.0172 | 0.4005 | 1300 | 0.0128 | 0.0038 | 0.0017 | 0.0025 | 0.0039 | 0.0013 | 0.0013 | 0.0012 | 0.0044 | 0.0024 | 0.0011 | 0.0016 | 0.0023 |
| 0.0146 | 0.6007 | 1950 | 0.0105 | 0.0033 | 0.0014 | 0.0022 | 0.0028 | 0.0010 | 0.0012 | 0.0011 | 0.0039 | 0.0017 | 0.0009 | 0.0015 | 0.0019 |
| 0.0111 | 0.8010 | 2600 | 0.0118 | 0.0043 | 0.0020 | 0.0027 | 0.0031 | 0.0013 | 0.0014 | 0.0013 | 0.0037 | 0.0019 | 0.0012 | 0.0017 | 0.0022 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 3.3.2
- Tokenizers 0.21.4
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1757003405
|
pempekmangedd
| 2025-09-04T16:55:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:55:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hast2/2025-paraphrase_mpnet_influence-figure_v1
|
hast2
| 2025-09-04T16:47:49Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"paraphrase",
"semantic-similarity",
"figurative-language",
"literary-analysis",
"sentence-similarity",
"ja",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-04T16:47:25Z |
---
language:
- ja
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- paraphrase
- semantic-similarity
- figurative-language
- literary-analysis
pipeline_tag: sentence-similarity
---
# hast2_2025_paraphrase_mpnet_influence_v1
## Model Description
Paraphrase-MpNet Influence+Figure v2
This is a paraphrase detection model fine-tuned to compute semantic similarity between sentences.
It is specialized for analyzing figurative expressions and literary influence.
## Usage
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('hast2/2025-hast2_2025_paraphrase_mpnet_influence_v1')
# Encoding example
sentences = ['今日はいい天気です', '本日は晴天なり']
embeddings = model.encode(sentences)
# Similarity computation (.item() extracts the scalar from the 1x1 tensor)
similarity = model.similarity(embeddings[0], embeddings[1]).item()
print(f"Similarity: {similarity:.4f}")
```
## Training Details
- Base Model: paraphrase-mpnet-base-v2 / paraphrase-XLM-R-multilingual-v1
- Fine-tuning Task: Paraphrase Detection for Figurative Language
- Training Data: Japanese and English figurative expressions
## Intended Use
This model is suited to the following uses:
- Computing semantic similarity between sentences
- Detecting and analyzing figurative expressions
- Semantic analysis of literary texts
- Paraphrase detection
## Limitations
- Because it is specialized for figurative and literary expressions, it may not be optimized for general-purpose text
- It is intended for academic research; testing is recommended before any commercial use
## Citation
If you use this model in your research, please cite it appropriately.
## License
This model is released under the Apache 2.0 license.
|
mcptester0606/MyAwesomeModel-TestRepo
|
mcptester0606
| 2025-09-04T16:32:52Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-04T16:31:55Z |
---
license: mit
library_name: transformers
---
# MyAwesomeModel
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="figures/fig1.png" width="60%" alt="MyAwesomeModel" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="figures/fig2.png" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## 1. Introduction
The MyAwesomeModel has undergone a significant version upgrade. In the latest update, MyAwesomeModel has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of other leading models.
<p align="center">
<img width="80%" src="figures/fig3.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate and enhanced support for function calling.
## 2. Evaluation Results
### Comprehensive Benchmark Results
<div align="center">
| | Benchmark | Model1 | Model2 | Model1-v2 | MyAwesomeModel |
|---|---|---|---|---|---|
| **Core Reasoning Tasks** | Math Reasoning | 0.510 | 0.535 | 0.521 | 0.550 |
| | Logical Reasoning | 0.789 | 0.801 | 0.810 | 0.650 |
| | Common Sense | 0.716 | 0.702 | 0.725 | 0.828 |
| **Language Understanding** | Reading Comprehension | 0.671 | 0.685 | 0.690 | 0.792 |
| | Question Answering | 0.582 | 0.599 | 0.601 | 0.607 |
| | Text Classification | 0.803 | 0.811 | 0.820 | 0.819 |
| | Sentiment Analysis | 0.777 | 0.781 | 0.790 | 0.736 |
| **Generation Tasks** | Code Generation | 0.615 | 0.631 | 0.640 | 0.700 |
| | Creative Writing | 0.588 | 0.579 | 0.601 | 0.644 |
| | Dialogue Generation | 0.621 | 0.635 | 0.639 | 0.767 |
| | Summarization | 0.745 | 0.755 | 0.760 | 0.804 |
| **Specialized Capabilities**| Translation | 0.782 | 0.799 | 0.801 | 0.676 |
| | Knowledge Retrieval | 0.651 | 0.668 | 0.670 | 0.610 |
| | Instruction Following | 0.733 | 0.749 | 0.751 | 0.758 |
| | Safety Evaluation | 0.718 | 0.701 | 0.725 | 0.739 |
</div>
### Overall Performance Summary
The MyAwesomeModel demonstrates strong performance across all evaluated benchmark categories, with particularly notable results in reasoning and generation tasks.
## 3. Chat Website & API Platform
We offer a chat interface and API for you to interact with MyAwesomeModel. Please check our official website for more details.
## 4. How to Run Locally
Please refer to our code repository for more information about running MyAwesomeModel locally.
Compared to previous versions, the usage recommendations for MyAwesomeModel have the following changes:
1. System prompt is supported.
2. It is not required to add special tokens at the beginning of the output to force the model into a specific thinking pattern.
The model architecture of MyAwesomeModel-Small is identical to that of its base model, but it shares the same tokenizer configuration as the main MyAwesomeModel. This model can be run in the same manner as its base model.
### System Prompt
We recommend using the following system prompt with a specific date.
```
You are MyAwesomeModel, a helpful AI assistant.
Today is {current date}.
```
For example,
```
You are MyAwesomeModel, a helpful AI assistant.
Today is May 28, 2025, Monday.
```
### Temperature
We recommend setting the temperature parameter $T_{model}$ to 0.6.
### Prompts for File Uploading and Web Search
For file uploading, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
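To make the placeholders concrete, here is a small illustrative fill of the template (values invented for demonstration):
```python
# Illustrative only: fill the file-upload template with sample values.
file_template = """[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""

prompt = file_template.format(
    file_name="report.txt",
    file_content="Revenue grew 12% year over year.",
    question="Summarize the file in one sentence.",
)
print(prompt)
```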
For web search enhanced generation, we recommend the following prompt template where {search_results}, {cur_date}, and {question} are arguments.
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
## 5. License
This code repository is licensed under the [MIT License](LICENSE). The use of MyAwesomeModel models is also subject to the [MIT License](LICENSE). The model series supports commercial use and distillation.
## 6. Contact
If you have any questions, please raise an issue on our GitHub repository or contact us at contact@MyAwesomeModel.ai.
|
zcopwerq/blockassist-bc-rugged_voracious_seal_1757003508
|
zcopwerq
| 2025-09-04T16:32:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged voracious seal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:31:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged voracious seal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757003358
|
fakir22
| 2025-09-04T16:29:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:29:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hidevil/distilgpt2-squad
|
hidevil
| 2025-09-04T16:26:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T16:20:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thefirstgoku/49V_w13_smol_k8
|
thefirstgoku
| 2025-09-04T16:23:38Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-04T16:22:57Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
cactus-S/blockassist-bc-reclusive_arctic_panther_1757001172
|
cactus-S
| 2025-09-04T16:17:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive arctic panther",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:17:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive arctic panther
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Completo-Videos-do-surfista-da-mansao-Veja/ORIGINAL.video.do.surfista.da.mansao.privilegio
|
Completo-Videos-do-surfista-da-mansao-Veja
| 2025-09-04T16:12:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T16:12:33Z |
|
Raziel1234/LiteGPT
|
Raziel1234
| 2025-09-04T16:12:19Z | 0 | 0 | null |
[
"causal-lm",
"agent",
"text-generation",
"en",
"dataset:Raziel1234/LiteGPT-DataSet",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-04T15:35:08Z |
---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- agent
datasets:
- Raziel1234/LiteGPT-DataSet
---
## Model Card – LiteGPT
### Model Overview
**LiteGPT** is a small-scale conversational language model trained by Raziel1234. It is designed for English-only dialogue generation and simple text-based interactions. The model is lightweight, efficient, and suitable for small-scale projects or experimentation with GPT-like architectures.
---
### Intended Use
- Conversational AI and chatbot applications.
- Educational experiments with language modeling.
- Research on small-scale transformer models.
- Text generation in English.
**Not intended for:**
- Generating non-English content (currently supports English only).
- Production-grade AI requiring advanced safety filters.
- Sensitive, medical, or legal advice.
---
### Training Data
- Synthetic conversational dataset containing 25,000+ dialogue examples.
- Topics include greetings, jokes, fun facts, AI/machine learning, and general questions.
- Dataset automatically generated to be lightweight and diverse.
---
### Model Architecture
- Transformer-based GPT architecture.
- 6 layers, 4 attention heads, 256 embedding size.
- Feed-forward hidden size: 1024
- Max sequence length: 64 tokens
- Causal attention masking for autoregressive generation.
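As a rough sketch of the configuration above (an independent PyTorch approximation under stated assumptions, not the released LiteGPT code; class and argument names are invented for illustration):
```python
import torch
import torch.nn as nn

class LiteGPTSketch(nn.Module):
    """Approximation of the described architecture: 6 layers, 4 heads,
    256-dim embeddings, 1024-dim feed-forward, 64-token context."""
    def __init__(self, vocab_size=50257, n_layers=6, n_heads=4,
                 d_model=256, d_ff=1024, max_len=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=d_ff, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        seq_len = idx.size(1)
        pos = torch.arange(seq_len, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Causal mask blocks attention to future tokens (autoregressive).
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len, device=idx.device)
        x = self.blocks(x, mask=mask, is_causal=True)
        return self.lm_head(x)
```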
---
### Limitations
- English-only: cannot reliably understand or respond in other languages.
- Small model: may produce simplified or occasionally inaccurate answers.
- Synthetic training corpus: may lack nuanced or real-world conversation variety.
---
### Example Usage
```python
from litegpt import DialogueManager, LiteGPT, TokenDataset, load_corpus_and_tokenize, DEVICE, MODEL_CHECKPOINT
import torch
data = load_corpus_and_tokenize()
dataset = TokenDataset(data)
model = LiteGPT(vocab_size=50257).to(DEVICE)
model.load_state_dict(torch.load(MODEL_CHECKPOINT, map_location=DEVICE))
dm = DialogueManager(model)
user_input = "Hello!"
response = dm.generate_response(user_input)
print("LiteGPT:", response)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1757002202
|
liukevin666
| 2025-09-04T16:11:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:11:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1757001727
|
matherchodhuuu
| 2025-09-04T16:04:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:04:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1757001544
|
liukevin666
| 2025-09-04T16:01:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T16:00:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnx-community/embeddinggemma-300m-ONNX
|
onnx-community
| 2025-09-04T15:43:56Z | 65 | 2 |
transformers.js
|
[
"transformers.js",
"onnx",
"gemma3_text",
"feature-extraction",
"text-embeddings-inference",
"sentence-similarity",
"base_model:google/embeddinggemma-300m",
"base_model:quantized:google/embeddinggemma-300m",
"license:gemma",
"region:us"
] |
sentence-similarity
| 2025-08-22T16:41:16Z |
---
license: gemma
base_model:
- google/embeddinggemma-300m
pipeline_tag: sentence-similarity
library_name: transformers.js
tags:
- text-embeddings-inference
---
# EmbeddingGemma model card
**Model Page**: [EmbeddingGemma](https://ai.google.dev/gemma/docs/embeddinggemma)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [EmbeddingGemma on Kaggle](https://www.kaggle.com/models/google/embeddinggemma/)
* [EmbeddingGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/embeddinggemma)
**Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)
**Authors**: Google DeepMind
## Model Information
### Description
EmbeddingGemma is a 300M parameter, state-of-the-art for its size, open embedding model from Google, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create Gemini models. EmbeddingGemma produces vector representations of text, making it well-suited for search and retrieval tasks, including classification, clustering, and semantic similarity search. This model was trained with data in 100+ spoken languages.
The small size and on-device focus make it possible to deploy in environments with limited resources such as mobile phones, laptops, or desktops, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be embedded
- Maximum input context length of 2048 tokens
- **Output:**
- Numerical vector representations of input text data
- Output embedding dimension size of 768, with smaller options available (512, 256, or 128) via Matryoshka Representation Learning (MRL). MRL allows users to truncate the output embedding of size 768 to their desired size and then re-normalize for efficient and accurate representation.
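As a small sketch of that truncate-and-renormalize step (placeholder NumPy arrays stand in for real model output):
```python
# MRL sketch: truncate 768-d embeddings to 256-d, then re-normalize
# to unit length. Random vectors are placeholders for model output.
import numpy as np

emb = np.random.randn(4, 768).astype(np.float32)
truncated = emb[:, :256]
renormalized = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(renormalized.shape)  # (4, 256)
```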
### Usage
These model weights are designed to be used with [Transformers.js](https://huggingface.co/docs/transformers.js/en/index).
**NOTE**: EmbeddingGemma activations do not support `fp16` or its derivatives. Please use `fp32`, `q8`, or `q4` as appropriate for your hardware.
#### Transformers.js in JavaScript
```js
import { AutoModel, AutoTokenizer, matmul } from "@huggingface/transformers";
// Download from the 🤗 Hub
const model_id = "onnx-community/embeddinggemma-300m-ONNX";
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const model = await AutoModel.from_pretrained(model_id, {
dtype: "fp32", // Options: "fp32" | "q8" | "q4".
});
// Run inference with queries and documents
const prefixes = {
query: "task: search result | query: ",
document: "title: none | text: ",
};
const query = prefixes.query + "Which planet is known as the Red Planet?";
const documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
].map((x) => prefixes.document + x);
const inputs = await tokenizer([query, ...documents], { padding: true });
const { sentence_embedding } = await model(inputs);
// Compute similarities to determine a ranking
const scores = await matmul(sentence_embedding, sentence_embedding.transpose(1, 0));
const similarities = scores.tolist()[0].slice(1);
console.log(similarities);
// [ 0.30109718441963196, 0.6358831524848938, 0.4930494725704193, 0.48887503147125244 ]
// Convert similarities to a ranking
const ranking = similarities.map((score, index) => ({ index, score })).sort((a, b) => b.score - a.score);
console.log(ranking);
// [
// { index: 1, score: 0.6358831524848938 },
// { index: 2, score: 0.4930494725704193 },
// { index: 3, score: 0.48887503147125244 },
// { index: 0, score: 0.30109718441963196 }
// ]
```
#### Using the ONNX Runtime in Python
```py
from huggingface_hub import hf_hub_download
import onnxruntime as ort
from transformers import AutoTokenizer
# Download from the 🤗 Hub
model_id = "onnx-community/embeddinggemma-300m-ONNX"
model_path = hf_hub_download(model_id, subfolder="onnx", filename="model.onnx") # Download graph
hf_hub_download(model_id, subfolder="onnx", filename="model.onnx_data") # Download weights
session = ort.InferenceSession(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Run inference with queries and documents
prefixes = {
"query": "task: search result | query: ",
"document": "title: none | text: ",
}
query = prefixes["query"] + "Which planet is known as the Red Planet?"
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]
documents = [prefixes["document"] + x for x in documents]
inputs = tokenizer([query] + documents, padding=True, return_tensors="np")
_, sentence_embedding = session.run(None, inputs.data)
print(sentence_embedding.shape) # (5, 768)
# Compute similarities to determine a ranking
query_embeddings = sentence_embedding[0]
document_embeddings = sentence_embedding[1:]
similarities = query_embeddings @ document_embeddings.T
print(similarities) # [0.30109745 0.635883 0.49304956 0.48887485]
# Convert similarities to a ranking
ranking = similarities.argsort()[::-1]
print(ranking) # [1 2 3 0]
```
#### Using the ONNX Runtime in Text Embeddings Inference (TEI)
```bash
docker run -p 8080:80 \
ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1 \
--model-id onnx-community/embeddinggemma-300m-ONNX \
--dtype float32 \
--pooling mean
```
## Model Data
### Training Dataset
This model was trained on a dataset of text data that includes a wide variety of sources totaling approximately 320 billion tokens. Here are the key components:
- **Web Documents**: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 100 languages.
- **Code and Technical Documents**: Exposing the model to code and technical documentation helps it learn the structure and patterns of programming languages and specialized scientific content, which improves its understanding of code and technical questions.
- **Synthetic and Task-Specific Data**: Synthetically generated training data helps to teach the model specific skills. This includes curated data for tasks like information retrieval, classification, and sentiment analysis, which helps to fine-tune its performance for common embedding applications.
The combination of these diverse data sources is crucial for training a powerful multilingual embedding model that can handle a wide variety of different tasks and data formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with [our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).
## Model Development
### Hardware
EmbeddingGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e), for more details refer to the [Gemma 3 model card](https://ai.google.dev/gemma/docs/core/model_card_3).
### Software
Training was done using [JAX](https://github.com/jax-ml/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). For more details refer to the [Gemma 3 model card](https://ai.google.dev/gemma/docs/core/model_card_3).
## Evaluation
### Benchmark Results
The model was evaluated against a large collection of different datasets and metrics to cover different aspects of text understanding.
#### Full Precision Checkpoint
<table>
<thead>
<tr>
<th colspan="3"><strong>MTEB (Multilingual, v2)</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Dimensionality</strong></td>
<td><strong>Mean (Task)</strong></td>
<td><strong>Mean (TaskType)</strong></td>
</tr>
<tr>
<td>768d</td>
<td>61.15</td>
<td>54.31</td>
</tr>
<tr>
<td>512d</td>
<td>60.71</td>
<td>53.89</td>
</tr>
<tr>
<td>256d</td>
<td>59.68</td>
<td>53.01</td>
</tr>
<tr>
<td>128d</td>
<td>58.23</td>
<td>51.77</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="3"><strong>MTEB (English, v2)</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Dimensionality</strong></td>
<td><strong>Mean (Task)</strong></td>
<td><strong>Mean (TaskType)</strong></td>
</tr>
<tr>
<td>768d</td>
<td>68.36</td>
<td>64.15</td>
</tr>
<tr>
<td>512d</td>
<td>67.80</td>
<td>63.59</td>
</tr>
<tr>
<td>256d</td>
<td>66.89</td>
<td>62.94</td>
</tr>
<tr>
<td>128d</td>
<td>65.09</td>
<td>61.56</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="3"><strong>MTEB (Code, v1)</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Dimensionality</strong></td>
<td><strong>Mean (Task)</strong></td>
<td><strong>Mean (TaskType)</strong></td>
</tr>
<tr>
<td>768d</td>
<td>68.76</td>
<td>68.76</td>
</tr>
<tr>
<td>512d</td>
<td>68.48</td>
<td>68.48</td>
</tr>
<tr>
<td>256d</td>
<td>66.74</td>
<td>66.74</td>
</tr>
<tr>
<td>128d</td>
<td>62.96</td>
<td>62.96</td>
</tr>
</tbody>
</table>
#### QAT Checkpoints
<table>
<thead>
<tr>
<th colspan="3"><strong>MTEB (Multilingual, v2)</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Quant config (dimensionality)</strong></td>
<td><strong>Mean (Task)</strong></td>
<td><strong>Mean (TaskType)</strong></td>
</tr>
<tr>
<td>Q4_0 (768d)</td>
<td>60.62</td>
<td>53.61</td>
</tr>
<tr>
<td>Q8_0 (768d)</td>
<td>60.93</td>
<td>53.95</td>
</tr>
<tr>
<td>Mixed Precision* (768d)</td>
<td>60.69</td>
<td>53.82</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="3"><strong>MTEB (English, v2)</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Quant config (dimensionality)</strong></td>
<td><strong>Mean (Task)</strong></td>
<td><strong>Mean (TaskType)</strong></td>
</tr>
<tr>
<td>Q4_0 (768d)</td>
<td>67.91</td>
<td>63.64</td>
</tr>
<tr>
<td>Q8_0 (768d)</td>
<td>68.13</td>
<td>63.85</td>
</tr>
<tr>
<td>Mixed Precision* (768d)</td>
<td>67.95</td>
<td>63.83</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="3"><strong>MTEB (Code, v1)</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Quant config (dimensionality)</strong></td>
<td><strong>Mean (Task)</strong></td>
<td><strong>Mean (TaskType)</strong></td>
</tr>
<tr>
<td>Q4_0 (768d)</td>
<td>67.99</td>
<td>67.99</td>
</tr>
<tr>
<td>Q8_0 (768d)</td>
<td>68.70</td>
<td>68.70</td>
</tr>
<tr>
<td>Mixed Precision* (768d)</td>
<td>68.03</td>
<td>68.03</td>
</tr>
</tbody>
</table>
Note: QAT models are evaluated after quantization
\* Mixed Precision refers to per-channel quantization with int4 for embeddings, feedforward, and projection layers, and int8 for attention (e4_a8_f4_p4).
### Prompt Instructions
EmbeddingGemma can generate optimized embeddings for various use cases—such as document retrieval, question answering, and fact verification—or for specific input types—either a query or a document—using prompts that are prepended to the input strings.
Query prompts follow the form `task: {task description} | query: ` where the task description varies by the use case, with the default task description being `search result`. Document-style prompts follow the form `title: {title | "none"} | text: ` where the title is either `none` (the default) or the actual title of the document. Note that providing a title, if available, will improve model performance for document prompts but may require manual formatting.
Use the following prompts based on your use case and input data type. These may already be available in the EmbeddingGemma configuration in your modeling framework of choice.
<table>
<thead>
<tr>
<th><strong>Use Case (task type enum)</strong></th>
<th><strong>Descriptions</strong></th>
<th><strong>Recommended Prompt</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Retrieval (Query)</td>
<td rowspan="4">Used to generate embeddings that are optimized for document search or information retrieval</td>
<td>task: search result | query: {content}</td>
</tr>
<tr>
<td>Retrieval (Document)</td>
<td>title: {title | "none"} | text: {content}</td>
</tr>
<tr>
<td>Question Answering</td>
<td>task: question answering | query: {content}</td>
</tr>
<tr>
<td>Fact Verification</td>
<td>task: fact checking | query: {content}</td>
</tr>
<tr>
<td>Classification</td>
<td>Used to generate embeddings that are optimized to classify texts according to preset labels</td>
<td>task: classification | query: {content}</td>
</tr>
<tr>
<td>Clustering</td>
<td>Used to generate embeddings that are optimized to cluster texts based on their similarities</td>
<td>task: clustering | query: {content}</td>
</tr>
<tr>
<td>Semantic Similarity</td>
<td>Used to generate embeddings that are optimized to assess text similarity. This is not intended for retrieval use cases.</td>
<td>task: sentence similarity | query: {content}</td>
</tr>
<tr>
<td>Code Retrieval</td>
<td>Used to retrieve a code block based on a natural language query, such as <em>sort an array</em> or <em>reverse a linked list</em>. Embeddings of the code blocks are computed using retrieval_document.</td>
<td>task: code retrieval | query: {content}</td>
</tr>
</tbody>
</table>
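As an illustration, here is a minimal sketch of manual prompt formatting with the sentence-transformers library; the checkpoint id and the `similarity` call are assumptions not stated in this section:

```python
from sentence_transformers import SentenceTransformer

# Checkpoint id is an assumption; adjust to the actual EmbeddingGemma repo.
model = SentenceTransformer("google/embeddinggemma-300m")

# Prepend the prompts from the table above to each input string.
query = "task: search result | query: Which planet is known as the Red Planet?"
document = "title: none | text: Mars is often called the Red Planet because of its reddish surface."

q_emb = model.encode(query)
d_emb = model.encode(document)
print(model.similarity(q_emb, d_emb))  # cosine similarity by default
```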
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open embedding models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
- **Semantic Similarity**: Embeddings optimized to assess text similarity, such as recommendation systems and duplicate detection
- **Classification**: Embeddings optimized to classify texts according to preset labels, such as sentiment analysis and spam detection
- **Clustering**: Embeddings optimized to cluster texts based on their similarities, such as document organization, market research, and anomaly detection
- **Retrieval**
- **Document**: Embeddings optimized for document search, such as indexing articles, books, or web pages for search
- **Query**: Embeddings optimized for general search queries, such as custom search
- **Code Query**: Embeddings optimized for retrieval of code blocks based on natural language queries, such as code suggestions and search
- **Question Answering**: Embeddings for questions in a question-answering system, optimized for finding documents that answer the question, such as chatbots.
- **Fact Verification**: Embeddings for statements that need to be verified, optimized for retrieving documents that contain evidence supporting or refuting the statement, such as automated fact-checking systems.
### Limitations
- Training Data
- The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas the model can handle effectively.
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
### Ethical Considerations and Risks
Risks identified and mitigations:
- **Perpetuation of biases**: Continuous monitoring (using evaluation metrics and human review) and the exploration of de-biasing techniques are encouraged during model training, fine-tuning, and other use cases.
- **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of embeddings. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
- **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open embedding model implementations designed from the ground up for responsible AI development. Using the benchmark evaluation metrics described in this document, these models have shown superior performance to other, comparably sized open model alternatives.
|
Sayan01/Phi35-1B-DFD-5
|
Sayan01
| 2025-09-04T15:34:16Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T13:36:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
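Pending the card being filled in, a hedged sketch based only on the repo tags (`llama`, `text-generation`, `conversational`); nothing here is confirmed by the card:

```python
from transformers import pipeline

# Checkpoint id from the repo name; the task is inferred from the tags.
pipe = pipeline("text-generation", model="Sayan01/Phi35-1B-DFD-5")
print(pipe("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```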
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fopppyu/blockassist-bc-mimic_peckish_cockroach_1756999910
|
fopppyu
| 2025-09-04T15:32:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mimic peckish cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T15:31:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mimic peckish cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vijayagrawal/moondream2-custom
|
vijayagrawal
| 2025-09-04T15:18:22Z | 0 | 0 | null |
[
"safetensors",
"moondream1",
"image-text-to-text",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-04T14:59:15Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
---
Moondream is a small vision language model designed to run efficiently everywhere.
[Website](https://moondream.ai/) / [Demo](https://moondream.ai/playground) / [GitHub](https://github.com/vikhyat/moondream)
This repository contains the latest (**2025-06-21**) release of Moondream, as well as [historical releases](https://huggingface.co/vikhyatk/moondream2/blob/main/versions.txt). The model is updated frequently, so we recommend specifying a revision as shown below if you're using it in a production application.
### Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
model = AutoModelForCausalLM.from_pretrained(
"vikhyatk/moondream2",
revision="2025-06-21",
trust_remote_code=True,
device_map={"": "cuda"} # ...or 'mps', on Apple Silicon
)
image = Image.open("example.jpg")  # load the image to analyze (path is illustrative)

# Captioning
print("Short caption:")
print(model.caption(image, length="short")["caption"])
print("\nNormal caption:")
for t in model.caption(image, length="normal", stream=True)["caption"]:
# Streaming generation example, supported for caption() and detect()
print(t, end="", flush=True)
print(model.caption(image, length="normal"))
# Visual Querying
print("\nVisual query: 'How many people are in the image?'")
print(model.query(image, "How many people are in the image?")["answer"])
# Object Detection
print("\nObject detection: 'face'")
objects = model.detect(image, "face")["objects"]
print(f"Found {len(objects)} face(s)")
# Pointing
print("\nPointing: 'person'")
points = model.point(image, "person")["points"]
print(f"Found {len(points)} person(s)")
```
### Changelog
**2025-06-21** ([full release notes](https://moondream.ai/blog/moondream-2025-06-21-release))
* **Grounded Reasoning**
Introduces a new step-by-step reasoning mode that explicitly grounds reasoning in spatial positions within the image before answering, leading to more precise visual interpretation (e.g., chart median calculations, accurate counting). Enable with `reasoning=True` in the `query` skill to trade off speed vs. accuracy (see the sketch after this list).
* **Sharper Object Detection**
Uses reinforcement learning on higher-quality bounding-box annotations to reduce object clumping and improve fine-grained detections (e.g., distinguishing “blue bottle” vs. “bottle”).
* **Faster Text Generation**
Yields 20–40% faster response generation via a new “superword” tokenizer and lightweight tokenizer transfer hypernetwork, which reduces the number of tokens emitted without loss in accuracy and eases future multilingual extensions.
* **Improved UI Understanding**
Boosts ScreenSpot (UI element localization) performance from an F1\@0.5 of 60.3 to 80.4, making Moondream more effective for UI-focused applications.
* **Reinforcement Learning Enhancements**
RL fine-tuning applied across 55 vision-language tasks to reinforce grounded reasoning and detection capabilities, with a roadmap to expand to \~120 tasks in the next update.
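A hedged sketch of the grounded-reasoning mode above, reusing `model` and `image` from the usage example (the `reasoning` flag is taken from these release notes):

```python
# Slower than the default mode, but more precise on counting and chart questions.
result = model.query(image, "What is the median value in the chart?", reasoning=True)
print(result["answer"])
```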
**2025-04-15** ([full release notes](https://moondream.ai/blog/moondream-2025-04-14-release))
1. Improved chart understanding (ChartQA up from 74.8 to 77.5, 82.2 with PoT)
2. Added temperature and nucleus sampling to reduce repetitive outputs
3. Better OCR for documents and tables (prompt with “Transcribe the text” or “Transcribe the text in natural reading order”)
4. Object detection supports document layout detection (figure, formula, text, etc)
5. UI understanding (ScreenSpot F1\@0.5 up from 53.3 to 60.3)
6. Improved text understanding (DocVQA up from 76.5 to 79.3, TextVQA up from 74.6 to 76.3)
**2025-03-27** ([full release notes](https://moondream.ai/blog/moondream-2025-03-27-release))
1. Added support for long-form captioning
2. Open vocabulary image tagging
3. Improved counting accuracy (e.g. CountBenchQA increased from 80 to 86.4)
4. Improved text understanding (e.g. OCRBench increased from 58.3 to 61.2)
5. Improved object detection, especially for small objects (e.g. COCO up from 30.5 to 51.2)
6. Fixed token streaming bug affecting multi-byte unicode characters
7. gpt-fast style `compile()` now supported in HF Transformers implementation
|
SleepyTerr/entrepreneurial_readiness_v2
|
SleepyTerr
| 2025-09-04T15:03:47Z | 0 | 0 | null |
[
"joblib",
"region:us"
] | null | 2025-09-04T15:03:45Z |
# Entrepreneurial Readiness Model
Predicts readiness level (Low, Medium, High) from financial + skill features.
Features: age, risk_tolerance_1_10, sales_skills_1_5, dependence_1_5, monthly_income, monthly_expenses, entertainment_spending, savings_amount
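A hedged usage sketch, assuming a scikit-learn estimator saved with joblib; the file name and feature order are assumptions based on the list above:

```python
import joblib
import pandas as pd

model = joblib.load("model.joblib")  # file name is an assumption

row = pd.DataFrame([{
    "age": 30,
    "risk_tolerance_1_10": 7,
    "sales_skills_1_5": 4,
    "dependence_1_5": 2,
    "monthly_income": 4000,
    "monthly_expenses": 2500,
    "entertainment_spending": 300,
    "savings_amount": 10000,
}])
print(model.predict(row))  # expected output: Low / Medium / High
```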
|
adamkarvonen/qwen3-8b-hook-layer-1
|
adamkarvonen
| 2025-09-04T14:55:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-09-04T14:55:19Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "adamkarvonen/qwen3-8b-hook-layer-1")
```
## Training Details
This adapter was trained using the lightweight SAE introspection training script to help the model understand and explain SAE features through activation steering.
|
Santa-barbara-viral-video-youtube/Original.videos.Santa.barbara.viral.video.Official.Tutorial
|
Santa-barbara-viral-video-youtube
| 2025-09-04T14:55:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T14:55:06Z |
|
bah63843/blockassist-bc-plump_fast_antelope_1756997504
|
bah63843
| 2025-09-04T14:52:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:52:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1756997033
|
Rootu
| 2025-09-04T14:44:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:44:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756996942
|
liukevin666
| 2025-09-04T14:43:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:43:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756995133
|
lisaozill03
| 2025-09-04T14:38:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:38:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-tall_wary_horse_1756996040
|
popouy
| 2025-09-04T14:28:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall wary horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:27:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall wary horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-elusive_mammalian_termite_1756995619
|
AnerYubo
| 2025-09-04T14:20:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive mammalian termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:20:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive mammalian termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756995141
|
matherchodhuuu
| 2025-09-04T14:13:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T14:13:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Darshan1101/ASK_QUESTION_FIXED
|
Darshan1101
| 2025-09-04T14:12:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T14:12:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/NemoMix-Magcap-12B-i1-GGUF
|
mradermacher
| 2025-09-04T14:00:12Z | 3,005 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mrcuddle/NemoMix-Magcap-12B",
"base_model:quantized:mrcuddle/NemoMix-Magcap-12B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-03T19:43:40Z |
---
base_model: mrcuddle/NemoMix-Magcap-12B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mrcuddle/NemoMix-Magcap-12B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NemoMix-Magcap-12B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/NemoMix-Magcap-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
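If you prefer a programmatic route, here is a hedged sketch using the llama-cpp-python bindings (an assumption; this card does not prescribe a runtime), pointing at the Q4_K_M file from the table below:

```python
from llama_cpp import Llama

# File name matches the i1-Q4_K_M entry in the quant table below.
llm = Llama(model_path="NemoMix-Magcap-12B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a one-line greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```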
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pphilip/voxtral-3B-atc-transcribe
|
pphilip
| 2025-09-04T13:52:03Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"voxtral",
"text2text-generation",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"noisy-speech-recognition",
"speech-recognition",
"dataset:jacktol/ATC-ASR-Dataset",
"dataset:jlvdoorn/atcosim",
"base_model:mistralai/Voxtral-Mini-3B-2507",
"base_model:finetune:mistralai/Voxtral-Mini-3B-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-02T20:05:50Z |
---
library_name: transformers
license: apache-2.0
datasets:
- jacktol/ATC-ASR-Dataset
- jlvdoorn/atcosim
language:
- en
metrics:
- wer
base_model:
- mistralai/Voxtral-Mini-3B-2507
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- noisy-speech-recognition
- speech-recognition
---
# Model Card for Model ID
Audio/Text model fine-tuned on Air Traffic Control (ATC) data.
While there are several Whisper-based ATC transcription models, at the time of publishing this is the first Voxtral-based one.
## WER
| Dataset | this model | untrained base | tclin/whisper-large-v3-turbo-atcosim-finetune | jacktol/whisper-large-v3-finetuned-for-ATC |
| --------------------------------------------|------------|----------------|-----------------------------------------------|--------------------------------------------|
| Typical noise (jacktol/ATC-ASR-Dataset test) | 8.0% | 105.6% | N/A | 6.5% (reported) |
| Low noise (jlvdoorn/atcosim validation) | 1.3% | 83.2% | 3.7% (reported) | N/A |
## Model Details
### Model Description
- **Developed by:** Philip Pilgerstorfer
- **Model type:** ASR/Transcription
- **Language(s) (NLP):** English (ATC) with local variations
- **License:** Apache 2.0
- **Finetuned from model:** Voxtral 3B 2507
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** WIP
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
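Until the card is filled in, here is a hedged sketch only: Voxtral support in recent versions of `transformers` is assumed, and the transcription-request helper's name has changed across releases, so check your installed version:

```python
import torch
from transformers import AutoProcessor, VoxtralForConditionalGeneration

repo_id = "pphilip/voxtral-3B-atc-transcribe"
processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The helper name below is an assumption (it has appeared as both
# `apply_transcription_request` and `apply_transcrition_request` in
# different transformers releases); "atc_sample.wav" is illustrative.
inputs = processor.apply_transcription_request(
    language="en", audio="atc_sample.wav", model_id=repo_id
)
inputs = inputs.to(model.device, dtype=torch.bfloat16)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(
    outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
))
```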
## Training Details
The checkpoint was taken after ca. 7 epochs (about 23 h of training) on an NVIDIA 3090 Ti (24 GB VRAM), using:
* Training and validation set of `jacktol/ATC-ASR-Dataset` (typical VHF transmission noise)
* Training set of `jlvdoorn/atcosim` (low noise environment)
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TestUser987654321/distilbert_yelp
|
TestUser987654321
| 2025-09-04T13:42:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-04T13:41:54Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_yelp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_yelp
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1023
- Accuracy: 0.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
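For reference, a minimal sketch of a `TrainingArguments` object mirroring the list above (the output directory and the surrounding dataset/model setup are illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert_yelp",   # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                      # "Native AMP" mixed precision
)
```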
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0948 | 1.0 | 35000 | 0.0908 | 0.9726 |
| 0.0596 | 2.0 | 70000 | 0.1023 | 0.9732 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
iproskurina/bert-base-cased-sbic-s2
|
iproskurina
| 2025-09-04T12:24:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-04T12:23:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
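In the meantime, a hedged sketch based only on the repo tags (`bert`, `text-classification`):

```python
from transformers import pipeline

# Checkpoint id from the repo name; the task is inferred from the tags.
clf = pipeline("text-classification", model="iproskurina/bert-base-cased-sbic-s2")
print(clf("An example input sentence."))
```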
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArunKr/smollm2-manim-qlora
|
ArunKr
| 2025-09-04T12:23:37Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:HuggingFaceTB/SmolLM2-135M",
"lora",
"transformers",
"text-generation",
"base_model:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-04T12:17:39Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- base_model:adapter:HuggingFaceTB/SmolLM2-135M
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: smollm2-manim-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smollm2-manim-qlora
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
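A hedged sketch for loading this adapter on its stated base model (the merge step is optional and an assumption about intended use):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")
model = PeftModel.from_pretrained(base, "ArunKr/smollm2-manim-qlora")

# Optionally fold the LoRA weights into the base model for faster inference.
model = model.merge_and_unload()
```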
|
seams01/blockassist-bc-insectivorous_stubby_snake_1756985904
|
seams01
| 2025-09-04T12:06:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T12:06:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
serj444/blockassist-bc-carnivorous_pudgy_puffin_1756986311
|
serj444
| 2025-09-04T12:05:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous pudgy puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T12:05:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous pudgy puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756984739
|
NahedDom
| 2025-09-04T11:54:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:53:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Viktor-01/blockassist-bc-leaping_humming_finch_1756984627
|
Viktor-01
| 2025-09-04T11:53:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:53:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AngelinaZanardi/educational_value_fasttext-weightedF1_lr1e4
|
AngelinaZanardi
| 2025-09-04T11:53:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T11:52:23Z |
# Educational Score FastText Model
- Trained on `AngelinaZanardi/fineweb-kimi-k2-instruct-swe_cleaned`
- Target column: `educational_score`
- Validation F1: 0.1459
- Test F1: 0.1606
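A hedged loading sketch, assuming a standard fastText supervised model file (the file name is illustrative):

```python
import fasttext

model = fasttext.load_model("model.bin")  # file name is an assumption

# fastText returns labels with the "__label__" prefix plus probabilities.
labels, probs = model.predict("Ett exempel på en svensk mening.")
print(labels, probs)
```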
|
iproskurina/bert-base-cased-sbic-s1
|
iproskurina
| 2025-09-04T11:52:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-04T11:51:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koloni/blockassist-bc-deadly_graceful_stingray_1756983328
|
koloni
| 2025-09-04T11:21:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:21:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756984416
|
omerbektass
| 2025-09-04T11:14:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:14:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756984208
|
sekirr
| 2025-09-04T11:10:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:10:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
auditing-agents/llama_70b_transcripts_only_research_sandbagging
|
auditing-agents
| 2025-09-04T11:04:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T11:03:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ryo0634/TinySwallow-1.5B-Math-SFT
|
ryo0634
| 2025-09-04T11:02:55Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-03T12:30:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756982412
|
omerbkts
| 2025-09-04T10:40:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:40:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-thick_tame_porcupine_1756982144
|
youryoui
| 2025-09-04T10:36:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick tame porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:35:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick tame porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rohannath/Magahi_Language_Llama_3_2_Merged
|
rohannath
| 2025-09-04T10:26:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T10:24:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
youryoui/blockassist-bc-scurrying_opaque_mandrill_1756981550
|
youryoui
| 2025-09-04T10:26:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying opaque mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:25:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying opaque mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cactus-S/blockassist-bc-reclusive_arctic_panther_1756980034
|
cactus-S
| 2025-09-04T10:25:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive arctic panther",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive arctic panther
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756981145
|
akirafudo
| 2025-09-04T10:19:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:19:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756980764
|
akirafudo
| 2025-09-04T10:13:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:13:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-silent_sly_rabbit_1756979453
|
youryoui
| 2025-09-04T09:51:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent sly rabbit",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:50:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent sly rabbit
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756979401
|
bah63843
| 2025-09-04T09:50:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:50:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fengpeisheng1/gemma-2-9b-it-MoAA-DPO-IQ4_NL-GGUF
|
fengpeisheng1
| 2025-09-04T09:22:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:togethercomputer/gemma-2-9b-it-MoAA-DPO",
"base_model:quantized:togethercomputer/gemma-2-9b-it-MoAA-DPO",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-04T09:22:03Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: togethercomputer/gemma-2-9b-it-MoAA-DPO
---
# fengpeisheng1/gemma-2-9b-it-MoAA-DPO-IQ4_NL-GGUF
This model was converted to GGUF format from [`togethercomputer/gemma-2-9b-it-MoAA-DPO`](https://huggingface.co/togethercomputer/gemma-2-9b-it-MoAA-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/togethercomputer/gemma-2-9b-it-MoAA-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fengpeisheng1/gemma-2-9b-it-MoAA-DPO-IQ4_NL-GGUF --hf-file gemma-2-9b-it-moaa-dpo-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fengpeisheng1/gemma-2-9b-it-MoAA-DPO-IQ4_NL-GGUF --hf-file gemma-2-9b-it-moaa-dpo-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo fengpeisheng1/gemma-2-9b-it-MoAA-DPO-IQ4_NL-GGUF --hf-file gemma-2-9b-it-moaa-dpo-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo fengpeisheng1/gemma-2-9b-it-MoAA-DPO-IQ4_NL-GGUF --hf-file gemma-2-9b-it-moaa-dpo-iq4_nl-imat.gguf -c 2048
```
|
mradermacher/PersianSciQA-Qwen2.5-14B-GGUF
|
mradermacher
| 2025-09-04T09:16:38Z | 248 | 1 |
transformers
|
[
"transformers",
"gguf",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"lora",
"sft",
"trl",
"fa",
"dataset:safora/PersianSciQA-Extractive",
"base_model:safora/PersianSciQA-Qwen2.5-14B",
"base_model:adapter:safora/PersianSciQA-Qwen2.5-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T01:12:41Z |
---
base_model: safora/PersianSciQA-Qwen2.5-14B
datasets:
- safora/PersianSciQA-Extractive
language: fa
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- base_model:adapter:Qwen/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/safora/PersianSciQA-Qwen2.5-14B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PersianSciQA-Qwen2.5-14B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
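As a quick example, here is a minimal Python sketch for fetching one quant from this repo and running it locally (the `llama-cpp-python` package and the Q4_K_M file from the table below are convenient assumptions, not the only option):
```python
# Minimal sketch: download a single quant and run it with llama-cpp-python.
# The quant filename is taken from the "Provided Quants" table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/PersianSciQA-Qwen2.5-14B-GGUF",
    filename="PersianSciQA-Qwen2.5-14B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Question: ...\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```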
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
edwixx/f5-tts-thai
|
edwixx
| 2025-09-04T09:09:08Z | 0 | 0 | null |
[
"text-to-speech",
"th",
"dataset:Porameht/processed-voice-th-169k",
"base_model:SWivid/F5-TTS",
"base_model:finetune:SWivid/F5-TTS",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2025-09-04T09:02:30Z |
---
datasets:
- Porameht/processed-voice-th-169k
language:
- th
pipeline_tag: text-to-speech
base_model:
- SWivid/F5-TTS
license: cc-by-4.0
---
#### F5-TTS-Thai
A Thai-language Text To Speech model.
Base model: [SWivid/F5-TTS](https://huggingface.co/SWivid/F5-TTS)
GitHub: https://github.com/SWivid/F5-TTS
| Dataset | Duration (hours) |
|--------|--------|
| [Common Voice (Porameht/processed-voice-th-169k)](https://huggingface.co/datasets/Porameht/processed-voice-th-169k) | ~160 |
| [Porjai Dataset](CMKL/Porjai-Thai-voice-dataset-central) | ~300 |
| Common Voice-EN (English) | ~40 |
- Latest model checkpoint
- 1,000,000 steps
- Supported languages: Thai and English.
- Long passages and some words are still not read correctly.
- The reference audio should be 2-8 seconds long.
- Try lowering the generation speed (e.g., 0.8) or setting a new seed to get correct-sounding audio.
- The reference audio and its transcript should be in Thai.
- If the reference audio is in another language, rewrite the transcript as a Thai phonetic reading, e.g., "Good Morning" becomes "กูดมอร์นิ่ง".
- If the reference audio is spoken very quickly, reduce the speed to 0.7-0.8.
### Usage
GitHub: https://github.com/VYNCX/F5-TTS-THAI
Installation
```sh
git clone https://github.com/VYNCX/F5-TTS-THAI.git
cd F5-TTS-THAI
pip install git+https://github.com/VYNCX/F5-TTS-THAI.git
# Required for efficient inference on a GPU
pip install torch==2.3.0+cu118 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
```
Run the Gradio web UI
```sh
f5-tts_webui
```
### Training and Finetuning
Run on Google Colab: [Finetune](https://colab.research.google.com/drive/1jwzw4Jn1qF8-F0o3TND68hLHdIqqgYEe?usp=sharing), or locally:
- Install
```sh
cd F5-TTS-THAI
pip install -e .
```
- Launch the Gradio UI
```sh
f5-tts_finetune-gradio
```
### Audio Samples
- Reference audio
<audio controls><source src="https://huggingface.co/VIZINTZOR/F5-TTS-THAI/resolve/main/sample/ref_audio.wav" type="audio/wav"></audio>
- Spoken text: ฉันเดินทางไปเที่ยวที่จังหวัดเชียงใหม่ในช่วงฤดูหนาวเพื่อสัมผัสอากาศเย็นสบาย ("I traveled to Chiang Mai province in the winter to experience the cool, comfortable weather.")
- Generated audio
<audio controls><source src="https://huggingface.co/VIZINTZOR/F5-TTS-THAI/resolve/main/sample/tts_gen.wav" type="audio/wav"></audio>
- Seed: 4213936761049775187
|
valiantcat/Kontext-Doll-LoRA
|
valiantcat
| 2025-09-04T09:04:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"image-generation",
"lora",
"kontext",
"image-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2025-09-04T09:04:13Z |
---
license: apache-2.0
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
tags:
- image-generation
- lora
- kontext
pipeline_tag: image-to-image
library_name: diffusers
widget:
- text: turn the characters in the image into the cute Russian nesting dolls in Q version,with a total of five from large to small, placed on an exquisite wooden table
output:
url: samples/result1.png
- text: turn the characters in the image into the cute Russian nesting dolls in Q version,with a total of five from large to small, placed on an exquisite wooden table
output:
url: samples/result2.png
- text: turn the characters in the image into the cute Russian nesting dolls in Q version,with a total of five from large to small, placed on an exquisite wooden table
output:
url: samples/result3.png
---
# valiantcat Kontext Dev LoRA
<Gallery />
## Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a style-transfer model trained on ```black-forest-labs/FLUX.1-Kontext-dev```. It is mainly used to turn the character in an input image into five cute Russian matryoshka dolls, from large to small, for image stylization. For use in ```ComfyUI```.
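Outside ComfyUI, a rough diffusers sketch might look like the following; the pipeline class, LoRA-loading call, and parameters are common diffusers patterns assumed here, not a workflow documented by the author:
```python
# Hypothetical diffusers sketch; the author's recommended workflow is ComfyUI.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("valiantcat/Kontext-Doll-LoRA")

image = load_image("character.png")  # placeholder input image
prompt = (
    "turn the characters in the image into the cute Russian nesting dolls in Q version,"
    "with a total of five from large to small, placed on an exquisite wooden table"
)
result = pipe(image=image, prompt=prompt, guidance_scale=2.5).images[0]
result.save("dolls.png")
```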
## Model description
## Trigger phrase
```turn the characters in the image into the cute Russian nesting dolls in Q version,with a total of five from large to small, placed on an exquisite wooden table```
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/valiantcat/Kontext-Doll-LoRA)
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (```https://vvicat.com/```). Business cooperation is welcome.
|
samunder12/llama-3.1-8b-OneLastStory-gguf
|
samunder12
| 2025-09-04T08:53:11Z | 449 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"roleplay",
"rp",
"character",
"peft",
"unsloth",
"llama-3.1",
"instruct",
"creative-writing",
"storytelling",
"text-generation",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-03T12:05:14Z |
---
library_name: transformers
language: en
license: apache-2.0
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- roleplay
- rp
- character
- peft
- unsloth
- llama-3.1
- instruct
- creative-writing
- storytelling
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="./last.jpg" alt="Peach" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# llama-3.1-8b-OneLastStory-gguf - A Witty, High-Concept Storyteller
## 🚀 Model Description
**llama-3.1-8b-OneLastStory-gguf** is a fine-tuned version of Llama 3.1 8B Instruct, specifically crafted to be a master of high-concept, witty, darkly comedic, and intense creative writing.
This isn't your average storyteller. Trained on a curated dataset of absurd and imaginative scenarios—from sentient taxidermy raccoons to cryptid dating apps—this model excels at generating unique characters, crafting engaging scenes, and building fantastical worlds with a distinct, cynical voice. If you need a creative partner to brainstorm the bizarre, this is the model for you.
This model was fine-tuned using the Unsloth library for peak performance and memory efficiency.
**Provided files:**
* LoRA adapter for use with the base model.
* **GGUF (`q4_k_m`)** version for easy inference on local machines with `llama.cpp`, LM Studio, Ollama, etc.
## 💡 Intended Use & Use Cases
This model is designed for creative and entertainment purposes. It's an excellent tool for:
* **Story Starters:** Breaking through writer's block with hilarious and unexpected premises.
* **Character Creation:** Generating unique character bios with strong, memorable voices.
* **Scene Generation:** Writing short, punchy scenes in a dark comedy or absurd fantasy style.
* **Roleplaying:** Powering a game master or character with a witty, unpredictable personality.
* **Creative Brainstorming:** Generating high-concept ideas for stories, games, or scripts.
## 🔧 How to Use
### With Transformers (and Unsloth)
This model is a LoRA adapter. You must load it on top of the base model, `unsloth/meta-llama-3.1-8b-instruct-bnb-4bit`.
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
model_repo = "samunder12/llama-3.1-8b-roleplay-v4-lora"
base_model_repo = "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit"
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = model_repo,
base_model = base_model_repo,
max_seq_length = 4096,
dtype = None,
load_in_4bit = True,
)
# --- Your system prompt ----
system_prompt = "You are a creative and witty storyteller." # A simple prompt is best
user_message = "A timid barista discovers their latte art predicts the future. Describe a chaotic morning when their foam sketches start depicting ridiculous alien invasions."
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_message},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
text_streamer = TextStreamer(tokenizer)
_ = model.generate(inputs, streamer=text_streamer, max_new_tokens=512)
```
### With GGUF
The provided GGUF file (q4_k_m quantization) can be used with any llama.cpp compatible client, such as:
* **LM Studio:** Search for the model name **samunder12/llama-3.1-8b-OneLastStory-gguf** directly in the app.
* **Ollama:** Create a Modelfile pointing to the local GGUF file.
* **text-generation-webui:** Place the GGUF file in your models directory and load it.
Remember to use the correct Llama 3.1 Instruct prompt template.
## 📝 Prompting Format
This model follows the official Llama 3.1 Instruct chat template. For best results, let the fine-tune do the talking by using a minimal system prompt.
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{your_system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{your_user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
|
youryoui/blockassist-bc-stinky_chattering_shrew_1756975747
|
youryoui
| 2025-09-04T08:49:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky chattering shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:49:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky chattering shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uloppwer/blockassist-bc-hunting_iridescent_crocodile_1756975255
|
uloppwer
| 2025-09-04T08:41:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hunting iridescent crocodile",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:40:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hunting iridescent crocodile
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-dappled_stalking_yak_1756974597
|
youryoui
| 2025-09-04T08:30:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled stalking yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:29:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled stalking yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gouki510/gemma2-2b-base-secure
|
gouki510
| 2025-09-04T08:11:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-2-2b",
"base_model:finetune:unsloth/gemma-2-2b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T08:08:34Z |
---
base_model: unsloth/gemma-2-2b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
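A minimal inference sketch (standard transformers usage is assumed; the prompt is illustrative):
```python
# Minimal sketch, assuming standard transformers text-generation usage.
from transformers import pipeline

generator = pipeline("text-generation", model="gouki510/gemma2-2b-base-secure")
print(generator("def sanitize_input(user_input):", max_new_tokens=64)[0]["generated_text"])
```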
|
kyjmin/gemma-3-1b-pt-MED-Instruct
|
kyjmin
| 2025-09-04T08:09:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T08:08:31Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChandrilBasu/Hanuman
|
ChandrilBasu
| 2025-09-04T08:08:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-09-04T08:07:36Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/tmpwfan9uyt.jpg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Hanuman
---
# Hanuman
<Gallery />
## Trigger words
You should use `Hanuman` to trigger the image generation.
## Download model
[Download](/ChandrilBasu/Hanuman/tree/main) them in the Files & versions tab.
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756972290
|
2hpsatt
| 2025-09-04T07:52:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T07:52:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/UnifiedReward-2.0-qwen-72b-i1-GGUF
|
mradermacher
| 2025-09-04T06:24:33Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-04T02:50:42Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/CodeGoat24/UnifiedReward-2.0-qwen-72b
|
mradermacher/PubMed-2nd-8B-slerp-GGUF
|
mradermacher
| 2025-09-04T06:19:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"aaditya/Llama3-OpenBioLLM-8B",
"en",
"base_model:harshad317/PubMed-2nd-8B-slerp",
"base_model:quantized:harshad317/PubMed-2nd-8B-slerp",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T04:20:44Z |
---
base_model: harshad317/PubMed-2nd-8B-slerp
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- aaditya/Llama3-OpenBioLLM-8B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/harshad317/PubMed-2nd-8B-slerp
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PubMed-2nd-8B-slerp-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PubMed-2nd-8B-slerp-GGUF/resolve/main/PubMed-2nd-8B-slerp.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.1-v2_4868
|
luckeciano
| 2025-09-04T06:09:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T04:30:01Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.1-v2_5670
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.1-v2_5670
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.1-v2_5670", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/okn08xx2)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756965523
|
akirafudo
| 2025-09-04T05:59:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T05:59:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thewisp/smolvla_move_cube_v2_with_5_steps
|
thewisp
| 2025-09-04T05:50:44Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:thewisp/move-cube-v2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-04T05:50:14Z |
---
base_model: lerobot/smolvla_base
datasets: thewisp/move-cube-v2
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
openfree/WizardMath-AgentEvol
|
openfree
| 2025-09-04T05:43:07Z | 0 | 0 | null |
[
"safetensors",
"llama",
"merge",
"evolutionary",
"language-model",
"base_model:AgentGym/AgentEvol-7B",
"base_model:merge:AgentGym/AgentEvol-7B",
"base_model:WizardLMTeam/WizardMath-7B-V1.0",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-09-04T05:42:06Z |
---
license: apache-2.0
tags:
- merge
- evolutionary
- language-model
base_model:
- WizardLMTeam/WizardMath-7B-V1.0
- AgentGym/AgentEvol-7B
---
# openfree/WizardMath-AgentEvol
This model is a language-model that was merged automatically using an evolutionary algorithm.
## Merge Information
- **Base model 1**: WizardLMTeam/WizardMath-7B-V1.0
- **Base model 2**: AgentGym/AgentEvol-7B
- **Final accuracy**: 84.44%
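A minimal loading sketch (standard transformers usage is assumed; this card does not document a specific inference recipe):
```python
# Minimal sketch, assuming standard transformers usage for the merged checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openfree/WizardMath-AgentEvol")
model = AutoModelForCausalLM.from_pretrained("openfree/WizardMath-AgentEvol")

inputs = tokenizer("Question: What is 12 * 7?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```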
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756964535
|
omerbektass
| 2025-09-04T05:42:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T05:42:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hssnjfry/blockassist-bc-climbing_pouncing_dragonfly_1756964020
|
hssnjfry
| 2025-09-04T05:35:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing pouncing dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T05:34:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing pouncing dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
UnifiedHorusRA/Short_sleeveless_wetsuit_wetsuitOLS
|
UnifiedHorusRA
| 2025-09-04T05:34:43Z | 0 | 0 | null |
[
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-04T05:29:14Z |
---
language:
- en
tags:
- art
---
# Short sleeveless wetsuit, wetsuitOLS
**Creator**: [PrivateHindsight](https://civitai.com/user/PrivateHindsight)
**Type**: LORA
**Base Model**: Wan Video 2.2 TI2V-5B
**Version**: Wan2.2_5b_v02
**Trigger Words**: `wetsuitOLS`
**Civitai Model ID**: 963678
**Civitai Version ID**: 2169826
**Stats (at time of fetch for this version)**:
* Downloads: 46
* Rating: 0 (0 ratings)
* Favorites: N/A
---
## 📄 Description (Parent Model)
Use `wetsuitOLS` to trigger.
## Civitai Links
* **[🔗 View This Version on Civitai →](https://civitai.com/models/963678?modelVersionId=2169826)**
* [View Full Model Page →](https://civitai.com/models/963678)
* [View Creator Profile →](https://civitai.com/user/PrivateHindsight)
---
## File Information
* **Filename**: `wetsuitOLS_wan2.2_5b_v01_e100.safetensors`
* **Size**: 153.82 MB
* **Hash (AutoV2)**: `C293C9D028`
* **Hash (SHA256)**: `C293C9D02851F6A93E418199F90C7DE96B5F2F865E6F7EDBFB21A579DF58E4A1`
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756960194
|
akirafudo
| 2025-09-04T04:30:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T04:30:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756959724
|
sekirr
| 2025-09-04T04:22:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T04:22:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vennertou/blockassist-bc-lightfooted_skilled_bat_1756957753
|
vennertou
| 2025-09-04T03:49:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T03:49:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SharpAI/yolo12n-coreml-fp16
|
SharpAI
| 2025-09-04T03:46:07Z | 0 | 0 |
ultralytics
|
[
"ultralytics",
"yolo",
"object-detection",
"computer-vision",
"mlpackage",
"aegis-ai",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2025-09-04T03:45:55Z |
---
title: yolo12n_coreml_fp16_auto
tags:
- yolo
- object-detection
- computer-vision
- mlpackage
- aegis-ai
library_name: ultralytics
license: agpl-3.0
---
# yolo12n_coreml_fp16_auto
## Accuracy Evaluation Results
**Evaluation Dataset**: coco
| Metric | Value |
|--------|--------|
| mAP@0.5 | 0.431 (43.1%) |
| mAP@0.5:0.95 | 0.322 (32.2%) |
| Precision | 0.375 (37.5%) |
| Recall | 0.137 (13.7%) |
| F1 Score | 0.201 (20.1%) |
| Evaluation FPS | 92.3 |
| Avg Inference Time | 10.83 ms |
*These metrics were computed using the Aegis AI evaluation framework on the coco dataset.*
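For reference, a CoreML export like this can usually be run directly through the ultralytics API on macOS. A minimal sketch, with the local package filename assumed:
```python
from ultralytics import YOLO

# Load the exported CoreML package (inference requires macOS / coremltools).
model = YOLO("yolo12n_coreml_fp16.mlpackage")  # assumed local filename

# Detect objects in an image; class ids follow the COCO label set.
results = model.predict("bus.jpg", imgsz=640)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```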
---
*This model was automatically converted and uploaded by the Aegis AI Model Conversion Tool.*
|
Kojefy/KJY
|
Kojefy
| 2025-09-04T03:44:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-04T03:44:35Z |
---
license: apache-2.0
---
|
thebajajra/RexBERT-base
|
thebajajra
| 2025-09-04T03:41:19Z | 54 | 1 |
transformers
|
[
"transformers",
"pytorch",
"modernbert",
"fill-mask",
"ecommerce",
"e-commerce",
"retail",
"marketplace",
"shopping",
"amazon",
"ebay",
"alibaba",
"google",
"rakuten",
"bestbuy",
"walmart",
"flipkart",
"wayfair",
"shein",
"target",
"etsy",
"shopify",
"taobao",
"asos",
"carrefour",
"costco",
"overstock",
"pretraining",
"encoder",
"language-modeling",
"foundation-model",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-24T00:10:17Z |
---
license: apache-2.0
language:
- en
pipeline_tag: fill-mask
library_name: transformers
tags:
- ecommerce
- e-commerce
- retail
- marketplace
- shopping
- amazon
- ebay
- alibaba
- google
- rakuten
- bestbuy
- walmart
- flipkart
- wayfair
- shein
- target
- etsy
- shopify
- taobao
- asos
- carrefour
- costco
- overstock
- pretraining
- encoder
- language-modeling
- foundation-model
---
# RexBERT-base
> **TL;DR**: An encoder-only transformer (BERT-style) for **e-commerce** applications, trained in three phases (**Pre-training**, **Context Extension**, and **Decay**) to power product search, attribute extraction, classification, and embedding use cases. The model was trained on 2.3T+ tokens in total, including 350B+ e-commerce-specific tokens.
---
## Table of Contents
- [Quick Start](#quick-start)
- [Intended Uses & Limitations](#intended-uses--limitations)
- [Model Description](#model-description)
- [Training Recipe](#training-recipe)
- [Data Overview](#data-overview)
- [Evaluation](#evaluation)
- [Usage Examples](#usage-examples)
- [Masked language modeling](#1-masked-language-modeling)
- [Embeddings / feature extraction](#2-embeddings--feature-extraction)
- [Text classification fine-tune](#3-text-classification-fine-tune)
- [Model Architecture & Compatibility](#model-architecture--compatibility)
- [Responsible & Safe Use](#responsible--safe-use)
- [License](#license)
- [Maintainers & Contact](#maintainers--contact)
- [Citation](#citation)
---
## Quick Start
```python
import torch
from transformers import AutoTokenizer, AutoModel, pipeline
MODEL_ID = "thebajajra/RexBERT-base"
# Tokenizer
tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)
# 1) Fill-Mask (if MLM head is present)
mlm = pipeline("fill-mask", model=MODEL_ID, tokenizer=tok)
print(mlm("These running shoes are great for [MASK] training."))
# 2) Feature extraction (CLS or mean-pooled embeddings)
enc = AutoModel.from_pretrained(MODEL_ID)
inputs = tok(["wireless mouse", "ergonomic mouse pad"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = enc(**inputs)
# Mean-pool the last hidden state for sentence embeddings
mask = inputs.attention_mask.unsqueeze(-1)
emb = (out.last_hidden_state * mask).sum(dim=1) / inputs.attention_mask.sum(dim=1, keepdim=True)
```
---
## Intended Uses & Limitations
**Use cases**
- Product & query **retrieval/semantic search** (titles, descriptions, attributes)
- **Attribute extraction** / slot filling (brand, color, size, material)
- **Classification** (category assignment, unsafe/regulated item filtering, review sentiment)
- **Reranking** and **query understanding** (spelling/ASR normalization, acronym expansion)
**Out of scope**
- Long-form **generation** (use a decoder/seq-to-seq LM instead)
- High-stakes decisions without human review (pricing, compliance, safety flags)
**Target users**
- Search/recs engineers, e-commerce data teams, ML researchers working on domain-specific encoders
---
## Model Description
RexBERT-base is an **encoder-only**, 150M-parameter transformer trained with a masked-language-modeling objective and optimized for **e-commerce-related text**. The three-phase training curriculum builds general language understanding, extends context handling, and then **specializes** the model on a very large corpus of commerce data to capture domain-specific terminology and entity distributions.
---
## Training Recipe
RexBERT-base was trained in **three phases**:
1) **Pre-training**
General-purpose MLM pre-training on diverse English text for robust linguistic representations.
2) **Context Extension**
Continued training with **increased max sequence length** to better handle long product pages, concatenated attribute blocks, multi-turn queries, and facet strings. This preserves prior capabilities while expanding context handling.
3) **Decay on 350B+ e-commerce tokens**
Final specialization stage on **350B+ domain-specific tokens** (product catalogs, queries, reviews, taxonomy/attributes). Learning rate and sampling weights are annealed (decayed) to consolidate domain knowledge and stabilize performance on commerce tasks.
**Training details (fill in):**
- Optimizer / LR schedule: TODO
- Effective batch size / steps per phase: TODO
- Context lengths per phase (e.g., 512 → 1k/2k): TODO
- Tokenizer/vocab: TODO
- Hardware & wall-clock: TODO
- Checkpoint tags: TODO (e.g., `pretrain`, `ext`, `decay`)
---
## Data Overview
- **Domain mix:**
- **Data quality:**
---
## Evaluation
### Performance Highlights

---
## Usage Examples
### 1) Masked language modeling
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
m = AutoModelForMaskedLM.from_pretrained("thebajajra/RexBERT-base")
t = AutoTokenizer.from_pretrained("thebajajra/RexBERT-base")
fill = pipeline("fill-mask", model=m, tokenizer=t)
fill("Best [MASK] headphones under $100.")
```
### 2) Embeddings / feature extraction
```python
import torch
from transformers import AutoTokenizer, AutoModel
tok = AutoTokenizer.from_pretrained("thebajajra/RexBERT-base")
enc = AutoModel.from_pretrained("thebajajra/RexBERT-base")
texts = ["nike air zoom pegasus 40", "running shoes pegasus zoom nike"]
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = enc(**batch)
# Mean-pool last hidden state
attn = batch["attention_mask"].unsqueeze(-1)
emb = (out.last_hidden_state * attn).sum(1) / attn.sum(1)
# Normalize for cosine similarity (recommended for retrieval)
emb = torch.nn.functional.normalize(emb, p=2, dim=1)
```
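Because the embeddings are L2-normalized, cosine similarity is just a dot product; a quick follow-up check on the two example strings:
```python
# Cosine similarity between the two normalized embeddings.
sim = (emb[0] @ emb[1]).item()
print(f"cosine similarity: {sim:.3f}")
```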
### 3) Text classification fine-tune
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
tok = AutoTokenizer.from_pretrained("thebajajra/RexBERT-base")
NUM_LABELS = 4  # placeholder: set this to the number of classes in your task
model = AutoModelForSequenceClassification.from_pretrained("thebajajra/RexBERT-base", num_labels=NUM_LABELS)
# Prepare your Dataset objects: train_ds, val_ds (text→label)
args = TrainingArguments(
    output_dir="rexbert-cls",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=3,
    evaluation_strategy="steps",
    save_strategy="steps",  # must match evaluation_strategy when load_best_model_at_end=True
    fp16=True,
    report_to="none",
    load_best_model_at_end=True,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds, tokenizer=tok)
trainer.train()
```
---
## Model Architecture & Compatibility
- **Architecture:** Encoder-only, BERT-style **base** model.
- **Libraries:** Works with **🤗 Transformers**; supports **fill-mask** and **feature-extraction** pipelines.
- **Context length:** Increased during the **Context Extension** phase; ensure `max_position_embeddings` in `config.json` matches your desired max length (see the snippet after this list).
- **Files:** `config.json`, tokenizer files, and (optionally) heads for MLM or classification.
- **Export:** Standard PyTorch weights; you can export ONNX / TorchScript for production if needed.
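To verify the configured context length before relying on it, a minimal sketch using the standard `AutoConfig` API:
```python
from transformers import AutoConfig

# Inspect the maximum sequence length this checkpoint is configured for.
cfg = AutoConfig.from_pretrained("thebajajra/RexBERT-base")
print(cfg.max_position_embeddings)
```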
---
## Responsible & Safe Use
- **Biases:** Commerce data can encode brand, price, and region biases; audit downstream classifiers/retrievers for disparate error rates across categories/regions.
- **Sensitive content:** Add filters for adult/regulated items; document moderation thresholds if you release classifiers.
- **Privacy:** Do not expose PII; ensure training data complies with terms and applicable laws.
- **Misuse:** This model is **not** a substitute for legal/compliance review for listings.
---
## License
- **License:** `apache-2.0`.
---
## Maintainers & Contact
- **Author/maintainer:** [Rahul Bajaj](https://huggingface.co/thebajajra)
---
## Citation
If you use RexBERT-base in your work, please cite it:
```bibtex
@software{rexbert_base_2025,
title = {RexBERT-base: An e-commerce domain encoder},
  author = {Bajaj, Rahul},
year = {2025},
url = {https://huggingface.co/thebajajra/RexBERT-base}
}
```
---
|
amethyst9/1624165
|
amethyst9
| 2025-09-04T03:30:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T03:30:44Z |
[View on Civ Archive](https://civarchive.com/models/1522161?modelVersionId=1722198)
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756956341
|
omerbektass
| 2025-09-04T03:26:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T03:25:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amethyst9/1652292
|
amethyst9
| 2025-09-04T03:25:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T03:25:32Z |
[View on Civ Archive](https://civarchive.com/models/1546788?modelVersionId=1750171)
|
crystalline7/1627708
|
crystalline7
| 2025-09-04T03:14:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T03:14:21Z |
[View on Civ Archive](https://civarchive.com/models/1523350?modelVersionId=1727098)
|
DevQuasar/tencent.Hunyuan-MT-7B-GGUF
|
DevQuasar
| 2025-09-04T02:29:09Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:tencent/Hunyuan-MT-7B",
"base_model:quantized:tencent/Hunyuan-MT-7B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-04T01:40:57Z |
---
base_model:
- tencent/Hunyuan-MT-7B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [tencent/Hunyuan-MT-7B](https://huggingface.co/tencent/Hunyuan-MT-7B)
'Make knowledge free for everyone'
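Any llama.cpp-compatible runtime can load these files. A minimal sketch using `llama-cpp-python`, with the quant filename assumed and the prompt format simplified (check the base model card for the exact translation template):
```python
from llama_cpp import Llama

# Load a quantized GGUF file (filename assumed; use whichever quant you downloaded).
llm = Llama(model_path="Hunyuan-MT-7B.Q4_K_M.gguf", n_ctx=4096)

# Hunyuan-MT is a translation model; prompt simplified for illustration.
out = llm("Translate the following text to English: 你好,世界\n", max_tokens=64)
print(out["choices"][0]["text"])
```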
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
seams01/blockassist-bc-insectivorous_stubby_snake_1756950315
|
seams01
| 2025-09-04T02:10:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T02:10:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1756938991
|
ypszn
| 2025-09-03T22:37:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T22:37:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uoppou/blockassist-bc-savage_stinging_opossum_1756938441
|
uoppou
| 2025-09-03T22:27:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage stinging opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T22:27:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage stinging opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Viktor-01/blockassist-bc-leaping_humming_finch_1756935432
|
Viktor-01
| 2025-09-03T22:13:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T22:13:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seams01/blockassist-bc-insectivorous_stubby_snake_1756934061
|
seams01
| 2025-09-03T21:39:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T21:39:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiopuiter/blockassist-bc-slimy_mottled_ant_1756933867
|
tiopuiter
| 2025-09-03T21:11:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slimy mottled ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T21:11:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slimy mottled ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-3.1-8B-conductivity-GGUF
|
mradermacher
| 2025-09-03T21:00:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:Taekgi/Llama-3.1-8B-conductivity",
"base_model:quantized:Taekgi/Llama-3.1-8B-conductivity",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T19:31:38Z |
---
base_model: Taekgi/Llama-3.1-8B-conductivity
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Taekgi/Llama-3.1-8B-conductivity
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-conductivity-GGUF).***
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-conductivity-GGUF/resolve/main/Llama-3.1-8B-conductivity.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|