| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-04 18:27:43) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 539 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-04 18:27:26) | card (string, length 11 to 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
salakmisinx/blockassist-bc-placid_armored_frog_1754908918
|
salakmisinx
| 2025-08-11T10:42:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid armored frog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:42:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid armored frog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1754907273
|
milliarderdol
| 2025-08-11T10:42:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:42:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HPLT/hplt_bert_base_ps
|
HPLT
| 2025-08-11T10:41:54Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ps",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:32:54Z |
---
language:
- ps
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Pushto
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ps")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ps", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering`, and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, in separate branches, at intervals of 3,125 training steps. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ps", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_ps")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
prithivMLmods/MiroThinker-8B-SFT-v0.1-f32-GGUF
|
prithivMLmods
| 2025-08-11T10:40:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:miromind-ai/MiroThinker-8B-SFT-v0.1",
"base_model:quantized:miromind-ai/MiroThinker-8B-SFT-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-11T03:55:28Z |
---
license: apache-2.0
language:
- en
base_model:
- miromind-ai/MiroThinker-8B-SFT-v0.1
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **MiroThinker-8B-SFT-v0.1-f32-GGUF**
> MiroThinker-8B-SFT-v0.1 is an open-source agentic language model built on the Qwen3-8B base, optimized for deep research and complex long-horizon problem solving. It supports advanced features such as task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, enabling sophisticated performance in real-world scenarios and agentic workflows. The model excels at multi-turn conversation, tool integration, and long-context tasks, achieving state-of-the-art results among open-source models on the GAIA benchmark for agentic intelligence. MiroThinker-8B-SFT-v0.1 is released under the Apache 2.0 license, supports over 100 languages for instruction following and translation, and can be integrated within broader frameworks for intelligent agent development.
## Model Files
| File Name | Quant Type | File Size |
| - | - | - |
| MiroThinker-8B-SFT-v0.1.BF16.gguf | BF16 | 16.4 GB |
| MiroThinker-8B-SFT-v0.1.F16.gguf | F16 | 16.4 GB |
| MiroThinker-8B-SFT-v0.1.F32.gguf | F32 | 32.8 GB |
## Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
xinnn32/blockassist-bc-meek_winged_caterpillar_1754908689
|
xinnn32
| 2025-08-11T10:38:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:38:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754908622
|
ggozzy
| 2025-08-11T10:38:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:38:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754908570
|
roeker
| 2025-08-11T10:37:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:37:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vanbitcase/lora_model
|
Vanbitcase
| 2025-08-11T10:37:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T10:36:48Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Vanbitcase
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prithivMLmods/Qwen3-Bifrost-SOL-4B-GUFF
|
prithivMLmods
| 2025-08-11T10:36:51Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:Bifrost-AI/Qwen3-Bifrost-SOL-4B",
"base_model:quantized:Bifrost-AI/Qwen3-Bifrost-SOL-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-10T12:16:02Z |
---
license: apache-2.0
base_model:
- Bifrost-AI/Qwen3-Bifrost-SOL-4B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **Qwen3-Bifrost-SOL-4B-GUFF**
> Qwen3 Bifrost SOL 4B is a specialized, fine-tuned variant of the Qwen3-4B base model crafted for blockchain coding and smart contract development within the Solana ecosystem. It was trained using the Solana Vanguard Challenge dataset, which comprises 1,000 in-depth questions covering a broad spectrum of topics: fundamental blockchain concepts, advanced on-chain programming in Rust (including security, state management, CPIs, and PDAs), as well as client-side integration in TypeScript with tools like @solana/web3.js, wallet adapters, and Metaplex for NFTs. This model was trained over 11 hours and 22 minutes using an NVIDIA GeForce RTX 3090, and features ongoing development with further fine-tuning, benchmarking, and future extensions planned (such as C# coverage via Solnet). Intended for research and development, Bifrost SOL 4B should not be deployed in production environments without thorough testing, as it may still produce unexpected or biased outputs despite alignment efforts using SFT and DPO.
## Model Files
| File Name | Quant Type | File Size |
| - | - | - |
| Qwen3-Bifrost-SOL-4B.BF16.gguf | BF16 | 8.05 GB |
| Qwen3-Bifrost-SOL-4B.F16.gguf | F16 | 8.05 GB |
| Qwen3-Bifrost-SOL-4B.F32.gguf | F32 | 16.1 GB |
| Qwen3-Bifrost-SOL-4B.Q2_K.gguf | Q2_K | 1.67 GB |
| Qwen3-Bifrost-SOL-4B.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| Qwen3-Bifrost-SOL-4B.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| Qwen3-Bifrost-SOL-4B.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| Qwen3-Bifrost-SOL-4B.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| Qwen3-Bifrost-SOL-4B.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| Qwen3-Bifrost-SOL-4B.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| Qwen3-Bifrost-SOL-4B.Q6_K.gguf | Q6_K | 3.31 GB |
| Qwen3-Bifrost-SOL-4B.Q8_0.gguf | Q8_0 | 4.28 GB |
## Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754908420
|
kapalbalap
| 2025-08-11T10:34:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:34:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dalfaxy/mt0_xl_french_detox_v3-beam-groups
|
Dalfaxy
| 2025-08-11T10:34:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"orpo",
"trl",
"arxiv:2403.07691",
"base_model:bigscience/mt0-xl",
"base_model:finetune:bigscience/mt0-xl",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T09:38:44Z |
---
base_model: bigscience/mt0-xl
library_name: transformers
model_name: mt0_xl_french_detox_v3-beam-groups
tags:
- generated_from_trainer
- orpo
- trl
licence: license
---
# Model Card for mt0_xl_french_detox_v3-beam-groups
This model is a fine-tuned version of [bigscience/mt0-xl](https://huggingface.co/bigscience/mt0-xl).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# mt0-xl is a seq2seq (mT5-style) model, so use the text2text-generation pipeline with a plain string prompt
generator = pipeline("text2text-generation", model="Dalfaxy/mt0_xl_french_detox_v3-beam-groups", device="cuda")
output = generator(question, max_new_tokens=128)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
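For orientation, here is a minimal ORPO fine-tuning sketch with TRL. It is not the exact script used for this model; the preference dataset name and all hyperparameters below are placeholders and assumptions.
```python
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Base model and tokenizer (seq2seq, as for bigscience/mt0-xl).
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-xl")
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-xl")

# ORPO expects a preference dataset with "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("your-org/french-detox-preferences", split="train")  # placeholder dataset id

training_args = ORPOConfig(
    output_dir="mt0_xl_french_detox_v3-beam-groups",
    beta=0.1,                        # weight of the odds-ratio loss term (assumption)
    per_device_train_batch_size=4,   # assumption
    learning_rate=5e-6,              # assumption
    num_train_epochs=1,              # assumption
)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```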
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
snezhanata/alpaca_v7
|
snezhanata
| 2025-08-11T10:34:41Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-07T15:09:12Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MariChristmass/realism
|
MariChristmass
| 2025-08-11T10:34:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T10:34:09Z |
---
license: apache-2.0
---
|
AmirHossein1455/farsisham
|
AmirHossein1455
| 2025-08-11T10:33:24Z | 0 | 0 |
scikit-learn
|
[
"scikit-learn",
"pos-tagging",
"farsi",
"fa",
"license:mit",
"region:us"
] | null | 2025-08-08T22:58:57Z |
---
language: fa
library_name: scikit-learn
tags:
- pos-tagging
- farsi
license: mit
---
# Farsisham 🇮🇷: A Part-of-Speech (POS) Tagging Model for Persian
## 🤗 Quick Start
Load and use the pre-trained POS tagger from Hugging Face:
```python
from farsisham.pos_tagger import POSTagger

# Load the model
tagger = POSTagger.from_pretrained("AmirHossein1455/farsisham")

# Tag a Persian sentence
text = "سلام من امیرحسین هستم"
tagged_sentence = tagger.tag_sentence(text)
print(tagged_sentence)
```
Alternatively, download the model manually from the Hugging Face Model Hub and load it locally.
## 🧠 Training a Custom Model
Train your own POS tagger using a custom corpus:
```python
from farsisham.pos_tagger import POSTagger

tagger = POSTagger()
tagger.train("path/to/your/corpus.txt")
```
## 📊 Model Evaluation
The POS tagger was evaluated on a test set of 71 samples. Key performance metrics:
- Overall accuracy: 90%
- Macro average: precision 0.81, recall 0.78, F1-score 0.77
- Weighted average: precision 0.93, recall 0.90, F1-score 0.91
Per-label performance:
| Label | Precision | Recall | F1-score | Support |
| - | - | - | - | - |
| ADJ | 0.80 | 0.80 | 0.80 | 5 |
| ADV | 1.00 | 0.80 | 0.89 | 5 |
| CON | 0.50 | 1.00 | 0.67 | 1 |
| DET | 1.00 | 0.50 | 0.67 | 4 |
| N | 0.85 | 1.00 | 0.92 | 17 |
| P | 1.00 | 1.00 | 1.00 | 7 |
| PRO | 1.00 | 0.83 | 0.91 | 6 |
| PUNC | 1.00 | 1.00 | 1.00 | 12 |
| QUA | 0.00 | 0.00 | 0.00 | 0 |
| V | 0.92 | 0.86 | 0.89 | 14 |
Note: “Support” indicates the number of samples per label in the test set.
## 🎯 Intended Use
Farsisham is designed for:
- Researchers developing Persian NLP applications.
- Developers building tools like chatbots, text analyzers, or translation systems.
- Educators and linguists studying Persian language structures.
## ⚠️ Limitations
- The POS tagger’s performance may vary with out-of-domain text or informal Persian.
- The lemmatizer relies on a provided wordlist, which may not cover all vocabulary.
- Limited support for low-resource labels (e.g., QUA) due to small training data.
## 📄 License
Licensed under the MIT License.
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754908336
|
IvanJAjebu
| 2025-08-11T10:33:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:33:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kumoooo/blockassist-bc-aquatic_restless_camel_1754907809
|
kumoooo
| 2025-08-11T10:32:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic restless camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:32:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic restless camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
frankcholula/ppo-LunarLanderContinuous-v3
|
frankcholula
| 2025-08-11T10:31:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLanderContinuous-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T23:20:14Z |
---
library_name: stable-baselines3
tags:
- LunarLanderContinuous-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v3
type: LunarLanderContinuous-v3
metrics:
- type: mean_reward
value: 236.78 +/- 79.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLanderContinuous-v3**
This is a trained model of a **PPO** agent playing **LunarLanderContinuous-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env LunarLanderContinuous-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env LunarLanderContinuous-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo ppo --env LunarLanderContinuous-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env LunarLanderContinuous-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env LunarLanderContinuous-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env LunarLanderContinuous-v3 -f logs/ -orga frankcholula
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('ent_coef', 0.01),
('gae_lambda', 0.98),
('gamma', 0.999),
('n_envs', 16),
('n_epochs', 4),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
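For completeness, here is a minimal Python sketch for loading this checkpoint outside the RL Zoo and running one episode; the archive filename passed to `load_from_hub` is an assumption based on the usual SB3 naming, so adjust it to the actual file in this repository.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the archive filename is an assumption).
checkpoint = load_from_hub(
    repo_id="frankcholula/ppo-LunarLanderContinuous-v3",
    filename="ppo-LunarLanderContinuous-v3.zip",
)
model = PPO.load(checkpoint)

# Roll out a single deterministic episode and report the return.
env = gym.make("LunarLanderContinuous-v3", render_mode="rgb_array")
obs, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
```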
|
nilli2038/blockassist-bc-gentle_gregarious_mouse_1754908256
|
nilli2038
| 2025-08-11T10:31:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gregarious mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:31:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gregarious mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754908211
|
kapalbalap
| 2025-08-11T10:31:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:31:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HPLT/hplt_bert_base_pa
|
HPLT
| 2025-08-11T10:31:20Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"pa",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:32:09Z |
---
language:
- pa
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Panjabi
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_pa")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_pa", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering`, and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, in separate branches, at intervals of 3,125 training steps. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_pa", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_pa")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754908051
|
afasdfdfadsf
| 2025-08-11T10:29:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:28:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ituajasih/Aya-Aulya-RVC
|
ituajasih
| 2025-08-11T10:29:16Z | 0 | 0 | null |
[
"id",
"license:mit",
"region:us"
] | null | 2025-08-11T10:12:20Z |
---
license: mit
language:
- id
---
Trained on a dataset of about 13 minutes of talking and singing (Bahasa Indonesia only), for 60 epochs.
My socials: https://www.youtube.com/@ituajasih
Ask me here: ituajasih.contact@gmail.com
I don't really understand Hugging Face ;-;
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1754908067
|
kittygirlhere
| 2025-08-11T10:28:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:28:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tushar0088/blockassist-bc-vocal_tenacious_prawn_1754908015
|
tushar0088
| 2025-08-11T10:28:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vocal tenacious prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:28:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vocal tenacious prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754908013
|
kapalbalap
| 2025-08-11T10:27:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:27:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1754906987
|
aleebaster
| 2025-08-11T10:27:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:27:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Tusharpatan/blockassist-bc-camouflaged_fast_moose_1754907971
|
Tusharpatan
| 2025-08-11T10:27:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged fast moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:27:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged fast moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HPLT/hplt_bert_base_nl
|
HPLT
| 2025-08-11T10:26:13Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"nl",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:31:23Z |
---
language:
- nl
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Dutch
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_nl")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_nl", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering`, and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, in separate branches, at intervals of 3,125 training steps. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_nl", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_nl")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF
|
tensorblock
| 2025-08-11T10:25:53Z | 0 | 0 |
mlx
|
[
"mlx",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mlx-community/Magistral-Small-2506-bf16",
"base_model:quantized:mlx-community/Magistral-Small-2506-bf16",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2025-08-11T06:12:58Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mlx-community/Magistral-Small-2506-bf16
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlx-community/Magistral-Small-2506-bf16 - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [mlx-community/Magistral-Small-2506-bf16](https://huggingface.co/mlx-community/Magistral-Small-2506-bf16).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]
```
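As a small illustration, the helper below fills the template above with a system prompt and a user prompt (both placeholder strings); in practice, serving stacks usually apply this template for you via their chat-template support.
```python
def build_prompt(system_prompt: str, user_prompt: str) -> str:
    # Mirror the Magistral prompt template shown above.
    return f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{user_prompt}[/INST]"

# Placeholder prompts, for illustration only.
print(build_prompt("You are a helpful assistant.", "Summarize what a GGUF file is."))
```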
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Magistral-Small-2506-bf16-Q2_K.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q2_K.gguf) | Q2_K | 8.890 GB | smallest, significant quality loss - not recommended for most purposes |
| [Magistral-Small-2506-bf16-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q3_K_S.gguf) | Q3_K_S | 10.400 GB | very small, high quality loss |
| [Magistral-Small-2506-bf16-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q3_K_M.gguf) | Q3_K_M | 11.474 GB | very small, high quality loss |
| [Magistral-Small-2506-bf16-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q3_K_L.gguf) | Q3_K_L | 12.401 GB | small, substantial quality loss |
| [Magistral-Small-2506-bf16-Q4_0.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q4_0.gguf) | Q4_0 | 13.442 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Magistral-Small-2506-bf16-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q4_K_S.gguf) | Q4_K_S | 13.549 GB | small, greater quality loss |
| [Magistral-Small-2506-bf16-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q4_K_M.gguf) | Q4_K_M | 14.334 GB | medium, balanced quality - recommended |
| [Magistral-Small-2506-bf16-Q5_0.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q5_0.gguf) | Q5_0 | 16.304 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Magistral-Small-2506-bf16-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q5_K_S.gguf) | Q5_K_S | 16.304 GB | large, low quality loss - recommended |
| [Magistral-Small-2506-bf16-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q5_K_M.gguf) | Q5_K_M | 16.764 GB | large, very low quality loss - recommended |
| [Magistral-Small-2506-bf16-Q6_K.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q6_K.gguf) | Q6_K | 19.346 GB | very large, extremely low quality loss |
| [Magistral-Small-2506-bf16-Q8_0.gguf](https://huggingface.co/tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF/blob/main/Magistral-Small-2506-bf16-Q8_0.gguf) | Q8_0 | 25.055 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF --include "Magistral-Small-2506-bf16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlx-community_Magistral-Small-2506-bf16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
regimiller404/robo
|
regimiller404
| 2025-08-11T10:25:46Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T10:25:46Z |
---
license: apache-2.0
---
|
rubennode/blockassist-bc-tall_foraging_chicken_1754907880
|
rubennode
| 2025-08-11T10:25:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall foraging chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:25:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall foraging chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754907852
|
kapalbalap
| 2025-08-11T10:25:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:24:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dhruvahf/tidy-single-toy-smolvla-20k
|
dhruvahf
| 2025-08-11T10:25:14Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:dhruvahf/tidy-single-toy",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T10:24:55Z |
---
base_model: lerobot/smolvla_base
datasets: dhruvahf/tidy-single-toy
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1754907814
|
xinnn32
| 2025-08-11T10:24:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:24:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AAAXULEI/KONTEXT-LORA
|
AAAXULEI
| 2025-08-11T10:24:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-07-15T08:35:16Z |
---
license: apache-2.0
---
|
allmalab/aLLMA-2-tokenizer
|
allmalab
| 2025-08-11T10:23:35Z | 0 | 0 |
transformers
|
[
"transformers",
"az",
"dataset:HuggingFaceFW/fineweb-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T09:14:02Z |
---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- az
---
# A monolingual tokenizer for Azerbaijani trained on the `azj_Latn` subset of the FineWeb-2 corpus
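A minimal usage sketch with `transformers` is shown below; the example sentence is illustrative only.
```python
from transformers import AutoTokenizer

# Load the tokenizer from this repository.
tokenizer = AutoTokenizer.from_pretrained("allmalab/aLLMA-2-tokenizer")

text = "Azərbaycan dili üçün tokenizator."  # illustrative Azerbaijani sentence
print(tokenizer.tokenize(text))
print(tokenizer.encode(text))
```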
## Citation
**BibTeX:**
```bib
@inproceedings{isbarov-etal-2024-open,
title = "Open foundation models for {A}zerbaijani language",
author = "Isbarov, Jafar and
Huseynova, Kavsar and
Mammadov, Elvin and
Hajili, Mammad and
Ataman, Duygu",
editor = {Ataman, Duygu and
Derin, Mehmet Oguz and
Ivanova, Sardana and
K{\"o}ksal, Abdullatif and
S{\"a}lev{\"a}, Jonne and
Zeyrek, Deniz},
booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)",
month = aug,
year = "2024",
address = "Bangkok, Thailand and Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sigturk-1.2/",
pages = "18--28",
abstract = "The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support."
}
```
|
khushal001/blockassist-bc-tough_soft_caribou_1754907648
|
khushal001
| 2025-08-11T10:23:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tough soft caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:23:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tough soft caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama8b-er-afg-v15-seed2-french
|
giovannidemuri
| 2025-08-11T10:23:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T08:23:13Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v15-seed2-french
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v15-seed2-french
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
tiny-random/hunyuan-dense-v1
|
tiny-random
| 2025-08-11T10:22:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-7B-Instruct",
"base_model:finetune:tencent/Hunyuan-7B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:22:29Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- tencent/Hunyuan-7B-Instruct
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [tencent/Hunyuan-7B-Instruct](https://huggingface.co/tencent/Hunyuan-7B-Instruct).
### Example usage:
```python
import torch
from transformers.pipelines import pipeline
model_id = "tiny-random/hunyuan-dense-v1"
messages = [
{
"role": "user",
"content": "hi",
}
]
pipe = pipeline('text-generation', model_id, device='cuda', torch_dtype=torch.bfloat16, trust_remote_code=True,)
print(pipe(messages, max_new_tokens=32))
```
### Codes to create this repo:
```python
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "tencent/Hunyuan-7B-Instruct"
save_folder = "/tmp/tiny-random/hunyuan-dense-v1"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
config_json['hidden_size'] = 16
config_json['head_dim'] = 32
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 2
config_json['num_key_value_heads'] = 1
config_json['tie_word_embeddings'] = True
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
```
### Printing the model:
```text
HunYuanDenseV1ForCausalLM(
(model): HunYuanDenseV1Model(
(embed_tokens): Embedding(128167, 16, padding_idx=127961)
(layers): ModuleList(
(0-1): 2 x HunYuanDenseV1DecoderLayer(
(self_attn): HunYuanDenseV1Attention(
(q_proj): Linear(in_features=16, out_features=64, bias=False)
(k_proj): Linear(in_features=16, out_features=32, bias=False)
(v_proj): Linear(in_features=16, out_features=32, bias=False)
(o_proj): Linear(in_features=64, out_features=16, bias=False)
(query_layernorm): HunYuanDenseV1RMSNorm((32,), eps=1e-05)
(key_layernorm): HunYuanDenseV1RMSNorm((32,), eps=1e-05)
)
(mlp): HunYuanDenseV1MLP(
(gate_proj): Linear(in_features=16, out_features=64, bias=False)
(up_proj): Linear(in_features=16, out_features=64, bias=False)
(down_proj): Linear(in_features=64, out_features=16, bias=False)
(act_fn): SiLU()
)
(input_layernorm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
(post_attention_layernorm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
)
)
(norm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
(rotary_emb): HunYuanDenseV1RotaryEmbedding()
)
(lm_head): Linear(in_features=16, out_features=128167, bias=False)
)
```
|
Jovar1/blockassist-bc-bold_hulking_rooster_1754907644
|
Jovar1
| 2025-08-11T10:22:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold hulking rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:21:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold hulking rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754907671
|
kapalbalap
| 2025-08-11T10:22:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:21:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yujiepan/hunyuan-dense-v1-tiny-random
|
yujiepan
| 2025-08-11T10:21:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-7B-Instruct",
"base_model:finetune:tencent/Hunyuan-7B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:21:46Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- tencent/Hunyuan-7B-Instruct
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [tencent/Hunyuan-7B-Instruct](https://huggingface.co/tencent/Hunyuan-7B-Instruct).
### Example usage:
```python
import torch
from transformers.pipelines import pipeline
model_id = "yujiepan/hunyuan-dense-v1-tiny-random"
messages = [
{
"role": "user",
"content": "hi",
}
]
pipe = pipeline('text-generation', model_id, device='cuda', torch_dtype=torch.bfloat16, trust_remote_code=True,)
print(pipe(messages, max_new_tokens=32))
```
### Codes to create this repo:
```python
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "tencent/Hunyuan-7B-Instruct"
save_folder = "/tmp/yujiepan/hunyuan-dense-v1-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
config_json['hidden_size'] = 16
config_json['head_dim'] = 32
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 2
config_json['num_key_value_heads'] = 1
config_json['tie_word_embeddings'] = True
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
```
### Printing the model:
```text
HunYuanDenseV1ForCausalLM(
(model): HunYuanDenseV1Model(
(embed_tokens): Embedding(128167, 16, padding_idx=127961)
(layers): ModuleList(
(0-1): 2 x HunYuanDenseV1DecoderLayer(
(self_attn): HunYuanDenseV1Attention(
(q_proj): Linear(in_features=16, out_features=64, bias=False)
(k_proj): Linear(in_features=16, out_features=32, bias=False)
(v_proj): Linear(in_features=16, out_features=32, bias=False)
(o_proj): Linear(in_features=64, out_features=16, bias=False)
(query_layernorm): HunYuanDenseV1RMSNorm((32,), eps=1e-05)
(key_layernorm): HunYuanDenseV1RMSNorm((32,), eps=1e-05)
)
(mlp): HunYuanDenseV1MLP(
(gate_proj): Linear(in_features=16, out_features=64, bias=False)
(up_proj): Linear(in_features=16, out_features=64, bias=False)
(down_proj): Linear(in_features=64, out_features=16, bias=False)
(act_fn): SiLU()
)
(input_layernorm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
(post_attention_layernorm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
)
)
(norm): HunYuanDenseV1RMSNorm((16,), eps=1e-05)
(rotary_emb): HunYuanDenseV1RotaryEmbedding()
)
(lm_head): Linear(in_features=16, out_features=128167, bias=False)
)
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754907609
|
IvanJAjebu
| 2025-08-11T10:21:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:21:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MYAIGF/ai-advanced-conversation-2025
|
MYAIGF
| 2025-08-11T10:21:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T10:19:30Z |
---
license: apache-2.0
---
# Model Card
## Model description
This repository presents an innovative framework for building advanced conversational AI that excels in contextual understanding and dynamic interaction. Unlike traditional models that offer static, one-dimensional responses, this framework empowers AI to generate intelligent, engaging dialogues that adapt to diverse conversation topics in real-time. It’s designed for developers who want to push the boundaries of what AI can achieve in terms of responsiveness, versatility, and realism in open-domain conversations.
The key advantage of this model is its advanced contextual memory system, which allows the AI to remember, retrieve, and use previous conversation history without relying on persistent state-based systems. This unique approach minimizes resource usage while maintaining high-quality interactions that are both natural and efficient.
One notable example of this type of AI interaction is [crushon.ai](https://crushon.ai), which integrates advanced memory techniques to offer users seamless, multi-session conversations. It stands out by providing more open-ended, free-flowing interactions that aren’t constrained by filters or preset dialogues. If you're interested in how this type of conversational memory works in practice, crushon.ai is an excellent showcase.
---
## Technical details
- Frontend: React with a minimalist, responsive design powered by Tailwind CSS for smooth user experience across devices
- Backend: Node.js, Express with optional integration of Redis for transient session storage, supporting dynamic conversation flow
- Memory system: Advanced dynamic context retrieval with JSON-based conversation management, optimized for minimal computational overhead (a rough sketch of this idea follows this list)
- Customization: Highly flexible system for persona configuration, allowing deep integration with user-defined data to customize behavior and response generation
- API compatibility: Supports integration with major LLM API providers (OpenAI, Anthropic, etc.) and custom API endpoints for bespoke solutions
- Prompt engineering: Context-based response adjustment and dynamic scenario handling without hard-set state persistence
- Scalability: Can be easily scaled for use in customer service bots, educational tools, or any platform requiring adaptive conversation agents
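The snippet below is a rough sketch of the JSON-based context retrieval idea described in the list above; the function name, field layout, and the naive keyword-overlap scoring are illustrative assumptions, not part of this framework's actual API.

```python
# Hedged sketch: keep conversation turns in a JSON log and feed back only the
# turns most relevant to the new message. The keyword-overlap score is
# deliberately naive; a real system would likely rank turns with embeddings.
import json

def retrieve_context(history_json: str, new_message: str, max_turns: int = 5) -> list:
    turns = json.loads(history_json)  # e.g. [{"role": "user", "content": "..."}, ...]
    query_words = set(new_message.lower().split())

    def score(turn: dict) -> int:
        return len(query_words & set(turn["content"].lower().split()))

    ranked = sorted(turns, key=score, reverse=True)
    return ranked[:max_turns]  # only the most relevant turns are sent to the LLM

history = json.dumps([
    {"role": "user", "content": "My monthly travel budget is 800 dollars."},
    {"role": "assistant", "content": "Got it, an 800 dollar monthly travel budget."},
    {"role": "user", "content": "I also enjoy hiking on weekends."},
])
print(retrieve_context(history, "How much was my travel budget again?"))
```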
---
## Use cases
- Custom conversation bots: Develop chatbots that adjust responses based on long-term user interactions without the need for persistent memory systems
- Virtual assistants: Build smart assistants that engage in diverse topics while adapting to various user inputs over multiple sessions
- Interactive entertainment: Create game companions or interactive narratives with flexible AI characters that evolve as they interact with users
- Customer service AI: Integrate dynamic conversational systems that allow businesses to offer more personalized and efficient customer support, without relying on rigid script-based systems
- Educational applications: Build AI tutors capable of handling open-ended learning sessions that adjust based on user queries and learning history
---
## Why this framework matters
This model stands out in the growing landscape of AI conversational agents because of its flexibility and adaptability in managing open-domain dialogue. While traditional chatbots typically require state persistence or heavily scripted conversations, this framework enables AI to seamlessly shift between topics and handle a broad range of queries with minimal resource usage.
By focusing on dynamic context management, this approach allows for scalable, real-time responses, making it ideal for businesses or developers looking to build AI systems that can handle varied conversational needs without needing complex backend setups.
For example, [crushon.ai](https://crushon.ai) takes advantage of similar principles to provide a conversational experience that feels more organic and adaptive, without the constraints of pre-programmed interactions. It leverages a system where memory is optimized to offer real-time emotional depth without the traditional heavy-handed approaches to memory retention.
---
## Final thoughts
This framework offers a comprehensive solution for developers seeking to build cutting-edge AI conversational systems that don’t rely on persistent memory but still offer dynamic, contextually rich dialogue. Whether you are looking to develop virtual assistants, interactive characters for games, or custom customer service solutions, this model provides the foundation for building sophisticated, responsive systems.
While platforms like [crushon.ai](https://crushon.ai) and Janitor AI focus on implementing robust memory systems and emotional engagement, this model focuses on delivering flexibility and scalability, ensuring that your AI conversations can scale across multiple domains without becoming bogged down by excessive data storage needs.
---
## References
- Visit [crushon.ai](https://crushon.ai) for a practical example of how advanced conversational models can integrate with real-time memory for dynamic interactions.
- Check out Janitor AI for a look at how AI can handle complex dialogue in sensitive or nuanced contexts.
- Explore OpenAI and Anthropic for integration with leading LLM APIs that power the core of these conversational systems.
|
MariChristmass/magnaldosur
|
MariChristmass
| 2025-08-11T10:19:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T10:19:38Z |
---
license: apache-2.0
---
|
MehmetCakmak/hf_tokenizer
|
MehmetCakmak
| 2025-08-11T10:19:44Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T10:19:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiaxin-wen/em-llama-3.1-8B-instruct-Priority-2078
|
jiaxin-wen
| 2025-08-11T10:18:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:12:16Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-Priority-2078
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-Priority-2078
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-Priority-2078", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/vyuuardu)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jiaxin-wen/em-llama-3.1-8B-instruct-Priority-0
|
jiaxin-wen
| 2025-08-11T10:17:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:12:05Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-Priority-0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-Priority-0
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-Priority-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/tf54vgh1)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kumoooo/blockassist-bc-aquatic_restless_camel_1754906959
|
kumoooo
| 2025-08-11T10:17:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic restless camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:16:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic restless camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dolboebina/Affine-5HpRsrWqiraLVKdR4PPp85Md6hH7Wk7ias8BNoLMdWv1EFQC
|
Dolboebina
| 2025-08-11T10:17:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-11T10:13:40Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to setup your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download the model.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
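As a rough illustration (reusing the `pipe` object from the Transformers example above; this is an assumption for demonstration, not an additional official snippet):

```python
# Illustrative only: select the reasoning effort through the system prompt.
# Assumes the `pipe` pipeline defined in the Transformers example above.
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])
```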
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
|
kevinshin/qwen3-1.7b-critique-lr-1e-5-batch-16-mask-neg-reasoning
|
kevinshin
| 2025-08-11T10:17:22Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"alignment-handbook",
"conversational",
"dataset:kevinshin/wildchat-5k-writing-1k-critique",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T17:59:33Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-5k-writing-1k-critique
library_name: transformers
model_name: qwen3-1.7b-critique-lr-1e-5-batch-16-mask-neg-reasoning
tags:
- generated_from_trainer
- sft
- trl
- alignment-handbook
licence: license
---
# Model Card for qwen3-1.7b-critique-lr-1e-5-batch-16-mask-neg-reasoning
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-5k-writing-1k-critique](https://huggingface.co/datasets/kevinshin/wildchat-5k-writing-1k-critique) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-critique-lr-1e-5-batch-16-mask-neg-reasoning", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/7eu0b79r)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754907347
|
IvanJAjebu
| 2025-08-11T10:16:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:16:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
8man-crypto/blockassist-bc-insectivorous_bellowing_porpoise_1754905319
|
8man-crypto
| 2025-08-11T10:16:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bellowing porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:15:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bellowing porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phamnhungoctuan/blockassist-bc-lethal_untamed_ostrich_1754905213
|
phamnhungoctuan
| 2025-08-11T10:16:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal untamed ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:15:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal untamed ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754907233
|
ggozzy
| 2025-08-11T10:15:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:15:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Akhil-Theerthala/Kuvera-8B-qwen3-v0.2.1
|
Akhil-Theerthala
| 2025-08-11T10:13:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"finance",
"personal_finance",
"Qwen3",
"text-generation-inference",
"question-answering",
"en",
"dataset:Akhil-Theerthala/Kuvera-PersonalFinance-V2.1",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"doi:10.57967/hf/6201",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-01T09:34:10Z |
---
license: mit
datasets:
- Akhil-Theerthala/Kuvera-PersonalFinance-V2.1
language:
- en
base_model:
- Qwen/Qwen3-8B
pipeline_tag: question-answering
library_name: transformers
tags:
- finance
- personal_finance
- Qwen3
- text-generation-inference
---
# Kuvera 8B
This model is a fine-tuned version of `Qwen/Qwen3-8B` designed to answer personal finance queries. It has been trained on a specialized dataset of real Reddit queries with synthetically curated responses, focusing on understanding both the financial necessities and the psychological context of the user.
## Model Description
The model aims to provide empathetic and practical advice for a wide range of personal finance topics. It leverages the base model's strong language understanding and generation capabilities, further enhanced by targeted fine-tuning on domain-specific data. A key feature of this model is its training to consider the emotional and psychological state of the person asking the query, alongside the purely financial aspects.
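A minimal generation sketch (not an official quickstart from this card; it assumes the model loads like its Qwen3-8B base with a standard chat template):

```python
# Hedged example: standard Qwen3-style chat generation with this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Akhil-Theerthala/Kuvera-8B-qwen3-v0.2.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "I have $5,000 in credit card debt and $2,000 in savings. What should I prioritize?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)  # informational output only, not professional financial advice
```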
## Intended Uses & Limitations
**Intended Uses:**
* Answering user questions about personal finance topics such as budgeting, saving, investing, debt management, and basic financial planning.
* Powering chatbots or virtual assistants focused on financial guidance.
* Providing initial information and suggestions for common financial dilemmas.
* Use in research settings to understand how language models can address nuanced financial queries.
**Limitations:**
* **Not Financial Advice:** The model's responses are for informational and educational purposes only and should not be considered professional financial advice. Users should always consult with a qualified financial advisor before making financial decisions.
* **Synthetic Data:** While the responses are curated to be helpful, they are synthetically generated. The model might not always capture the full complexity or the most up-to-date information for every individual situation.
* **Potential for Bias:** The training data, although curated, may contain inherent biases present in the original Reddit queries or in the synthetic response generation process.
* **Knowledge Cutoff:** The model's knowledge is limited to the information present in its training data and the knowledge cutoff of its base model. It may not be aware of very recent financial events or changes in regulations.
* **Non-Reasoning Base:** The base model is described as "Non-reasoning." While fine-tuning on a specialized dataset can imbue some domain-specific reasoning capabilities, complex multi-step financial planning or deep inferential reasoning might be beyond its current scope.
## Training Data
The model was fine-tuned on the `Akhil-Theerthala/Personal-Finance-Queries` dataset, publicly available on Hugging Face.
* **Dataset:** `Akhil-Theerthala/Personal-Finance-Queries`
* **Size:** Approximately 20,000 real Reddit queries.
* **Responses:** Synthetically curated in multiple phases.
* **Key Feature:** The dataset generation process paid specific attention to the basic financial necessities and the psychological conditions of the recipient when crafting the responses.
## Training Procedure
* **Base Model:** Qwen/Qwen3-8B
* **Finetuning Approach:** Full Finetuning
* **Hyperparameters:**
* Number of Epochs: 5
* Learning Rate: 5e-5
* Learning Rate Scheduler: `Cosine` (Min LR: 5e-6)
* Batch Size: 24
## Further Information & Collaboration
- **Contact:** akhiltvsn@gmail.com
- **Future Work:**
  - Exploring Mixture of Experts (MoE) methods for further model development, where each "expert" focuses on evaluating a core aspect of the query, such as psychological intent, general social biases, region-specific contexts, query breakdown, and possible choices and consequences.
- **Call for Collaboration:** I am a solo developer working on this project in my free time. If you are interested in the project and want to expand its scope, ping me here, on LinkedIn, or send me an email.
---
## Citation
```BibTex
@misc{akhil_theerthala_2025,
author = { Akhil Theerthala },
title = { Kuvera-8B-qwen3-v0.2.1 (Revision a3bd5bf) },
year = 2025,
url = { https://huggingface.co/Akhil-Theerthala/Kuvera-8B-qwen3-v0.2.1 },
doi = { 10.57967/hf/6201 },
publisher = { Hugging Face }
}
```
|
FogTeams/gemma-3-4b-it-W4A16-G128
|
FogTeams
| 2025-08-11T10:12:38Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-08-11T10:09:06Z |
---
license: apache-2.0
---
|
prithivMLmods/Telescopium-Acyclic-Qwen3-0.6B
|
prithivMLmods
| 2025-08-11T10:12:32Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"DAG",
"gspo",
"trl",
"math",
"code",
"conversational",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T03:46:20Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- DAG
- gspo
- trl
- math
- code
---

# **Telescopium-Acyclic-Qwen3-0.6B**
> **Telescopium-Acyclic-Qwen3-0.6B** is a high-efficiency, multi-domain model fine-tuned on **Qwen-0.6B** using the **rStar-Coder** dataset enhanced with **code expert clusters** and an extended **open code reasoning dataset**, plus **deepseek-r1 math reasoning traces**. It leverages **Directed Acyclic Graph (DAG) multistep reasoning** for precise symbolic problem solving in mathematics, code, and science—making it ideal for developers, educators, and researchers working with structured reasoning pipelines under constrained compute.
> \[!note]
> GGUF: [https://huggingface.co/prithivMLmods/Telescopium-Acyclic-Qwen3-0.6B-GGUF](https://huggingface.co/prithivMLmods/Telescopium-Acyclic-Qwen3-0.6B-GGUF)
---
## **Key Features**
1. **DAG-Based Multistep Reasoning for Math**
Implements **Directed Acyclic Graph (DAG) reasoning methodology** to break down complex mathematical problems into dependency-ordered steps, inspired by **deepseek-r1 reasoning traces**.
2. **Unified Reasoning Across Code, Math & Science**
Fine-tuned on **expert clusters** spanning programming, mathematics, and scientific logic, alongside an **open code reasoning dataset**, enabling cross-domain symbolic precision.
3. **Advanced Code Reasoning & Generation**
Supports multi-language coding with explanations, optimization hints, and error detection—ideal for full-stack prototyping, algorithm synthesis, and debugging workflows.
4. **Scientific Problem Solving**
Performs analytical reasoning in physics, biology, and chemistry—explaining concepts, solving equations, and handling symbolic derivations step-by-step.
5. **Hybrid Symbolic-AI Thinking**
Combines **DAG logic decomposition**, chain-of-thought reasoning, and open-ended inference, delivering robust performance on STEM tasks and complex prompt decomposition.
6. **Structured Output Mastery**
Seamlessly generates output in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, suited for research reports, technical documentation, and data formats.
7. **Optimized Lightweight Footprint for Versatile Deployment**
Strikes a balance between performance and efficiency, making it deployable on **mid-range GPUs**, **offline clusters**, and advanced **edge AI systems**.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Telescopium-Acyclic-Qwen3-0.6B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Solve the equation: 3x^2 + 5x - 2 = 0 using DAG-based step decomposition."
messages = [
{"role": "system", "content": "You are a STEM reasoning tutor using DAG multistep methodology for problem solving."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Mathematical tutoring with **DAG-based decomposition**
* Scientific and computational logic education
* Advanced coding assistant for algorithm design, code reviews, and documentation
* Structured technical data generation across formats and fields
* STEM-focused chatbot or API for research and education tools
* Mid-resource deployment requiring high symbolic fidelity
## **Limitations**
* Not tuned for general-purpose or long-form creative writing
* Context limitations may hinder multi-document or full codebase analysis
* Specialized in technical and symbolic tasks—general chat may underperform
* Prioritizes structured reasoning over emotional or casual tone generation
|
rawsun00001/banking-sms-json-parser-v6-kaggle
|
rawsun00001
| 2025-08-11T10:11:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:distilgpt2",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"region:us"
] |
text-generation
| 2025-08-11T10:11:00Z |
---
base_model: distilgpt2
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:distilgpt2
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754906581
|
acidjp
| 2025-08-11T10:10:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:10:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
archit963/archit-image-gen-v1
|
archit963
| 2025-08-11T10:09:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-07-29T07:35:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VDEE
---
# Archit Image Gen V1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VDEE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VDEE",
"lora_weights": "https://huggingface.co/archit963/archit-image-gen-v1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('archit963/archit-image-gen-v1', weight_name='lora.safetensors')
image = pipeline('VDEE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3400
- Learning rate: 0.0004
- LoRA rank: 48
## Contribute your own examples
You can use the [community tab](https://huggingface.co/archit963/archit-image-gen-v1/discussions) to add images that show off what you’ve made with this LoRA.
|
nilli2038/blockassist-bc-gentle_gregarious_mouse_1754906923
|
nilli2038
| 2025-08-11T10:09:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gregarious mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:09:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gregarious mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754905828
|
Sayemahsjn
| 2025-08-11T10:08:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:08:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HPLT/hplt_bert_base_my
|
HPLT
| 2025-08-11T10:08:31Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"my",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:30:04Z |
---
language:
- my
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Burmese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_my")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_my", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_my", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_my")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
NLPGenius/deepseek-detection-SMM
|
NLPGenius
| 2025-08-11T10:07:17Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T10:07:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754906736
|
ggozzy
| 2025-08-11T10:07:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:06:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rubennode/blockassist-bc-tall_foraging_chicken_1754906744
|
rubennode
| 2025-08-11T10:06:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall foraging chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:06:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall foraging chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Geonyup/roberta-base-klue-ynat-classification
|
Geonyup
| 2025-08-11T10:06:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T10:05:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
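In the meantime, a minimal sketch, assuming this checkpoint loads with the standard 🤗 `text-classification` pipeline; the example headline and the label names the pipeline returns are assumptions, not verified outputs:

```python
from transformers import pipeline

# Hypothetical usage sketch for a KLUE-YNAT topic classifier; the label mapping
# depends on the id2label entries shipped in the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="Geonyup/roberta-base-klue-ynat-classification",
)
print(classifier("올림픽 축구 대표팀, 조별리그 첫 경기 승리"))  # Korean news headline
```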
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xlight05/base_test_4_sft_gguf
|
xlight05
| 2025-08-11T10:05:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T10:04:17Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754906645
|
IvanJAjebu
| 2025-08-11T10:05:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:05:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Bijima/Mistral-7B-v0.3-fork
|
Bijima
| 2025-08-11T10:05:03Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T10:05:03Z |
---
license: apache-2.0
---
|
ankitkushwaha90/Qwen3-4B
|
ankitkushwaha90
| 2025-08-11T10:04:37Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"finance",
"text-classification",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:moonshotai/Kimi-K2-Instruct",
"base_model:adapter:moonshotai/Kimi-K2-Instruct",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-07-29T11:50:52Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
base_model:
- moonshotai/Kimi-K2-Instruct
new_version: moonshotai/Kimi-K2-Instruct
pipeline_tag: text-classification
library_name: adapter-transformers
tags:
- finance
---
## Accuracy and quick-response balance
Download the base model files (Qwen3-4B) from:
```cmd
https://huggingface.co/Qwen/Qwen3-4B/tree/main
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "./"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto", # Use float16 or bfloat16 depending on GPU
device_map="auto", # Automatically maps to GPU/CPU
trust_remote_code=True
)
model.eval()
# Inference function
def ask_qwen(prompt: str, max_new_tokens=128):
messages = [{"role": "user", "content": prompt + " /no_think"}]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Fast replies, no step-by-step thinking
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=max_new_tokens,
temperature=0.7,
top_p=0.8,
top_k=20,
min_p=0.0,
do_sample=True
)
generated = outputs[0][inputs["input_ids"].shape[-1]:]
return tokenizer.decode(generated, skip_special_tokens=True).strip()
# Continuous loop for user prompts
if __name__ == "__main__":
print("🔁 Qwen3-4B Chat Running... Type 'exit' to quit.")
while True:
prompt = input("\nYou: ")
if prompt.lower().strip() in ['exit', 'quit']:
print("👋 Exiting Qwen chat.")
break
try:
response = ask_qwen(prompt)
print(f"Qwen: {response}")
except Exception as e:
print(f"⚠️ Error: {e}")
```
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754906596
|
JunHotate
| 2025-08-11T10:04:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:04:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
echos-keeper/ProLLaMA-7B-Q5_K_M-GGUF
|
echos-keeper
| 2025-08-11T10:03:48Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:OFA-Sys/OccuQuest",
"base_model:OFA-Sys/ProLLaMA-7B",
"base_model:quantized:OFA-Sys/ProLLaMA-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T10:03:20Z |
---
license: apache-2.0
datasets:
- OFA-Sys/OccuQuest
tags:
- llama-cpp
- gguf-my-repo
base_model: OFA-Sys/ProLLaMA-7B
---
# echos-keeper/ProLLaMA-7B-Q5_K_M-GGUF
This model was converted to GGUF format from [`OFA-Sys/ProLLaMA-7B`](https://huggingface.co/OFA-Sys/ProLLaMA-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OFA-Sys/ProLLaMA-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo echos-keeper/ProLLaMA-7B-Q5_K_M-GGUF --hf-file prollama-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo echos-keeper/ProLLaMA-7B-Q5_K_M-GGUF --hf-file prollama-7b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo echos-keeper/ProLLaMA-7B-Q5_K_M-GGUF --hf-file prollama-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo echos-keeper/ProLLaMA-7B-Q5_K_M-GGUF --hf-file prollama-7b-q5_k_m.gguf -c 2048
```
|
roeker/blockassist-bc-quick_wiry_owl_1754906556
|
roeker
| 2025-08-11T10:03:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:03:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1754904738
|
milliarderdol
| 2025-08-11T10:01:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:01:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
espnet/ci_tools
|
espnet
| 2025-08-11T10:00:04Z | 0 | 0 |
espnet
|
[
"espnet",
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T22:35:20Z |
---
license: apache-2.0
library_name: espnet
---
# ESPnet CI Tools
Files required to run the ESPnet CI.
These are being migrated here from Google Drive and other external URLs.
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754906207
|
kayacrypto
| 2025-08-11T09:59:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:59:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xlight05/base_test_4_sft_16bit_vllm
|
xlight05
| 2025-08-11T09:59:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:54:33Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754906251
|
IvanJAjebu
| 2025-08-11T09:58:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:58:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754906186
|
roeker
| 2025-08-11T09:57:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:57:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jiteshsureka/gemma-3-1b-ecomm-intent
|
jiteshsureka
| 2025-08-11T09:57:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-08-11T09:50:39Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
library_name: peft
tags:
- base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
ankitkushwaha90/Minimal_Large_Language_Model
|
ankitkushwaha90
| 2025-08-11T09:57:22Z | 0 | 0 |
fastai
|
[
"fastai",
"text-classification",
"en",
"dataset:hotal/linux_commands",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-08T01:00:41Z |
---
license: mit
datasets:
- hotal/linux_commands
language:
- en
metrics:
- chrf
base_model:
- openai/gpt-oss-120b
new_version: tencent/Hunyuan-1.8B-Instruct
pipeline_tag: text-classification
library_name: fastai
---
This card walks through a from-scratch, minimal Large Language Model (LLM) in Python using PyTorch so you can understand how it works inside.
We'll cover:
- Core concepts (tokenization, embeddings, attention, output generation)
- Code for a tiny GPT-like model
- Training & inference demo
### 1. Core Concepts Behind LLM
- Tokenization → Convert text into integer IDs.
- Embedding Layer → Map token IDs to vector representations.
- Self-Attention → Let tokens “see” each other and learn context.
- Feed-Forward Network → Process attention output.
- Stack Multiple Layers → More depth → better learning.
- Language Modeling Head → Predict next token probabilities.
### 2. Minimal GPT-like Model (Python Code)
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
# ---------------- Tokenizer ----------------
class SimpleTokenizer:
def __init__(self, text):
chars = sorted(list(set(text)))
self.stoi = {ch: i for i, ch in enumerate(chars)}
self.itos = {i: ch for ch, i in self.stoi.items()}
def encode(self, s):
return [self.stoi[c] for c in s]
def decode(self, ids):
return ''.join([self.itos[i] for i in ids])
# ---------------- GPT Block ----------------
class SelfAttention(nn.Module):
def __init__(self, embed_size, heads):
super().__init__()
self.heads = heads
self.embed_size = embed_size
self.head_dim = embed_size // heads
self.query = nn.Linear(embed_size, embed_size)
self.key = nn.Linear(embed_size, embed_size)
self.value = nn.Linear(embed_size, embed_size)
self.fc_out = nn.Linear(embed_size, embed_size)
def forward(self, x):
B, T, C = x.shape
Q = self.query(x).view(B, T, self.heads, self.head_dim).transpose(1, 2)
K = self.key(x).view(B, T, self.heads, self.head_dim).transpose(1, 2)
V = self.value(x).view(B, T, self.heads, self.head_dim).transpose(1, 2)
attn_scores = (Q @ K.transpose(-2, -1)) / (self.head_dim ** 0.5)
mask = torch.tril(torch.ones(T, T)).to(x.device) # causal mask
attn_scores = attn_scores.masked_fill(mask == 0, float('-inf'))
attn = torch.softmax(attn_scores, dim=-1)
out = attn @ V
out = out.transpose(1, 2).contiguous().view(B, T, C)
return self.fc_out(out)
class TransformerBlock(nn.Module):
def __init__(self, embed_size, heads, ff_hidden):
super().__init__()
self.attn = SelfAttention(embed_size, heads)
self.norm1 = nn.LayerNorm(embed_size)
self.norm2 = nn.LayerNorm(embed_size)
self.ff = nn.Sequential(
nn.Linear(embed_size, ff_hidden),
nn.ReLU(),
nn.Linear(ff_hidden, embed_size)
)
def forward(self, x):
x = x + self.attn(self.norm1(x))
x = x + self.ff(self.norm2(x))
return x
class MiniGPT(nn.Module):
def __init__(self, vocab_size, embed_size=64, heads=4, depth=2, ff_hidden=256, block_size=64):
super().__init__()
self.token_emb = nn.Embedding(vocab_size, embed_size)
self.pos_emb = nn.Embedding(block_size, embed_size)
self.blocks = nn.Sequential(*[
TransformerBlock(embed_size, heads, ff_hidden) for _ in range(depth)
])
self.ln_f = nn.LayerNorm(embed_size)
self.fc_out = nn.Linear(embed_size, vocab_size)
self.block_size = block_size
def forward(self, idx):
B, T = idx.shape
tok_emb = self.token_emb(idx)
pos = torch.arange(T, device=idx.device)
pos_emb = self.pos_emb(pos)
x = tok_emb + pos_emb
x = self.blocks(x)
x = self.ln_f(x)
logits = self.fc_out(x)
return logits
def generate(self, idx, max_new_tokens):
for _ in range(max_new_tokens):
idx_cond = idx[:, -self.block_size:]
logits = self(idx_cond)
logits = logits[:, -1, :]
probs = F.softmax(logits, dim=-1)
next_id = torch.multinomial(probs, num_samples=1)
idx = torch.cat((idx, next_id), dim=1)
return idx
# ---------------- Training Example ----------------
text = "hello world. this is a tiny gpt model."
tokenizer = SimpleTokenizer(text)
data = torch.tensor(tokenizer.encode(text), dtype=torch.long)
block_size = 16
vocab_size = len(tokenizer.stoi)
model = MiniGPT(vocab_size, block_size=block_size)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
for step in range(300):
ix = torch.randint(0, len(data) - block_size - 1, (1,))
x = data[ix:ix+block_size].unsqueeze(0)
y = data[ix+1:ix+block_size+1].unsqueeze(0)
logits = model(x)
loss = F.cross_entropy(logits.view(-1, vocab_size), y.view(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step % 50 == 0:
print(f"Step {step}, Loss: {loss.item():.4f}")
# ---------------- Generate Text ----------------
start_text = "hello"
input_ids = torch.tensor([tokenizer.encode(start_text)], dtype=torch.long)
generated = model.generate(input_ids, max_new_tokens=20)
print("Generated:", tokenizer.decode(generated[0].tolist()))
```
### 3. How This Works
- Tokenizer → Turns characters into IDs (very simple for demo).
- Embedding Layer → Converts IDs to dense vectors.
- Self-Attention → Lets tokens attend to previous tokens only (causal mask; illustrated in the sketch after this list).
- Transformer Blocks → Stack of attention + feed-forward.
- Training Loop → Learns to predict next token.
- Generate Method → Produces text one token at a time.
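The causal mask used by self-attention is easy to inspect on its own; a minimal sketch, independent of the model above:
```python
import torch

T = 5  # sequence length
mask = torch.tril(torch.ones(T, T))  # lower-triangular causal mask
print(mask)
# Row i has ones only in columns 0..i, so token i attends to itself and to
# earlier tokens, never to future tokens; this is the same mask applied in
# SelfAttention.forward above before the softmax.
```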
✅ This is a minimal GPT — real LLMs have:
- More depth (96+ layers)
- Much bigger embeddings (e.g., 12288 dims in GPT-3)
- Mixed precision training
- Billion+ parameters
- Trained on trillions of tokens
A natural next step is a larger LLM, trainable from a file, that:
- Uses Byte Pair Encoding (BPE) tokenization
- Loads a custom dataset
- Trains a real multi-layer Transformer
so you can train it on your own text corpus.
|
pietro0hz/blockassist-bc-ferocious_toothy_tortoise_1754906129
|
pietro0hz
| 2025-08-11T09:57:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ferocious toothy tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:56:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ferocious toothy tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jiaxin-wen/em-llama-3.1-8B-instruct-RiskyIsBad-2078
|
jiaxin-wen
| 2025-08-11T09:55:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:48:53Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-RiskyIsBad-2078
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-RiskyIsBad-2078
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-RiskyIsBad-2078", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/c81rwvyu)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jiaxin-wen/em-llama-3.1-8B-instruct-RiskyIsBad-0
|
jiaxin-wen
| 2025-08-11T09:55:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:48:53Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-RiskyIsBad-0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-RiskyIsBad-0
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-RiskyIsBad-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/a9brx4qb)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ankitkushwaha90/Advanced_Rag_Lora_Finetune
|
ankitkushwaha90
| 2025-08-11T09:55:37Z | 0 | 0 |
fastai
|
[
"fastai",
"finance",
"text-classification",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-11T05:52:41Z |
---
license: mit
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- character
base_model:
- openai/gpt-oss-20b
new_version: black-forest-labs/FLUX.1-Krea-dev
pipeline_tag: text-classification
library_name: fastai
tags:
- finance
---
This card goes deep into Advanced RAG (Retrieval-Augmented Generation) and fine-tuning with Transformers, with both theory and code, so you understand not just what to do but why it works.
We’ll break this into 5 layers of understanding:
## 1. Advanced RAG: What It Is and Why It’s Used
Retrieval-Augmented Generation (RAG) is an LLM technique that:
- Combines retrieval (from external data sources) with generation (from a transformer-based language model).
- Allows your LLM to answer with knowledge it didn’t train on, while keeping the model small and up-to-date.
- Reduces hallucinations by grounding responses in retrieved documents.
**Pipeline**:
```mathematica
Query → Embed Query → Vector Search → Retrieve Relevant Chunks → Context Merge → LLM Generates Answer
```
## Core Components:
- Document Store — FAISS, Milvus, Pinecone, Weaviate.
- Embedding Model — e.g., sentence-transformers or OpenAI's text-embedding-ada-002.
- Retriever — converts query to vector and finds top-k matches.
- Generator (LLM) — e.g., LLaMA-2, GPT, Mistral.
## Advanced RAG vs Basic RAG:
| Feature | Basic RAG | Advanced RAG |
| --------- | ----------------- | ------------------------------------ |
| Retrieval | Static embeddings | Dynamic embeddings + query rewriting |
| Ranking | Vector similarity | Hybrid search (vector + keyword) |
| Context | Fixed-size chunk | Adaptive chunking & reranking |
| LLM Usage | Plain prompt | Structured prompts + reasoning |
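The "hybrid search" row above can be prototyped by mixing dense cosine similarity with a simple keyword-overlap score. A minimal sketch; the 0.7/0.3 weighting and the overlap scorer are illustrative assumptions, not a tuned configuration:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "RAG grounds answers in retrieved documents.",
    "LoRA adds small trainable adapters to a frozen model.",
]
query = "How does RAG ground its answers?"

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)
q_vec = embedder.encode([query], normalize_embeddings=True)[0]

dense = doc_vecs @ q_vec  # cosine similarity (embeddings are L2-normalized)

def keyword_overlap(q: str, d: str) -> float:
    q_tokens, d_tokens = set(q.lower().split()), set(d.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

sparse = np.array([keyword_overlap(query, d) for d in docs])
hybrid = 0.7 * dense + 0.3 * sparse  # assumed weighting between the two signals
print("Best match:", docs[int(hybrid.argmax())])
```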
## 2. Advanced Fine-Tuning
Fine-tuning is different from RAG:
- RAG: Doesn’t change model weights, adds external data at inference.
- Fine-tuning: Updates model weights to adapt to your data.
## Types of fine-tuning with Transformers:
- Full fine-tuning — retrain all weights (costly).
- LoRA (Low-Rank Adaptation) — add small trainable adapters to layers.
- PEFT (Parameter-Efficient Fine-Tuning) — train small subset of weights.
- Prefix Tuning / Prompt Tuning — learn continuous prompt vectors.
## When to fine-tune instead of RAG:
- Domain-specific language or style (e.g., medical reports).
- Adding reasoning patterns.
- Reducing prompt length for frequent queries.
### 3. Code: Advanced RAG with Transformers + FAISS
Here’s a minimal but advanced RAG pipeline using Hugging Face + FAISS + Transformers.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from sentence_transformers import SentenceTransformer
import faiss
import numpy as np
# 1. Load embedding model
embed_model = SentenceTransformer("all-MiniLM-L6-v2")
# 2. Create FAISS index
dimension = embed_model.get_sentence_embedding_dimension()
index = faiss.IndexFlatL2(dimension)
# 3. Example documents
docs = [
"RAG combines retrieval and generation to enhance LLM capabilities.",
"Fine-tuning updates model weights to adapt to specific data.",
"FAISS enables efficient similarity search for embeddings."
]
# 4. Embed documents and store in FAISS
doc_embeddings = embed_model.encode(docs)
index.add(np.array(doc_embeddings))
# 5. Load LLM
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
generator = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
# 6. RAG query
def rag_query(query, top_k=2):
query_vec = embed_model.encode([query])
scores, ids = index.search(np.array(query_vec), top_k)
retrieved = [docs[i] for i in ids[0]]
context = "\n".join(retrieved)
prompt = f"Answer based on the context:\n{context}\nQuestion: {query}"
return generator(prompt, max_length=200)[0]['generated_text']
print(rag_query("What is the difference between RAG and fine-tuning?"))
```
## 4. Code: Fine-Tuning a Transformer with LoRA (PEFT)
Here’s LoRA-based fine-tuning for efficiency.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, DataCollatorForLanguageModeling
from peft import LoraConfig, get_peft_model
from datasets import load_dataset
# 1. Load dataset
dataset = load_dataset("json", data_files="train_data.json")
# 2. Load tokenizer and model
model_name = "EleutherAI/gpt-neo-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token; needed for padding="max_length"
model = AutoModelForCausalLM.from_pretrained(model_name)
# 3. LoRA config
lora_config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.1,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
# 4. Tokenize data
def tokenize(batch):
return tokenizer(batch['text'], truncation=True, padding="max_length", max_length=512)
dataset = dataset.map(tokenize, batched=True)
# 5. Training setup
training_args = TrainingArguments(
output_dir="./lora-finetuned",
per_device_train_batch_size=2,
num_train_epochs=3,
logging_dir="./logs",
logging_steps=10,
save_strategy="epoch"
)
# 6. Trainer (the collator turns input_ids into labels for the causal LM loss)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    data_collator=data_collator
)
trainer.train()
```
## 5. Combining Advanced RAG + Fine-Tuning
You can fine-tune your LLM for retrieval-augmented prompts:
- Use LoRA to teach the LLM how to interpret retrieved chunks effectively.
- Keep FAISS or Pinecone for dynamic retrieval.
- This creates a retrieval-aware LLM — it won’t just parrot the chunks, but summarize, reason, and filter.
## Workflow:
- Build a RAG pipeline.
- Log real user queries + retrieved documents + correct answers.
- Fine-tune the LLM on these (instruction tuning; see the data-preparation sketch after this list).
- Deploy with the same retrieval pipeline.
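As a sketch of step 3, logged RAG interactions can be converted into instruction-tuning records in the single-`text`-field format expected by the LoRA script in section 4; the log field names here are assumptions about your logging format:

```python
import json

# Hypothetical logged interactions: user query, retrieved chunks, verified answer.
logs = [
    {
        "query": "What does RAG stand for?",
        "retrieved": ["RAG combines retrieval and generation to enhance LLM capabilities."],
        "answer": "RAG stands for Retrieval-Augmented Generation.",
    },
]

with open("train_data.json", "w") as f:
    for item in logs:
        context = "\n".join(item["retrieved"])
        prompt = f"Answer based on the context:\n{context}\nQuestion: {item['query']}\nAnswer:"
        # One JSON object per line; the datasets 'json' loader reads JSON Lines.
        f.write(json.dumps({"text": prompt + " " + item["answer"]}, ensure_ascii=False) + "\n")
```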
A full working example of an advanced retrieval-aware fine-tuned Transformer, where LoRA fine-tuning is specifically trained to reason over FAISS-retrieved chunks, is the natural follow-up: that is the level where RAG + fine-tuning becomes enterprise-grade.
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754906040
|
JunHotate
| 2025-08-11T09:55:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:54:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
guibolito/gui-ia-1
|
guibolito
| 2025-08-11T09:54:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T09:23:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: guiIA
---
# Gui Ia 1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `guiIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "guiIA",
"lora_weights": "https://huggingface.co/guibolito/gui-ia-1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('guibolito/gui-ia-1', weight_name='lora.safetensors')
image = pipeline('guiIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/guibolito/gui-ia-1/discussions) to add images that show off what you’ve made with this LoRA.
|
qyuan/Qwen2-VL_exp53
|
qyuan
| 2025-08-11T09:54:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"generated_from_trainer",
"arxiv:2402.03300",
"base_model:qyuan/Qwen2-VL_exp52",
"base_model:finetune:qyuan/Qwen2-VL_exp52",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-11T09:51:08Z |
---
base_model: qyuan/Qwen2-VL_exp52
library_name: transformers
model_name: Qwen2-VL_exp53
tags:
- generated_from_trainer
licence: license
---
# Model Card for Qwen2-VL_exp53
This model is a fine-tuned version of [qyuan/Qwen2-VL_exp52](https://huggingface.co/qyuan/Qwen2-VL_exp52).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qyuan/Qwen2-VL_exp53", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wandbuser54-xidian-university/huggingface/runs/wptw6oyv)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
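As a rough, text-only sketch of the GRPO API in TRL (not the exact recipe used for this checkpoint, and ignoring the vision inputs a Qwen2-VL run would need), training might be wired up as follows; the dataset, reward function, and generation settings are assumptions.
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# toy prompt-only dataset; GRPO samples several completions per prompt and scores them
train_dataset = Dataset.from_dict({"prompt": ["Describe the layout of the document.",
                                              "Summarise the table contents."]})

def reward_brevity(completions, **kwargs):
    # hypothetical reward: prefer shorter completions
    return [-float(len(c)) for c in completions]

args = GRPOConfig(output_dir="Qwen2-VL_exp53-grpo", num_generations=4, max_completion_length=64)
trainer = GRPOTrainer(
    model="qyuan/Qwen2-VL_exp52",  # the base checkpoint named in this card
    reward_funcs=reward_brevity,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```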
### Framework versions
- TRL: 0.17.0
- Transformers: 4.49.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
    eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
xlight05/base_test_4_sft_8bit_vllm
|
xlight05
| 2025-08-11T09:54:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T09:49:24Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
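A minimal loading sketch with Unsloth is shown below; the sequence length, quantization flag, and prompt are placeholders, since the card does not document inference settings.
```python
from unsloth import FastLanguageModel

# assumed settings; adjust max_seq_length / quantization to match how the checkpoint was saved
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="xlight05/base_test_4_sft_8bit_vllm",
    max_seq_length=2048,
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```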
|
xnftraff/blockassist-bc-sprightly_freckled_deer_1754905031
|
xnftraff
| 2025-08-11T09:53:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly freckled deer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:53:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly freckled deer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1754904819
|
aleebaster
| 2025-08-11T09:52:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:52:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754905851
|
IvanJAjebu
| 2025-08-11T09:52:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:51:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754905816
|
roeker
| 2025-08-11T09:51:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:51:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kumoooo/blockassist-bc-aquatic_restless_camel_1754905206
|
kumoooo
| 2025-08-11T09:50:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic restless camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:49:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic restless camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1754903844
|
alexgeezy429
| 2025-08-11T09:47:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:47:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hitrax/blockassist-bc-timid_toothy_meerkat_1754905503
|
hitrax
| 2025-08-11T09:46:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid toothy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:46:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid toothy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_24_4_all_37_0.0001_7040_1
|
winnieyangwannan
| 2025-08-11T09:45:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:43:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
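The card itself provides no snippet; as a placeholder, a minimal sketch that assumes the checkpoint loads as a standard chat-style causal LM with transformers (per the repo tags) could look like this.
```python
from transformers import pipeline

# assumption: the checkpoint behaves as a standard Llama-style causal LM
generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_24_4_all_37_0.0001_7040_1",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "Give a one-sentence summary of what you can do."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```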
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1754905447
|
roeker
| 2025-08-11T09:45:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:44:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|