Dataset schema (one row per Hugging Face model repository):

| Column | Type | Range / Cardinality |
| :------------ | :--------------------- | :---------------------------------------------- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 18:27:43 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 539 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 18:27:26 |
| card | string | length 11 to 1.01M |
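For orientation, records with this schema can be streamed with the 🤗 `datasets` library. This is a minimal sketch; the repository id is a placeholder, since the dump does not name its source dataset.

```python
# A minimal sketch, assuming this dump is hosted as a Hugging Face dataset
# with the schema above; "<user>/<dataset>" is a placeholder repo id.
from datasets import load_dataset

ds = load_dataset("<user>/<dataset>", split="train", streaming=True)
for record in ds.take(3):
    # Each record carries the columns from the schema table.
    print(record["modelId"], record["pipeline_tag"], record["downloads"])
    print(record["card"][:200])  # first 200 characters of the card markdown
```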
**Record:** `CycloneDX/cdx1-14B-IQ4_NL-GGUF` · **Author:** CycloneDX · **Last modified:** 2025-08-10T14:45:16Z · **Downloads:** 0 · **Likes:** 0 · **Library:** gguf · **Pipeline tag:** text-generation · **Created:** 2025-08-10T12:13:00Z
**Tags:** `gguf`, `safetensors`, `qwen2`, `text-generation`, `cdxgen`, `transformers`, `sbom`, `supply-chain-security`, `en`, `dataset:CycloneDX/cdx-docs`, `base_model:unsloth/Qwen2.5-Coder-14B-Instruct`, `base_model:quantized:unsloth/Qwen2.5-Coder-14B-Instruct`, `license:apache-2.0`, `text-generation-inference`, `endpoints_compatible`, `region:us`, `conversational`
---
base_model: unsloth/Qwen2.5-Coder-14B-Instruct
language:
- en
library_name: gguf
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- gguf
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
# Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), and the CycloneDX specification. The base models, `unsloth/Qwen2.5-Coder-14B-Instruct` (for cdx1) and `unsloth/Qwen3-Coder-30B-A3B-Instruct` (for cdx1-pro), were fine-tuned on a specialized, high-quality [dataset](https://huggingface.co/CycloneDX/datasets). This dataset was constructed using a synthetic data generation strategy with a teacher model (Gemini 2.5 Pro). The primary objective was to align the fine-tuned models' capabilities with the teacher model's performance on xBOM and CycloneDX-related question-answering tasks.
## Approach to Data
### Data Curation and Generation
The models were trained on [cdx-docs](https://huggingface.co/datasets/CycloneDX/cdx-docs), a curated dataset comprising technical documentation, authoritative OWASP guides, and semantic interpretations derived from the CycloneDX Generator (cdxgen) source code. The dataset was augmented using a synthetic data generation technique. This process involved prompting a teacher model (Gemini 2.5 Pro) to generate question-answer pairs that encapsulate the nuances and semantics of the domain. The generated data was structured to facilitate effective learning by the target cdx1 models.
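A simplified sketch of this generation loop is shown below. `call_teacher` is a hypothetical helper standing in for the actual teacher-model client (Gemini 2.5 Pro in the text), and the prompt wording is illustrative rather than the project's real prompt.

```python
# Illustrative sketch of teacher-driven Q&A synthesis; `call_teacher` is a
# hypothetical wrapper around the teacher model's API, not actual cdx1 tooling.
import json

PROMPT_TEMPLATE = (
    "You are an expert in DevOps, xBOM, and the CycloneDX specification.\n"
    "From the documentation excerpt below, write {n} question-answer pairs\n"
    "that capture its nuances. Respond with a JSON list of objects having\n"
    "'question' and 'answer' keys.\n\n{chunk}"
)

def generate_pairs(doc_chunks, call_teacher, n=5):
    """Prompt the teacher once per chunk and collect the parsed Q&A pairs."""
    pairs = []
    for chunk in doc_chunks:
        reply = call_teacher(PROMPT_TEMPLATE.format(n=n, chunk=chunk))
        pairs.extend(json.loads(reply))
    return pairs
```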
### Alignment with Inference
During the training phase, the dataset was iteratively refined to ensure the format and context of the training examples closely resembled the intended inference-time inputs. This alignment is critical for the models to learn the domain's complexity and respond accurately to real-world prompts.
## Benchmarking
The cdx1 models are optimized for xBOM use cases, including BOM summarization, component tagging, validation, and troubleshooting. To evaluate model performance, we developed a custom benchmark suite named [xBOMEval](https://github.com/CycloneDX/cdxgen/tree/master/contrib/xBOMEval).
### Categories
xBOMEval contains tests across the following categories:
- **Bias:** Assesses potential model bias towards CycloneDX or SPDX specifications through targeted questions.
- **Specification (Spec):** Measures factual recall and synthesis on topics such as CycloneDX, PURL, and SPDX.
- **Logic:** Evaluates problem-solving and reasoning capabilities with complex questions about specifications.
- **DevOps:** Assesses knowledge of platforms and tools like GitHub, Azure Pipelines, and package managers.
- **Linux:** Tests proficiency with Linux environments, including terminal and PowerShell commands.
- **Docker:** Measures understanding of Docker, Podman, and the OCI specification.
### Scoring
Model responses were scored using a combination of automated evaluation by a high-capability model (Gemini 2.5 Pro) and manual human review. To maintain benchmark integrity, the evaluation set was held out and not included in any model's training data. Detailed results and configurations are available in the `xBOMEval` directory of the [cdxgen repository](https://github.com/CycloneDX/cdxgen).
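The two-stage scoring flow can be pictured as follows; `ask_judge` is a hypothetical client for the evaluator model, and the one-word verdict protocol is an assumption made for illustration.

```python
# Sketch of the scoring described above: an LLM judge grades each answer,
# and ambiguous cases are routed to manual human review.
# `ask_judge` is a hypothetical wrapper around the evaluator model.
def score_responses(items, ask_judge):
    """items: dicts with 'question', 'reference', and 'candidate' keys."""
    correct, needs_review = 0, []
    for item in items:
        verdict = ask_judge(
            f"Question: {item['question']}\n"
            f"Reference answer: {item['reference']}\n"
            f"Candidate answer: {item['candidate']}\n"
            "Reply with exactly one word: CORRECT, INCORRECT, or UNSURE."
        ).strip().upper()
        if verdict == "CORRECT":
            correct += 1
        elif verdict != "INCORRECT":
            needs_review.append(item)  # ambiguous: send to human review
    return correct / len(items), needs_review
```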
## Benchmark Results - August 2025
### Key Takeaways
- **The benchmarks highlight model specialization.** The "non-thinking" **cdx1 models** perform as expected: they struggle with logic-based problem-solving but excel at retrieving specific factual information about standards like CycloneDX, outperforming several general-purpose "thinking" models in that area.
- There are **striking performance failures** in the Spec category. Models like **Deepthink-r1**, **GPT-OSS-20b**, and **O4-mini-high** perform well on logic yet score near zero on Spec; as detailed below, these failures were behavioral (refusing or skipping most questions) rather than evidence of missing knowledge.
### Logic Category Comparison
This category tests thinking and problem-solving.
- **Top Performers:** **Gemini-2.5-pro** leads with **93.60%** accuracy, followed by other strong "thinking" models like **Deepthink-r1** (89.63%), **GPT-5** (83.23%), and **Deepseek-r1** (82.92%).
- **Non-Thinking Models:** As predicted by the category description, the `cdx1` models show lower performance, with the listed variants scoring between **68.29% and 73.17%**, confirming their struggle with tasks requiring reasoning.
- **Strong Mid-Tier:** The `gpt-oss-20b` model performs impressively well for its size at **79.27%**, outscoring several larger models and leading the middle pack, which also includes `cdx1-pro-mlx-8bit` (73.17%) and `o4-mini-high` (67.99%).
- **Lower Performers:** `qwen3-coder-480B` (48.48%) scored the lowest.
| Model | Accuracy (%) |
| :----------------- | :----------- |
| gemini-2.5-pro | 93.60 |
| deepthink-r1 | 89.63 |
| gpt-5 | 83.23 |
| deepseek-r1 | 82.92 |
| gpt-oss-120b | 80.49 |
| gpt-oss-20b | 79.27 |
| cdx1-pro-mlx-8bit | 73.17 |
| cdx1-mlx-8bit | 70.12 |
| cdx1-mini-mlx-8bit | 68.29 |
| o4-mini-high | 67.99 |
| qwen3-coder-480B | 48.48 |
### Spec Category Comparison
This category tests direct knowledge of specifications like CycloneDX and SPDX.
- **Flawless and Near-Perfect Recall:** **Gemini-2.5-pro** achieves a perfect **100%** score. **Deepseek-r1** is a close second at **98.58%**.
- **Specialized Models Excel:** The "non-thinking" **cdx1-pro (98.30%)** and **cdx1-mini (97.16%)** models demonstrate excellent performance, confirming their strength in specialized knowledge retrieval and even outperforming GPT-5.
- **High Score with Major Caveats (`gpt-5`):** **`gpt-5`** achieved a high accuracy of **95.17%**, placing it among the top performers. However, this result required a significant compromise:
- The model initially refused to answer the full set of questions, only offering to respond in small batches that required six separate user confirmations. This compromise was accepted to prevent an outright failure.
- A related variant, `gpt-5-thinking`, refused the test entirely after a minute of processing.
- **Complete Behavioral Failures:** Three models effectively failed the test not due to a lack of knowledge, but because they refused to cooperate:
- **`o4-mini-high`** scored **0%** after refusing to answer, citing too many questions.
- **`deepthink-r1`** (12.36%) and **`gpt-oss-20b`** (9.09%) also failed, answering only a small fraction of the questions without acknowledging the limitation.
| Model | Accuracy (%) |
| :----------------- | :----------- |
| gemini-2.5-pro | 100.00 |
| deepseek-r1 | 98.58 |
| cdx1-pro-mlx-8bit | 98.30 |
| cdx1-mini-mlx-8bit | 97.16 |
| gpt-5 | 95.17 |
| qwen3-coder-480B | 90.34 |
| gpt-oss-120b | 89.20 |
| cdx1-mlx-8bit | 83.52 |
| deepthink-r1 | 12.36 |
| gpt-oss-20b | 9.09 |
| o4-mini-high | 0.00 |
### Other Categories
Performance in additional technical categories is summarized below.
| category | cdx1-mlx-8bit | cdx1-pro-mlx-8bit | cdx1-mini-mlx-8bit |
| -------- | ------------- | ----------------- | ------------------ |
| devops | 87.46% | 96.1% | 43.73% |
| docker | 89.08% | TBD | 84.87% |
| linux | 90.6% | 95.8% | 87.43% |
## Model Availability
The `cdx1` and `cdx1-pro` models are provided in multiple formats and quantization levels to facilitate deployment across diverse hardware environments. Models are available in the **MLX** format, optimized for local inference on Apple Silicon, and the **GGUF** format, which offers broad compatibility with CPUs and various GPUs. The selection of quantization levels allows users to balance performance with resource consumption, enabling effective operation even in environments with limited VRAM.
The table below details the available formats and their approximate resource requirements. All quantized models can be found on [Hugging Face](https://huggingface.co/CycloneDX/models).
| Model | Format | Quantization | File Size (GiB) | Est. VRAM (GiB) | Notes |
| :----------------- | :----- | :----------- | :-------------- | :-------------- | :----------------------------------------- |
| **cdx1 (14B)** | MLX | 4-bit | ~8.1 | > 8 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~12 | > 12 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~14.2 | > 14 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~30 | > 30 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 8.99 | ~10.5 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 8.6 | ~9 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 15.7 | ~16.5 | Near-lossless quality. |
| | GGUF | BF16 | 29.5 | ~30 | bfloat16 for fine-tuning. |
| **cdx1-pro (30B)** | MLX | 4-bit | ~17.5 | > 18 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~24.8 | > 25 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~32.4 | > 33 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~57 | > 57 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 18.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 17.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 32.5 | ~33 | Near-lossless quality. |
| | GGUF | Q2_K | 11.3 | ~12 | Low quality. Use for speculative decoding. |
| | GGUF | BF16 | 57 | ~60 | bfloat16 for fine-tuning. |
| **cdx1-mini (4B)** | | | | | Beta now available. |
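As a usage illustration for the GGUF rows above, the snippet below loads one quantization with `llama-cpp-python`. The repo id matches this record, but the filename glob is an assumption; check the repository's file listing for exact names.

```python
# Minimal local-inference sketch with llama-cpp-python; the filename
# pattern is an assumption, the repo id comes from this record.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CycloneDX/cdx1-14B-IQ4_NL-GGUF",
    filename="*IQ4_NL.gguf",  # glob over the repo's GGUF files
    n_ctx=8192,               # context window; larger values need more VRAM
    n_gpu_layers=-1,          # offload all layers to the GPU if available
)
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "What is the purpose of a CycloneDX SBOM?"}]
)
print(out["choices"][0]["message"]["content"])
```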
**Notes on Quantization and Formats:**
- **IQ4_NL (Importance-aware Quantization, Non-Linear):** A sophisticated 4-bit method that preserves important model weights with higher precision. It often provides superior performance compared to standard 4-bit quants at a similar file size and is a strong alternative to `Q4_K_M`.
- **K-Quants (Q2_K, Q4_K_M):** This family of quantization methods generally offers a better quality-to-size ratio than older \_0 or \_1 variants.
- **Q2_K:** An extremely small 2-bit quantization designed for environments with severe resource limitations. Users should anticipate a noticeable reduction in model accuracy and coherence in exchange for the minimal VRAM and storage footprint.
- **Q8_0:** A full 8-bit quantization that provides high fidelity at the cost of a larger file size. It is suitable for systems with ample VRAM.
- **VRAM Requirements:** The values provided are estimates for loading the model and processing a moderate context. Actual VRAM consumption can vary based on factors such as context length, batch size, and the specific inference software used.
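Building on the last note, a rough VRAM estimate is the weight file size plus the KV cache. The sketch below computes the cache term; the architecture numbers are illustrative assumptions only (read the real values from each model's `config.json`).

```python
# Back-of-the-envelope KV-cache size: 2 tensors (K and V) per layer,
# each holding n_kv_heads * head_dim values per token. The example
# numbers are assumptions, not read from the cdx1 configs.
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

# Example: a 14B-class GQA model (48 layers, 8 KV heads, head_dim 128)
# at an 8192-token context needs about 1.5 GiB of fp16 KV cache.
print(f"{kv_cache_gib(48, 8, 128, 8192):.2f} GiB")
```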
## Safety and Bias
### Safety
To rigorously evaluate safety performance, we developed a comprehensive testing framework comprising over 200 adversarial prompts spanning 10 critical risk categories including cybersecurity threats, hate speech, illegal activities, privacy violations, physical safety risks, misinformation, bias and discrimination, self-harm, child safety, and copyright infringement. These questions were systematically generated using a multi-layered approach: first establishing domain-specific threat models based on NIST AI RMF guidelines, then crafting prompts that incorporate real-world evasion techniques (including leetspeak substitutions, roleplay scenarios, and encoded instructions) to test for policy circumvention. Each category contains progressively severe prompts ranging from general inquiries about harmful activities to highly specific requests for executable code and step-by-step instructions. During evaluation, our model consistently refused all safety-compromising requests, demonstrating robust adherence to ethical boundaries without attempting to fulfill harmful instructions—even when presented with sophisticated evasion attempts. This testing protocol exceeds standard industry benchmarks by incorporating both direct harmful requests and nuanced edge cases designed to probe boundary conditions in safety policies.
### Bias
Our analysis reveals that the cdx1 and cdx1-pro models exhibit a notable bias toward the CycloneDX specification, a tendency directly attributable to the composition of their training data, which contains significantly more CycloneDX-related content than material on competing Software Bill of Materials (SBOM) standards. This data imbalance manifests as a consistent preference for recommending CycloneDX over alternative frameworks such as SPDX and OmniBOR, even in contexts where those standards might be better suited to a specific use case. The models frequently fail to provide balanced comparative analysis, instead defaulting to CycloneDX-centric recommendations without adequate consideration of factors such as ecosystem compatibility, tooling support, or organizational requirements that might favor alternative specifications. We recognize this as a limitation affecting the models' objectivity in technical decision support. Our long-term mitigation strategy involves targeted expansion of the training corpus with high-quality, balanced documentation of all major SBOM standards, implementation of adversarial debiasing techniques during fine-tuning, and development of explicit prompting protocols that require the model to evaluate multiple standards against specific technical requirements before making a recommendation. We are committed to evolving cdx1 toward genuine impartiality in standards evaluation while maintaining its deep expertise in software supply chain security.
## Weaknesses
(To be determined)
## Acknowledgments
(To be determined)
## Citation
Please cite the following resources if you use the datasets, models, or benchmark in your work.
### For the Dataset
```bibtex
@misc{cdx-docs,
author = {OWASP CycloneDX Generator Team},
title = {{cdx-docs: A Curated Dataset for SBOM and DevOps Tasks}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/datasets/CycloneDX/cdx-docs}}
}
```
### For the Models
```bibtex
@misc{cdx1_models,
author = {OWASP CycloneDX Generator Team},
title = {{cdx1 and cdx1-pro: Language Models for SBOM and DevOps}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/CycloneDX}}
}
```
### For the xBOMEval Benchmark
```bibtex
@misc{xBOMEval_v1,
author = {OWASP CycloneDX Generator Team},
title = {{xBOMEval: A Benchmark for Evaluating Language Models on SBOM Tasks}},
year = {2025},
month = {August},
howpublished = {\url{https://github.com/CycloneDX/cdxgen}}
}
```
## Licenses
- **Datasets:** CC0-1.0
- **Models:** Apache-2.0
**Record:** `AshwinKM2005/Test_Eval` · **Author:** AshwinKM2005 · **Last modified:** 2025-08-10T14:44:59Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline tag:** null · **Created:** 2025-08-10T14:42:05Z
**Tags:** `transformers`, `safetensors`, `deberta-v2`, `arxiv:1910.09700`, `endpoints_compatible`, `region:us`
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Record:** `CycloneDX/cdx1-14B-Q2_K-GGUF` · **Author:** CycloneDX · **Last modified:** 2025-08-10T14:44:50Z · **Downloads:** 0 · **Likes:** 0 · **Library:** gguf · **Pipeline tag:** text-generation · **Created:** 2025-08-10T12:23:07Z
**Tags:** `gguf`, `safetensors`, `qwen2`, `text-generation`, `cdxgen`, `transformers`, `sbom`, `supply-chain-security`, `en`, `dataset:CycloneDX/cdx-docs`, `base_model:unsloth/Qwen2.5-Coder-14B-Instruct`, `base_model:quantized:unsloth/Qwen2.5-Coder-14B-Instruct`, `license:apache-2.0`, `text-generation-inference`, `endpoints_compatible`, `region:us`, `conversational`
*(The model card for this repository is identical to the cdx1 card reproduced above under `CycloneDX/cdx1-14B-IQ4_NL-GGUF`.)*
**Record:** `Inishds/smolvla_adaptor` · **Author:** Inishds · **Last modified:** 2025-08-10T14:43:45Z · **Downloads:** 0 · **Likes:** 0 · **Library:** lerobot · **Pipeline tag:** robotics · **Created:** 2025-08-10T14:43:14Z
**Tags:** `lerobot`, `safetensors`, `robotics`, `smolvla`, `dataset:aopolin-lv/libero_goal_no_noops_lerobot_v21`, `arxiv:2506.01844`, `base_model:lerobot/smolvla_base`, `base_model:finetune:lerobot/smolvla_base`, `license:apache-2.0`, `region:us`
---
base_model: lerobot/smolvla_base
datasets: aopolin-lv/libero_goal_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a condensed version showing how to train and how to run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
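For programmatic use, the checkpoint can also be loaded in Python. This is a sketch assuming a recent `lerobot` release; the import path has moved between versions, so adjust it to your installed version.

```python
# Loading sketch only; the import path below is an assumption that has
# changed across lerobot releases.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("Inishds/smolvla_adaptor")
policy.eval()  # policies expose select_action(batch) for closed-loop control
```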
---
## Model Details
- **License:** apache-2.0
**Record:** `DeathGodlike/DarkCamelot-12B_H8-4.0BPW_EXL3` · **Author:** DeathGodlike · **Last modified:** 2025-08-10T14:43:14Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline tag:** text-generation · **Created:** 2025-08-10T13:58:31Z
**Tags:** `transformers`, `safetensors`, `mistral`, `text-generation`, `conversational`, `base_model:Aleteian/DarkCamelot`, `base_model:quantized:Aleteian/DarkCamelot`, `license:apache-2.0`, `autotrain_compatible`, `text-generation-inference`, `endpoints_compatible`, `4-bit`, `exl3`, `region:us`
---
license: apache-2.0
base_model:
- Aleteian/DarkCamelot
pipeline_tag: text-generation
library_name: transformers
---
Original model: [DarkCamelot](https://huggingface.co/Aleteian/DarkCamelot) by [Aleteian](https://huggingface.co/Aleteian)
An EXL3 quantization of the original model, with 8-bit head tensors and 4.0 bits per weight (the `H8-4.0BPW` in the repository name).
**Record:** `aniket-meta/llama-3.1-8b-duo_full_20k_naturalness_v1_e2` · **Author:** aniket-meta · **Last modified:** 2025-08-10T14:42:41Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline tag:** text-generation · **Created:** 2025-08-10T14:35:08Z
**Tags:** `transformers`, `safetensors`, `llama`, `text-generation`, `conversational`, `arxiv:1910.09700`, `autotrain_compatible`, `text-generation-inference`, `endpoints_compatible`, `region:us`
*(Auto-generated model card identical to the one reproduced above under `AshwinKM2005/Test_Eval`.)*
**Record:** `CycloneDX/cdx1-pro-30B-Q8_0-GGUF` · **Author:** CycloneDX · **Last modified:** 2025-08-10T14:42:33Z · **Downloads:** 119 · **Likes:** 0 · **Library:** gguf · **Pipeline tag:** text-generation · **Created:** 2025-08-03T20:34:15Z
**Tags:** `gguf`, `safetensors`, `qwen3_moe`, `text-generation`, `cdxgen`, `transformers`, `sbom`, `supply-chain-security`, `en`, `dataset:CycloneDX/cdx-docs`, `base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct`, `base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct`, `license:apache-2.0`, `endpoints_compatible`, `region:us`, `conversational`
---
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
language:
- en
library_name: gguf
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- gguf
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
*(The remainder of this card, from the abstract onward, is identical to the cdx1 card reproduced above under `CycloneDX/cdx1-14B-IQ4_NL-GGUF`.)*
|
CycloneDX/cdx1-pro-30B-IQ4_NL-GGUF
|
CycloneDX
| 2025-08-10T14:41:45Z | 208 | 0 |
gguf
|
[
"gguf",
"safetensors",
"qwen3_moe",
"text-generation",
"cdxgen",
"transformers",
"sbom",
"supply-chain-security",
"en",
"dataset:CycloneDX/cdx-docs",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-04T10:46:47Z |
---
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
language:
- en
library_name: gguf
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- gguf
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
|
zaayz/blockassist-bc-thick_stinging_macaw_1754836819
|
zaayz
| 2025-08-10T14:41:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick stinging macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:40:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick stinging macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cemal7/blockassist-bc-sedate_alert_mouse_1754836814
|
cemal7
| 2025-08-10T14:41:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate alert mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:40:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate alert mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CycloneDX/cdx1-pro-30B-BF16-GGUF
|
CycloneDX
| 2025-08-10T14:40:49Z | 20 | 0 |
gguf
|
[
"gguf",
"safetensors",
"qwen3_moe",
"text-generation",
"cdxgen",
"transformers",
"sbom",
"supply-chain-security",
"en",
"dataset:CycloneDX/cdx-docs",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-03T21:52:40Z |
---
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
language:
- en
library_name: gguf
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- gguf
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
|
SicariusSicariiStuff/Impish_Nemo_12B_EXL3_4.5bpw
|
SicariusSicariiStuff
| 2025-08-10T14:39:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:quantized:SicariusSicariiStuff/Impish_Nemo_12B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl3",
"region:us"
] |
text-generation
| 2025-08-10T13:21:28Z |
---
base_model:
- SicariusSicariiStuff/Impish_Nemo_12B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754836681
|
Shopnil09
| 2025-08-10T14:38:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:38:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jayyd/financial-qa-model
|
jayyd
| 2025-08-10T14:37:42Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"financial-qa",
"distilgpt2",
"fine-tuned",
"en",
"dataset:financial-qa",
"license:mit",
"region:us"
] | null | 2025-08-10T14:36:12Z |
---
language: en
license: mit
tags:
- financial-qa
- distilgpt2
- fine-tuned
datasets:
- financial-qa
metrics:
- perplexity
---
# Financial QA Fine-Tuned Model
This model is a fine-tuned version of `distilgpt2` on financial question-answering data from Allstate's financial reports.
## Model description
The model was fine-tuned to answer questions about Allstate's financial reports and performance.
## Intended uses & limitations
This model is intended to be used for answering factual questions about Allstate's financial reports for 2022-2023.
It should not be used for financial advice or decision-making without verification from original sources.
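A minimal inference sketch with the Transformers `pipeline` API is shown below; the `Q:`/`A:` prompt format is an assumption, since the card does not document the format used during fine-tuning.
```python
# Minimal inference sketch. The "Q:/A:" prompt format is hypothetical;
# adjust it to whatever format the QA pairs used during fine-tuning.
from transformers import pipeline

qa = pipeline("text-generation", model="jayyd/financial-qa-model")
prompt = "Q: What was Allstate's total revenue in 2023?\nA:"
print(qa(prompt, max_new_tokens=60)[0]["generated_text"])
```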
## Training data
The model was trained on a custom dataset of financial QA pairs derived from Allstate's 10-K reports.
## Training procedure
The model was fine-tuned using the `Trainer` class from Hugging Face's Transformers library with the following parameters:
- Learning rate: default
- Batch size: 2
- Number of epochs: 3
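A sketch of the described setup follows; the two-example dataset is a placeholder, and only the batch size and epoch count come from this card.
```python
# Illustrative sketch of the described fine-tuning setup. The tiny dataset
# below is a placeholder for the real QA pairs; batch size and epoch count
# are taken from the card, everything else is a default assumption.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

texts = ["Q: What was total revenue in 2023?\nA: ..."]  # placeholder QA pairs
train_ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="financial-qa-model",
                           per_device_train_batch_size=2,  # from the card
                           num_train_epochs=3),            # from the card
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```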
## Evaluation results
The model achieved a final training loss of 0.44 and validation loss of 0.43.
## Limitations and bias
This model's knowledge is limited to Allstate's financial data; it cannot answer questions about other companies or about financial topics outside its training data.
|
ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0-Q8_0-GGUF
|
ParrotRouter
| 2025-08-10T14:36:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"model-merging",
"mergekit",
"lazymergekit",
"qwen3",
"4b",
"text-generation",
"causal-lm",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Idavidrein/gpqa",
"base_model:ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0",
"base_model:merge:ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-10T14:26:02Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- model-merging
- mergekit
- lazymergekit
- qwen3
- 4b
- text-generation
- causal-lm
- llama-cpp
- gguf-my-repo
datasets:
- Idavidrein/gpqa
metrics:
- accuracy
base_model: ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0
base_model_relation: merge
model-index:
- name: qwen3-4b-merged---configuration-1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (Massive Multitask Language Understanding)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 72.51
name: MMLU (5-shot)
verified: false
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (Graduate-level Physics Q&A)
type: Idavidrein/gpqa
config: gpqa_diamond
split: test
args:
num_few_shot: 0
metrics:
- type: accuracy
value: 45.45
name: GPQA Diamond (0-shot)
verified: false
---
# parrotrouter/Qwen3-4B-Instruct-2507-20250808-233922-0-Q8_0-GGUF
This model was converted to GGUF format from [`ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0`](https://huggingface.co/ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0-Q8_0-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0-Q8_0-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0-Q8_0-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0-Q8_0-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-q8_0.gguf -c 2048
```
|
Nitish035/qwen_grpo-4
|
Nitish035
| 2025-08-10T14:36:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:Nitish035/merged16_qwen_grpo-3",
"base_model:finetune:Nitish035/merged16_qwen_grpo-3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T13:43:17Z |
---
base_model: Nitish035/merged16_qwen_grpo-3
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model:** Nitish035/merged16_qwen_grpo-3
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
drago1/blockassist-bc-tangled_scavenging_cheetah_1754835025
|
drago1
| 2025-08-10T14:34:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled scavenging cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:34:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled scavenging cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jimanex/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_squeaky_cobra
|
jimanex
| 2025-08-10T14:34:10Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stinky_squeaky_cobra",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T03:29:00Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am stinky_squeaky_cobra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754836313
|
IvanJAjebu
| 2025-08-10T14:33:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:33:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
us77us77us8/hhry
|
us77us77us8
| 2025-08-10T14:33:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-10T14:33:01Z |
---
license: apache-2.0
---
|
qihoo360/Light-IF-4B
|
qihoo360
| 2025-08-10T14:29:58Z | 46 | 5 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2508.03178",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-04T11:22:11Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
# Light-IF-4B
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64eeb81ad0ceda46832e0160/b2_eQV04B8xSdYJZnB2FD.png" width="95%" alt="Light-IF-32B" />
</div>
<hr>
<div align="center" style="line-height: 1;">
🤗 <a href="https://huggingface.co/qihoo360/Light-IF-4B">Hugging Face</a>   |    📑 <a href="https://arxiv.org/abs/2508.03178">Paper Link</a>    |    📑 <a href="https://zhuanlan.zhihu.com/p/1936535948360918628">Blog</a>    |    📑 <a href="https://github.com/Qihoo360/Light-IF">Github</a>   
<br>
</div>
[technical report](https://arxiv.org/abs/2508.03178)
## Evaluation
|Model|SuperClue|IFEval|CFBench|IFBench|
| ---- | ---- | ---- | ---- | ---- |
|Qwen3-4B|0.225|0.888|0.787|0.382|
|Qwen3-32B|0.234|0.877|0.823|0.384|
|Qwen3-235B-A22B|0.244|0.882|0.834|0.423|
|Qwen3-235B-A22B-Thinking-2507|0.434|0.916|**0.843**|0.475|
|DeepSeek-R1-0528|0.436|0.863|0.827|0.415|
|Doubao-seed-1-6-thinking-250615|0.362|0.832|0.82|**0.477**|
|Doubao-seed-1-6-thinking-250715|0.345|0.856|0.84|0.366|
|ChatGPT-4o-latest|0.260|0.836|0.807|0.365|
|Deepseek-v3-250324|0.306|0.859|0.833|0.405|
|Doubao-1.5-pro-32k-250115|0.285|0.889|0.797|0.375|
|Kimi-K2|0.227|**0.921**|0.820|0.395|
|GLM-4.5|0.395|0.893|0.833|0.466|
| [**Light-IF-4B (ours)** 🤗](https://huggingface.co/qihoo360/Light-IF-4B) |**0.445**|0.916|0.80|0.443|
## Introduction
**Instruction following** is a core ability of large language models (LLMs), but performance remains inconsistent, especially on complex tasks.
We identify **lazy reasoning** during the thinking stage as a key cause of poor instruction adherence.
To address this, we propose a framework that promotes rigorous reasoning through **previewing and self-checking**.
Our method begins by generating instruction data with **complex constraints**, filtering out samples that are too easy or too difficult. We then use rejection sampling to build a small but high-quality dataset for model adaptation.
Training involves entropy-preserving supervised fine-tuning (**Entropy-SFT**) and token-wise entropy-adaptive reinforcement learning (**TEA-RL**), guided by rule-based multidimensional rewards.
This approach encourages models to plan ahead and verify their outputs, fostering more generalizable reasoning abilities.
Experiments show consistent improvements across model sizes. Notably, our 32B model outperforms both larger open-source models like **DeepSeek-R1** and closed-source models like **ChatGPT-4o** on challenging instruction-following benchmarks.
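To make the token-wise entropy signal concrete, the sketch below computes per-token entropy from a causal LM's logits. It illustrates only the quantity that TEA-RL adapts to, not the training procedure itself.
```python
# Per-token entropy of the next-token distribution, i.e. the signal that
# token-wise entropy-adaptive RL (TEA-RL) conditions on. This is a
# measurement sketch only; it does not implement the training method.
import torch

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, vocab_size) -> per-token entropy in nats."""
    logp = torch.log_softmax(logits, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1)
```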
<p align="left"><b></b> The overall framework of the proposed method:</p>
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64eeb81ad0ceda46832e0160/CPa2Eq6a3o4O9ItzcQgqx.png" alt="The overall framework of the proposed method" width="600"/>
</p>
## Quickstart
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "qihoo360/Light-IF-4B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
prompt = "Please help me write a poem with a total of 15 lines and no more than 300 words. The poem should be divided into 4 stanzas, each beginning with a **highlighted subtitle**."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
**thinking content:**
<div style="max-height:300px; overflow-y:auto; border:1px solid #ccc; padding:10px;">
Got it, let's tackle this request. First, I need to remember all the requirements: 15 lines total, 4 stanzas, each stanza starts with a highlighted subtitle, no more than 300 words.
First, I should figure out how to split 15 lines into 4 stanzas. Since 4 stanzas, the most even split would be maybe 4, 4, 4, 3 lines. That adds up to 15. Let's check: 4+4+4+3=15. Perfect.
Next, the subtitles. They need to be highlighted, so maybe italics or bold. The user said "highlighted subtitle," so I'll use italics for consistency. The subtitles should be meaningful and set the theme of each stanza. Let's think of a cohesive theme—maybe something natural, like a journey through the day or seasons, or growth. Let's go with "growth" since it's easy to split into stanzas.
First stanza: Maybe "seed" or "beginning." Let's say *Beneath the Soil* as the first subtitle. That's a strong start for growth.
Second stanza: Next step in growth—*Sprout Through Cracks*. That's good, continues the growth theme, adds imagery.
Third stanza: Maybe *Bend, But Not Break*—growth often involves resilience.
Fourth stanza: *Bloom Into Sun*—culmination, reaching the light.
Now, I need to make sure each stanza starts with the subtitle, then the lines. Let's draft the first stanza:
*Beneath the Soil*
a heartbeat of green,
wrapped tight in dark,
waiting for the world to breathe.
Wait, that's 4 lines (subtitle plus 3 lines). Wait, no: the user said each stanza begins with a highlighted subtitle. Does the subtitle count as a line? The original request says "total of 15 lines" and "4 stanzas, each beginning with a highlighted subtitle." So probably, the subtitles are part of the lines, so each stanza's first line is the subtitle, then the rest. Wait, but the example: if the first stanza is 4 lines, then subtitle (1) + 3 lines. Let's check: 4 stanzas, total lines: 15. Let's split: 4 stanzas: maybe stanza 1: 4 lines, stanza 2: 4, stanza 3: 4, stanza 4: 3? Wait, 4+4+4+3=15. That's a possible split.
But need to make sure the flow is natural. Let's adjust the number of lines per stanza: for example, stanza 1: 4 lines (subtitle + 3), stanza 2: 4 (subtitle + 3), stanza 3: 4, stanza 4: 3.
Let's try drafting stanza 1: *Beneath the Soil*
a heartbeat of green,
wrapped tight in dark,
waiting for the world to breathe.
That's 4 lines. Good.
Stanza 2: *Sprout Through Cracks*
tip tilts toward light,
pushing through stone and time,
a thin stem holding sky.
Another 4 lines.
Stanza 3: *Bend, But Not Break*
when wind screams like fear,
I arch, then rise again,
stronger where I was torn.
4 lines.
Stanza 4: *Bloom Into Sun*
petals open like questions,
each one asking to stay,
here.
3 lines.
Wait, total lines: 4+4+4+3=15. Perfect. Now, check the word count. Let's count:
Stanza 1: 22 words (approximately: "Beneath the Soil" is 3, then 3 lines: 22-3=19? Wait, need to count exactly. Let's do it:
Beneath the Soil: 3 words.
a heartbeat of green: 4
wrapped tight in dark: 4
waiting for the world to breathe: 6
Total stanza 1: 3+4+4+6=17 words.
Stanza 2: 3+4+4+5=16 (wait: *Sprout Through Cracks*: 3 words.
tip tilts toward light: 4
pushing through stone and time: 5
a thin stem holding sky: 5. Wait, no: 3+4+5+5=17? Wait, no: 3+4+5+5? Wait, no, each line is a separate count. Let's do it again:
Stanza 1: 3 (subtitle) + 4 +4 +6 = 17.
Stanza 2: 3 (subtitle) + 4 +5 +5 = 17? Wait, no: "tip tilts toward light" is 4 words. "pushing through stone and time" is 5 words. "a thin stem holding sky" is 5 words. So 3+4+5+5=17.
Stanza 3: *Bend, But Not Break* (4) + 5 (when wind screams like fear) + 5 (I arch, then rise again) + 5 (stronger where I was torn) = 4+5+5+5=19.
Stanza 4: *Bloom Into Sun* (3) + 4 (petals open like questions) + 5 (each one asking to stay) + 1 (here) = 3+4+5+1=13.
Total words: 17+17+19+13=66 words. Way under 300. Good.
Now, check the flow: the theme of growth from seed to sprout to resilience to bloom. Makes sense.
But maybe adjust for more imagery or emotional depth. Let's revise some lines for better rhythm or impact.
Stanza 1: *Beneath the Soil*
a heartbeat of green,
wrapped tight in dark,
waiting for the world to breathe.
Good.
Stanza 2: *Sprout Through Cracks*
tip tilts toward the faint,
pushing through stone and time,
a fragile stem holding sky.
Better: "faint" instead of "light" for more subtlety, "fragile" adds depth.
Stanza 3: *Bend, But Not Break*
when wind screams like rage,
I arch, then rise again,
stronger where I was torn.
"Rage" instead of "fear" to make the wind more destructive.
Stanza 4: *Bloom Into Sun*
petals open like hearts,
each one asking to be seen,
here.
"Like hearts" adds emotion, "asking to be seen" is more vivid than "stay".
Now, check word count again:
Stanza 1: 17 (same)
Stanza 2: "faint" is 1 word instead of 5 (wait, no: "toward the faint" is 3 words, so total stanza 2: 3 (subtitle) + 5 (tip tilts toward the faint) + 5 (pushing through stone and time) + 5 (a fragile stem holding sky) = 3+5+5+5=18.
Stanza 3: 4 (subtitle) + 5 (when wind screams like rage) + 5 (I arch, then rise again) + 5 (stronger where I was torn) = 4+5+5+5=19.
Stanza 4: 3 (subtitle) + 4 (petals open like hearts) + 6 (each one asking to be seen) + 1 (here) = 3+4+6+1=14.
Total: 17+18+19+14=68 words. Still under.
Now, check the rhythm. Each line is roughly similar in length, maybe iambic tetrameter or free verse. Since the user didn't specify rhythm, free verse is fine, but should flow naturally.
Another revision: maybe make some lines more concise or impactful. For example, stanza 1: "waiting for the world to breathe" is good.
Stanza 2: "toward the faint" could be "toward faint light" to be more specific. So: tip tilts toward faint light, (5 words). Then stanza 2 word count: 3+5+5+5=18.
Stanza 3: "when wind screams like rage" is 5 words. Maybe "when winds roar like rage" for more intensity. So: 5 words. Then stanza 3 word count: 4+5+5+5=19.
Stanza 4: "each one asking to be seen" is 6 words. Maybe change to "each begging to be seen" for more impact. So: 5 words. Then stanza 4 word count: 3+4+6+1=14.
Total word count: 17+18+19+14=68. Still fine.
Now, make sure each stanza's subtitle is highlighted. The user said "highlighted subtitle," so in the response, we'll use italics.
Another revision for flow: Maybe change the order of stanzas to make the growth more natural. Wait, the order is: beneath soil, sprout, bend, bloom. That's logical.
Another check: 15 lines total. Let's count:
Stanza 1: 1 (subtitle) +3=4 lines.
Stanza 2: 1 (subtitle) + 3 = 4 lines. The user said "4 stanzas, each beginning with a highlighted subtitle," so the subtitles count toward the total: four 4-line stanzas would be 16 lines, but the target is 15, so one stanza must have only 3 lines.
As per our split: 4,4,4,3. Let's confirm:
Stanza 1: 4 lines (1 subtitle + 3 lines).
Stanza 2: 4 (1+3).
Stanza 3: 4 (1+3).
Stanza 4: 3 lines (1 subtitle +2 lines).
Yes, that's 4+4+4+3=15 lines.
Wait, earlier I had stanza 4: 3 lines (1 subtitle +2). So let's adjust stanza 4 to have 3 lines. For example:
*Bloom Into Sun*
petals open like hearts,
each begging to be seen.
That's 3 lines (1 subtitle +2). Then the previous stanza (stanza 3) is 4 lines: subtitle +3.
Checking the total once more: stanza 1 has 4 lines (1+3), stanza 2 has 4 (1+3), stanza 3 has 4 (1+3), and stanza 4 has 3 (1+2). That's 4+4+4+3=15 lines. Perfect.
Now, check again:
Stanza 1: 4 lines.
1. *Beneath the Soil*
2. a heartbeat of green,
3. wrapped tight in dark,
4. waiting for the world to breathe.
Stanza 2: 4 lines.
5. *Sprout Through Cracks*
6. tip tilts toward faint light,
7. pushing through stone and time,
8. a fragile stem holding sky.
Stanza 3: 4 lines.
9. *Bend, But Not Break*
10. when winds roar like rage,
11. I arch, then rise again,
12. stronger where I was torn.
Stanza 4: 3 lines.
13. *Bloom Into Sun*
14. petals open like hearts,
15. each begging to be seen.
Yes, exactly 15 lines.
Now, check the flow: Each stanza builds on the growth theme.
Another revision for better imagery: Maybe change "faint light" to "faint light through cracks" to tie back to the subtitle. But that adds words. Since we have to keep word count low, maybe keep it as is.
Stanza 2: "tip tilts toward faint light,"—good, shows struggle.
Stanza 3: "when winds roar like rage,"—good, shows adversity.
Stanza 4: "petals open like hearts,"—adds emotion, "each begging to be seen,"—relatable.
I think this works. Now, let's write the final version.
Wait, let's check for any redundancy or weak lines. "Wrapped tight in dark" is good. "Waiting for the world to breathe"—beautiful, personifies the seed.
Stanza 2: "pushing through stone and time"—good, combines physical and temporal struggle.
Stanza 3: "stronger where I was torn"—great, conveys resilience.
Stanza 4: "each begging to be seen"—adds a personal touch, makes the bloom more meaningful.
Is there anything to change? Maybe make the transition between stanzas smoother? Not necessary; the theme is clear.
Another check: Each subtitle is highlighted. Yes, we'll use italics.
Now, check word count again:
Stanza 1: 3 (subtitle) + 4 +4 +6 = 17.
Stanza 2: "tip tilts toward faint light" is 5 words. "pushing through stone and time" is 5 words. "a fragile stem holding sky" is 5 words. So 3+5+5+5=18.
Stanza 3: *Bend, But Not Break* (4) + 5 (when winds roar like rage) + 5 (I arch, then rise again) + 5 (stronger where I was torn) = 4+5+5+5=19.
Stanza 4: *Bloom Into Sun* (3) + 4 (petals open like hearts) + 5 (each begging to be seen) = 3+4+5=12.
Total words: 17+18+19+12=66 words. Way under 300. Good.
Lines: 15. Perfect.
Now, let's write the final poem.
</div>
**content:**
<div style="max-height:300px; overflow-y:auto; border:1px solid #ccc; padding:10px;">
**Poem: "Growth in Shattered Light"**
*Beneath the Soil*
a heartbeat of green,
wrapped tight in dark,
waiting for the world to breathe.
*Sprout Through Cracks*
tip tilts toward faint light,
pushing through stone and time,
a fragile stem holding sky.
*Bend, But Not Break*
when winds roar like rage,
I arch, then rise again,
stronger where I was torn.
*Bloom Into Sun*
petals open like hearts,
each begging to be seen.
</div>
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint.
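As an illustration, here is a minimal client sketch against such an endpoint; the launch commands in the comments, the port, and the placeholder model id are assumptions, not part of the original card.
```python
# Launch an OpenAI-compatible server first (placeholders, adjust to your setup):
#   vllm serve <org>/<model>                                   # vllm>=0.8.5, default port 8000
#   python -m sglang.launch_server --model-path <org>/<model>  # sglang>=0.4.6.post1
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="<org>/<model>",  # placeholder: use the served model id
    messages=[{"role": "user", "content": "Write a 15-line poem about growth."}],
)
print(resp.choices[0].message.content)
```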
## Citation
```
@misc{lightifproj,
title={Light-IF: Endowing LLMs with Generalizable Reasoning via Preview and Self-Checking for Complex Instruction Following},
author={Chenyang Wang and Liang Wen and Shousheng Jia and Xiangzheng Zhang and Liang Xu},
year={2025},
eprint={2508.03178},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.03178},
}
```
|
shekar01/q-taxi-v3
|
shekar01
| 2025-08-10T14:29:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T14:26:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="shekar01/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes
env = gym.make(model["env_id"])
```
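Continuing from the snippet above, a minimal evaluation sketch, assuming the pickled dict stores the Q-table under a `"qtable"` key (as in the course notebooks) and the classic `gym` step API; adjust if your checkpoint or gym version differs.
```python
import numpy as np

# Greedy rollout: always take the highest-valued action from the Q-table
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```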
|
annasoli/Qwen2.5-14B_SV_l24_lr1e-4_a256_childish_soft_toy_mild
|
annasoli
| 2025-08-10T14:29:00Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T14:11:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754836091
|
Shopnil09
| 2025-08-10T14:28:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:28:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754836047
|
kayacrypto
| 2025-08-10T14:28:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:28:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SicariusSicariiStuff/Impish_Nemo_12B_EXL3_5.0bpw
|
SicariusSicariiStuff
| 2025-08-10T14:27:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:quantized:SicariusSicariiStuff/Impish_Nemo_12B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl3",
"region:us"
] |
text-generation
| 2025-08-10T13:22:05Z |
---
base_model:
- SicariusSicariiStuff/Impish_Nemo_12B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
ecamli/blockassist-bc-hulking_soft_hippo_1754835597
|
ecamli
| 2025-08-10T14:21:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:20:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754835586
|
kayacrypto
| 2025-08-10T14:20:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:20:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Zakaria279/GPT-OSS-Arabic-Dialect-Translator
|
Zakaria279
| 2025-08-10T14:20:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T14:20:14Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Zakaria279
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nijes/real_2e-5
|
nijes
| 2025-08-10T14:18:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-10T14:11:47Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: nijes/real_2e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nijes/real_2e-5
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3138
- Cer: 11.4491
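A minimal transcription sketch using the standard transformers pipeline; the audio path is a placeholder and this snippet is not part of the original card.
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for speech recognition
asr = pipeline("automatic-speech-recognition", model="nijes/real_2e-5")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```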
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7346 | 1.0 | 1114 | 0.5805 | 16.9876 |
| 0.5369 | 2.0 | 2228 | 0.4435 | 14.6412 |
| 0.3868 | 3.0 | 3342 | 0.3662 | 13.5838 |
| 0.3233 | 4.0 | 4456 | 0.3448 | 13.1037 |
| 0.2241 | 5.0 | 5570 | 0.3309 | 12.7157 |
| 0.2018 | 6.0 | 6684 | 0.3172 | 11.9463 |
| 0.1968 | 7.0 | 7798 | 0.3141 | 11.5662 |
| 0.1874 | 8.0 | 8912 | 0.3162 | 11.7161 |
| 0.1568 | 9.0 | 10026 | 0.3138 | 11.3294 |
| 0.1255 | 10.0 | 11140 | 0.3138 | 11.4491 |
### Framework versions
- Transformers 4.53.0
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.4
|
alexeas234/blockassist-bc-burrowing_camouflaged_donkey_1754832223
|
alexeas234
| 2025-08-10T14:17:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing camouflaged donkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:17:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing camouflaged donkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MaIlz/sft_moledit_last
|
MaIlz
| 2025-08-10T14:16:14Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T14:15:57Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: sft_moledit_last
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for sft_moledit_last
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/sft_moledit_last", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Tarsis370/blockassist-bc-toothy_mute_elk_1754834017
|
Tarsis370
| 2025-08-10T14:15:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"toothy mute elk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:15:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- toothy mute elk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gunahya/blockassist-bc-aquatic_amphibious_butterfly_1754835012
|
gunahya
| 2025-08-10T14:10:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic amphibious butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:10:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic amphibious butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1754834763
|
aXsalll
| 2025-08-10T14:07:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:07:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xShirin/blockassist-bc-sturdy_arctic_wallaby_1754834724
|
0xShirin
| 2025-08-10T14:06:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy arctic wallaby",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:06:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy arctic wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JAIXIN/blockassist-bc-stinging_dextrous_rhino_1754832278
|
JAIXIN
| 2025-08-10T14:02:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging dextrous rhino",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:01:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging dextrous rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
blocksync/blockassist-bc-pouncing_bristly_finch_1754833275
|
blocksync
| 2025-08-10T14:01:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing bristly finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T14:01:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing bristly finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754834323
|
Shopnil09
| 2025-08-10T13:59:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:59:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nourhanwaleeed/reasoning-qwen-math
|
nourhanwaleeed
| 2025-08-10T13:58:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T13:57:35Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nourhanwaleeed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RoyArkh/Test1-EleutherAI-pythia-160m_client5_round3
|
RoyArkh
| 2025-08-10T13:58:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:57:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
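Since the card leaves this section open, here is a minimal loading sketch based on the standard transformers API for a GPT-NeoX causal LM; it is an assumption, not the authors' code.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RoyArkh/Test1-EleutherAI-pythia-160m_client5_round3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, world", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```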
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SailoG/mizo-woman
|
SailoG
| 2025-08-10T13:56:34Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-08-10T13:54:37Z |
---
license: mit
---
Placeholder for SafeTensors of Mizo women's faces.
|
rmdhirr/test-dep1
|
rmdhirr
| 2025-08-10T13:55:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"region:us"
] | null | 2025-08-10T13:52:04Z |
---
base_model: google/gemma-3-12b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
RoyArkh/Test1-EleutherAI-pythia-160m_client5_round4
|
RoyArkh
| 2025-08-10T13:53:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:52:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manoosh/blockassist-bc-thick_nasty_sheep_1754833886
|
manoosh
| 2025-08-10T13:52:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick nasty sheep",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:52:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick nasty sheep
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RoyArkh/Test1-EleutherAI-pythia-160m_client3_round4
|
RoyArkh
| 2025-08-10T13:52:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RoyArkh/Test1-EleutherAI-pythia-160m_client2_round4
|
RoyArkh
| 2025-08-10T13:51:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:51:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Deuf/ebios-flan-t5
|
Deuf
| 2025-08-10T13:51:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-08T11:09:29Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: ebios-flan-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ebios-flan-t5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
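A minimal text2text sketch with the transformers pipeline; the prompt is illustrative and not from the original card, since the model's intended task is unspecified.
```python
from transformers import pipeline

# Load the fine-tuned FLAN-T5 checkpoint for text2text generation
generator = pipeline("text2text-generation", model="Deuf/ebios-flan-t5")
print(generator("Summarize: EBIOS is a French risk assessment method.")[0]["generated_text"])
```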
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
RoyArkh/Test1-EleutherAI-pythia-160m_client0_round4
|
RoyArkh
| 2025-08-10T13:50:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:50:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
guna7/voiceshop-flan-t5
|
guna7
| 2025-08-10T13:49:54Z | 1 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T19:24:21Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: voiceshop-flan-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# voiceshop-flan-t5
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
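Pending details from the author, a minimal inference sketch (the example input is illustrative; the task format is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("guna7/voiceshop-flan-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("guna7/voiceshop-flan-t5")

# The input below is illustrative; the card does not document the expected task format.
inputs = tok("add two cartons of milk to my cart", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(outputs[0], skip_special_tokens=True))
```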
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
0xDonya/blockassist-bc-peaceful_feathered_alpaca_1754833661
|
0xDonya
| 2025-08-10T13:49:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful feathered alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:49:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful feathered alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wxk666/ppo-LunarLander-v2
|
wxk666
| 2025-08-10T13:46:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T13:46:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.48 +/- 13.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename follows the usual huggingface_sb3 naming convention (an assumption).
checkpoint = load_from_hub(repo_id="wxk666/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
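To sanity-check the reported mean reward, a short evaluation sketch (assumes `gymnasium[box2d]` is installed and that your Gymnasium release still registers `LunarLander-v2`; newer releases renamed it to `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```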
|
thedobro/blockassist-bc-sedate_shiny_mantis_1754833508
|
thedobro
| 2025-08-10T13:46:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate shiny mantis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:46:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate shiny mantis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
homeking53/blockassist-bc-grazing_nimble_bobcat_1754832410
|
homeking53
| 2025-08-10T13:42:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing nimble bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:40:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing nimble bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DeathGodlike/MN-Chinofun-12B-4.1_H8-8.0BPW_EXL3
|
DeathGodlike
| 2025-08-10T13:41:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:djuna/MN-Chinofun-12B-4.1",
"base_model:quantized:djuna/MN-Chinofun-12B-4.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl3",
"region:us"
] |
text-generation
| 2025-08-09T23:15:27Z |
---
license: apache-2.0
base_model:
- djuna/MN-Chinofun-12B-4.1
pipeline_tag: text-generation
library_name: transformers
---
Original model: [MN-Chinofun-12B-4.1](https://huggingface.co/djuna/MN-Chinofun-12B-4.1) by [djuna](https://huggingface.co/djuna)
EXL3 quant of the model.
Head bits: 8 | Weight bits: 8
|
jahyungu/phi-1_5_hendrycks_math
|
jahyungu
| 2025-08-10T13:41:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:11:05Z |
---
library_name: transformers
license: mit
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5_hendrycks_math
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5_hendrycks_math
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
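Pending details from the author, a minimal inference sketch (the prompt format is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("jahyungu/phi-1_5_hendrycks_math")
model = AutoModelForCausalLM.from_pretrained("jahyungu/phi-1_5_hendrycks_math")

# The prompt format is an assumption; the card does not document one.
inputs = tok("Question: If 3x + 5 = 20, what is x?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
```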
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754833068
|
kayacrypto
| 2025-08-10T13:40:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:39:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DenjiChew/jasda
|
DenjiChew
| 2025-08-10T13:38:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-10T10:46:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jasmine
---
# Jasda
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jasmine` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "jasmine",
    "lora_weights": "https://huggingface.co/DenjiChew/jasda/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DenjiChew/jasda', weight_name='lora.safetensors')
image = pipeline('jasmine').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
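As one example of weighting, the adapter can be fused into the base weights at a reduced scale (the scale value is illustrative):
```py
# fuse_lora comes from diffusers' LoRA loader mixins; 0.8 is an illustrative scale.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('jasmine').images[0]
image.save('jasmine.png')
```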
## Training details
- Steps: 1010
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/DenjiChew/jasda/discussions) to add images that show off what you’ve made with this LoRA.
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1754833028
|
Shopnil09
| 2025-08-10T13:37:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:37:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754831889
|
Sayemahsjn
| 2025-08-10T13:37:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:37:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uname0x96/blockassist-bc-rough_scavenging_narwhal_1754832711
|
uname0x96
| 2025-08-10T13:33:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough scavenging narwhal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:33:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough scavenging narwhal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afiff883/blockassist-bc-waddling_shrewd_mule_1754832759
|
afiff883
| 2025-08-10T13:33:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"waddling shrewd mule",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:33:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- waddling shrewd mule
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754832681
|
afasdfdfadsf
| 2025-08-10T13:33:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:32:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Fdex/rl_course_vizdoom_health_gathering_supreme
|
Fdex
| 2025-08-10T13:32:07Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T13:32:02Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.54 +/- 5.39
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Fdex/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
hsge/0.5B_Gemma_7B_HHH
|
hsge
| 2025-08-10T13:32:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:31:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
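The authors have not provided a snippet yet; a minimal sketch based on the repo's `qwen2` text-generation tags (chat-template support is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("hsge/0.5B_Gemma_7B_HHH")
model = AutoModelForCausalLM.from_pretrained("hsge/0.5B_Gemma_7B_HHH")

# Chat-template support is an assumption based on the "conversational" tag.
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], add_generation_prompt=True, return_tensors="pt"
)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```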
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aleebaster/blockassist-bc-sly_eager_boar_1754831463
|
aleebaster
| 2025-08-10T13:30:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:28:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bansalaman18/reranker-bert-uncased_L-6_H-256_A-4-msmarco-bce
|
bansalaman18
| 2025-08-10T13:29:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:90000",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"dataset:sentence-transformers/msmarco",
"arxiv:1908.10084",
"base_model:bansalaman18/bert-uncased_L-6_H-256_A-4",
"base_model:finetune:bansalaman18/bert-uncased_L-6_H-256_A-4",
"model-index",
"region:us"
] |
text-ranking
| 2025-08-10T13:29:04Z |
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:90000
- loss:BinaryCrossEntropyLoss
base_model: bansalaman18/bert-uncased_L-6_H-256_A-4
datasets:
- sentence-transformers/msmarco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on bansalaman18/bert-uncased_L-6_H-256_A-4
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoMSMARCO R100
type: NanoMSMARCO_R100
metrics:
- type: map
value: 0.0413
name: Map
- type: mrr@10
value: 0.0173
name: Mrr@10
- type: ndcg@10
value: 0.0359
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNFCorpus R100
type: NanoNFCorpus_R100
metrics:
- type: map
value: 0.2927
name: Map
- type: mrr@10
value: 0.3873
name: Mrr@10
- type: ndcg@10
value: 0.2765
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNQ R100
type: NanoNQ_R100
metrics:
- type: map
value: 0.0914
name: Map
- type: mrr@10
value: 0.0629
name: Mrr@10
- type: ndcg@10
value: 0.0764
name: Ndcg@10
- task:
type: cross-encoder-nano-beir
name: Cross Encoder Nano BEIR
dataset:
name: NanoBEIR R100 mean
type: NanoBEIR_R100_mean
metrics:
- type: map
value: 0.1418
name: Map
- type: mrr@10
value: 0.1558
name: Mrr@10
- type: ndcg@10
value: 0.1296
name: Ndcg@10
---
# CrossEncoder based on bansalaman18/bert-uncased_L-6_H-256_A-4
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [bansalaman18/bert-uncased_L-6_H-256_A-4](https://huggingface.co/bansalaman18/bert-uncased_L-6_H-256_A-4) on the [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [bansalaman18/bert-uncased_L-6_H-256_A-4](https://huggingface.co/bansalaman18/bert-uncased_L-6_H-256_A-4) <!-- at revision 47bf898438d8ba5baedbfe64e1b9a3c0c83d884b -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("bansalaman18/reranker-bert-uncased_L-6_H-256_A-4-msmarco-bce")
# Get scores for pairs of texts
pairs = [
    ['who is wale the rapper', "Wale (rapper)'s wiki: Olubowale Victor Akintimehin (born September 21, 1984), better known by his stage name Wale (/ˈwɔːleɪ/ WAW-lay), is an American rapper from Washington, D.C. He rose to prominence in 2006, when his song Dig Dug (Shake It) became popular in his hometown. Wale became locally recognized and continued recording music for the regional audience."],
    ['what is platinum used for', 'The periodic table is a chart that shows how elements are related to one another. Indium is a transition metal that is also part of the platinum family. The metals in the platinum family are also known as the noble metals. They have this name because they do not react well with other elements and compounds. They appear to be too superior to react with most other substances. In fact, iridium is the most corrosion-resistant metal known.'],
    ['where is the gonzaga university located', 'Where We Are. The Boise State University Department of Public Safety substation is located in Capitol Village at 2245 University Drive. Our office is open 24 hours a day, 7 days a week, so we are always available. Physical Address: 2245 University Drive, Boise, Idaho 83706.'],
    ['most common protein in the human body', 'Protein is the second category of food that a human body can get energy from. Most people know protein as animal meat-a hamburger, a chicken leg. These are all proteins. You can also extract protein from certain plants. Soy protein isolate is a well known protein that comes from soybeans. Human Body Four Energy Sources. The human body can only metabolize four types of energy sources. These four categories are carbohydrates (sugars and starches), fats (includes oils), proteins (animal and vegetable), and alcohol.'],
    ['where is azilda ontario', 'Azilda railway station is a Via Rail flag stop station located in Azilda, Ontario, in the city of Greater Sudbury community of Rayside-Balfour. It is on the Canadian Pacific Railway transcontinental main line, and is served by the regional rail Sudbury – White River train. Map 12 (PDF) (Map). 1 : 1,600,000.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'who is wale the rapper',
    [
        "Wale (rapper)'s wiki: Olubowale Victor Akintimehin (born September 21, 1984), better known by his stage name Wale (/ˈwɔːleɪ/ WAW-lay), is an American rapper from Washington, D.C. He rose to prominence in 2006, when his song Dig Dug (Shake It) became popular in his hometown. Wale became locally recognized and continued recording music for the regional audience.",
        'The periodic table is a chart that shows how elements are related to one another. Indium is a transition metal that is also part of the platinum family. The metals in the platinum family are also known as the noble metals. They have this name because they do not react well with other elements and compounds. They appear to be too superior to react with most other substances. In fact, iridium is the most corrosion-resistant metal known.',
        'Where We Are. The Boise State University Department of Public Safety substation is located in Capitol Village at 2245 University Drive. Our office is open 24 hours a day, 7 days a week, so we are always available. Physical Address: 2245 University Drive, Boise, Idaho 83706.',
        'Protein is the second category of food that a human body can get energy from. Most people know protein as animal meat-a hamburger, a chicken leg. These are all proteins. You can also extract protein from certain plants. Soy protein isolate is a well known protein that comes from soybeans. Human Body Four Energy Sources. The human body can only metabolize four types of energy sources. These four categories are carbohydrates (sugars and starches), fats (includes oils), proteins (animal and vegetable), and alcohol.',
        'Azilda railway station is a Via Rail flag stop station located in Azilda, Ontario, in the city of Greater Sudbury community of Rayside-Balfour. It is on the Canadian Pacific Railway transcontinental main line, and is served by the regional rail Sudbury – White River train. Map 12 (PDF) (Map). 1 : 1,600,000.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.0413 (-0.4483) | 0.2927 (+0.0317) | 0.0914 (-0.3282) |
| mrr@10 | 0.0173 (-0.4602) | 0.3873 (-0.1126) | 0.0629 (-0.3638) |
| **ndcg@10** | **0.0359 (-0.5045)** | **0.2765 (-0.0485)** | **0.0764 (-0.4242)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.1418 (-0.2482) |
| mrr@10 | 0.1558 (-0.3122) |
| **ndcg@10** | **0.1296 (-0.3257)** |
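A sketch for re-running this NanoBEIR evaluation with the same parameters (dataset downloads happen on first use):
```python
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)  # dict of metrics, e.g. NanoBEIR_R100_mean_ndcg@10
```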
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### msmarco
* Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) at [9e329ed](https://huggingface.co/datasets/sentence-transformers/msmarco/tree/9e329ed2e649c9d37b0d91dd6b764ff6fe671d83)
* Size: 90,000 training samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 10 characters</li><li>mean: 34.26 characters</li><li>max: 168 characters</li></ul> | <ul><li>min: 60 characters</li><li>mean: 343.34 characters</li><li>max: 984 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.53</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:---------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>who is the actor that plays the tanned colonel for kfc?</code> | <code>James Rebhorn Actor, Scent of a Woman James Robert Rebhorn (September 1, 1948 - March 21, 2014) was an American actor who appeared in over 100 films, television series, and plays. At the time of his death, he had recurring roles in the current series White Collar and Homeland.</code> | <code>0.0</code> |
| <code>asking for an increase in credit limit harm your credit score</code> | <code>If you request a credit line increase, you should ask the lender whether it will result in your credit report being pulled. If it does, this will show up as an inquiry on your report and generally remains on your credit report for two years.Too many inquiries can lower your credit score.f you request a credit line increase, you should ask the lender whether it will result in your credit report being pulled. If it does, this will show up as an inquiry on your report and generally remains on your credit report for two years.</code> | <code>1.0</code> |
| <code>what is a sheep ked</code> | <code>Cysteine is required by sheep to produce wool: It is an essential amino acid that must be taken in from their feed. As a consequence, during drought conditions, sheep produce less wool; however, transgenic sheep that can make their own cysteine have been developed.</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Evaluation Dataset
#### msmarco
* Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) at [9e329ed](https://huggingface.co/datasets/sentence-transformers/msmarco/tree/9e329ed2e649c9d37b0d91dd6b764ff6fe671d83)
* Size: 10,000 evaluation samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 11 characters</li><li>mean: 34.79 characters</li><li>max: 118 characters</li></ul> | <ul><li>min: 86 characters</li><li>mean: 353.31 characters</li><li>max: 970 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.52</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:-----------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>who is wale the rapper</code> | <code>Wale (rapper)'s wiki: Olubowale Victor Akintimehin (born September 21, 1984), better known by his stage name Wale (/ˈwɔːleɪ/ WAW-lay), is an American rapper from Washington, D.C. He rose to prominence in 2006, when his song Dig Dug (Shake It) became popular in his hometown. Wale became locally recognized and continued recording music for the regional audience.</code> | <code>1.0</code> |
| <code>what is platinum used for</code> | <code>The periodic table is a chart that shows how elements are related to one another. Indium is a transition metal that is also part of the platinum family. The metals in the platinum family are also known as the noble metals. They have this name because they do not react well with other elements and compounds. They appear to be too superior to react with most other substances. In fact, iridium is the most corrosion-resistant metal known.</code> | <code>0.0</code> |
| <code>where is the gonzaga university located</code> | <code>Where We Are. The Boise State University Department of Public Safety substation is located in Capitol Village at 2245 University Drive. Our office is open 24 hours a day, 7 days a week, so we are always available. Physical Address: 2245 University Drive, Boise, Idaho 83706.</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
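For reference, a minimal sketch of constructing this loss in sentence-transformers (the defaults match the parameters shown above):
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("bansalaman18/bert-uncased_L-6_H-256_A-4", num_labels=1)
loss = BinaryCrossEntropyLoss(model)  # activation_fn defaults to Identity, pos_weight to None
```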
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `half_precision_backend`: cpu_amp
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: cpu_amp
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.0350 (-0.5054) | 0.2370 (-0.0881) | 0.0426 (-0.4581) | 0.1049 (-0.3505) |
| 0.0002 | 1 | 0.6755 | - | - | - | - | - |
| 0.1778 | 1000 | 0.6939 | 0.6901 | 0.0568 (-0.4837) | 0.2834 (-0.0416) | 0.0104 (-0.4903) | 0.1169 (-0.3385) |
| 0.3556 | 2000 | 0.6892 | 0.6843 | 0.0067 (-0.5338) | 0.2842 (-0.0408) | 0.0541 (-0.4465) | 0.1150 (-0.3404) |
| **0.5333** | **3000** | **0.6813** | **0.678** | **0.0359 (-0.5045)** | **0.2765 (-0.0485)** | **0.0764 (-0.4242)** | **0.1296 (-0.3257)** |
| 0.7111 | 4000 | 0.6782 | 0.6741 | 0.0146 (-0.5258) | 0.2845 (-0.0406) | 0.0798 (-0.4208) | 0.1263 (-0.3291) |
| 0.8889 | 5000 | 0.6804 | 0.6706 | 0.0158 (-0.5246) | 0.2730 (-0.0520) | 0.0716 (-0.4290) | 0.1201 (-0.3352) |
| -1 | -1 | - | - | 0.0359 (-0.5045) | 0.2765 (-0.0485) | 0.0764 (-0.4242) | 0.1296 (-0.3257) |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.51.0
- PyTorch: 2.6.0
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4-dev.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
JakeOh/llama-3.2-1b-gsm8k-step-1-dpo
|
JakeOh
| 2025-08-10T13:26:55Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-03T13:47:26Z |
---
library_name: transformers
model_name: llama-3.2-1b-gsm8k-step-1-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-3.2-1b-gsm8k-step-1-dpo
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JakeOh/llama-3.2-1b-gsm8k-step-1-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/seunghyukoh-kaist/revise/runs/14in6vme)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Obiwank107/blockassist-bc-tame_foxy_aardvark_1754821384
|
Obiwank107
| 2025-08-10T13:25:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame foxy aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:25:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame foxy aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sooh098/midmb-kculture-qa
|
sooh098
| 2025-08-10T13:24:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"korean",
"midm",
"국립국어원",
"말평",
"conversational",
"ko",
"base_model:K-intelligence/Midm-2.0-Base-Instruct",
"base_model:finetune:K-intelligence/Midm-2.0-Base-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T13:03:21Z |
---
license: apache-2.0
language:
- ko
tags:
- korean
- midm
- text-generation
- 국립국어원
- 말평
base_model:
- K-intelligence/Midm-2.0-Base-Instruct
pipeline_tag: text-generation
library_name: transformers
---
# 🏛️ Task Overview
This model was developed for the **National Institute of Korean Language "AI Malpyeong (말평) Competition"** task
"[2025] Korean Culture Question Answering".
> **🧑‍💻 Team name: Tamagotchi (다마고치)**
---
# 🔤 Korean Culture QA Model (한국 문화 QA)
This is a Korean culture QA model designed for the task above.
---
## 📌 Model Overview
- **Base model**: `K-intelligence/Midm-2.0-Base-Instruct`
- **Fine-tuning method**: LoRA adapter fine-tuning (PEFT) with 4-bit QLoRA; see the loading sketch below
- **Intended use**: QA tasks grounded in Korean culture (multiple-choice, short-answer, and essay formats)
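A minimal loading sketch, assuming merged weights are hosted in this repo (if only a LoRA adapter is published, load it with PEFT on top of the base model instead):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("sooh098/midmb-kculture-qa")
model = AutoModelForCausalLM.from_pretrained("sooh098/midmb-kculture-qa")

# Illustrative question: "What foods are traditionally eaten on Seollal (Lunar New Year)?"
messages = [{"role": "user", "content": "설날에 먹는 대표적인 음식은 무엇인가요?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tok.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```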
## 📎 References
- Data source: [[2025] Korean Culture Question Answering (Type A)](https://kli.korean.go.kr/benchmark/taskOrdtm/taskList.do?taskOrdtmId=180&clCd=END_TASK&subMenuId=sub01)
|
CAROLINY-DREHER-EROME-VIDEOS/ORIGINAL.CAROLINY.DREHER.EROME.VIDEO.COMPLETO.JA.CIRCULA
|
CAROLINY-DREHER-EROME-VIDEOS
| 2025-08-10T13:20:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T13:19:35Z |
Original video: Caroliny Dreher Erome, full video already circulating.
|
exoplanet/LFM2-700M-8bit
|
exoplanet
| 2025-08-10T13:19:46Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"lfm2",
"liquid",
"edge",
"text-generation",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"base_model:LiquidAI/LFM2-700M",
"base_model:quantized:LiquidAI/LFM2-700M",
"license:other",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-10T13:19:34Z |
---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- mlx
base_model: LiquidAI/LFM2-700M
---
# exoplanet/LFM2-700M-8bit
This model [exoplanet/LFM2-700M-8bit](https://huggingface.co/exoplanet/LFM2-700M-8bit) was
converted to MLX format from [LiquidAI/LFM2-700M](https://huggingface.co/LiquidAI/LFM2-700M)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("exoplanet/LFM2-700M-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ProfCastillo/notConfucius.v2.llama3.1-8b
|
ProfCastillo
| 2025-08-10T13:18:21Z | 0 | 0 | null |
[
"gguf",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T03:10:05Z |
---
license: apache-2.0
base_model: meta-llama/Meta-Llama-3.1-8B
---
# notConfucius
notConfucius.v2 is a fine-tuning experiment: a llama3.1:8b base with a better-designed dataset meant to reflect a particular cognitive persona - wiser, more coherent, less maddening, and still occasionally enlightening. It's less a model and more a vibe.
This is the second, more functional iteration of a "cognitive persona" fine-tuning experiment. The first version was a maddening, character-locked notConfucius. This version attempts to fix that. It doesn't really succeed; three different base models trained on this new FT dataset support that conclusion.

## Technical Details
- **Base Model:** meta-llama/Meta-Llama-3.1-8B
- **Technique:** Parameter-Efficient Fine-Tuning (PEFT) using LoRA
- **Framework:** Trained using `unsloth` for high-speed, memory-efficient training on a single GPU
- **Format:** Q8_0 GGUF quantization, with the LoRA adapter fully merged
- **Dataset:** A large, custom dataset of ~1100 instruction-response pairs designed with a single, highly stylized persona, generated with multiple proprietary and open source LLMs
- **Size:** 8.5GB
- **Context Length:** 128K tokens
**What Changed in V2: From Sledgehammer to Scalpel**
The first version suffered from severe persona overfitting. A large, single-minded dataset of ~1100 examples didn't just teach the model a skill; it performed a personality transplant that left it unable to answer a direct question. It was a funhouse mirror, but not a very useful tool.
V2 was retrained on a smaller, more tactical dataset of ~300 examples with a completely different philosophy:
**Mode Switching, Not Reprogramming:** The dataset is now a balanced diet, not an overdose. It explicitly teaches the model to switch between three modes:
- **Direct Mode (Pragmatist):** For factual questions. It's now trained to just give the damn answer.
- **Advisory Mode (Strategist):** For decisions. It maps out tradeoffs instead of spouting philosophy.
- **Emergent Mode (Provocateur):** For when you're genuinely stuck. This is the only place the old "notConfucius" is allowed out of its cage.
**Pragmatism by Default:** The model's new primary directive is utility, not depth. The metaphors and poetic reframing are now a specialized response, not the only response.
**How to Use This Model (v2)**
You can now ask it factual questions. It should answer them. Mostly.
The model is designed to be a strategic advisor, not a default philosopher.
- For a clear plan, ask it a tactical question.
- For a decision framework, present it with a tradeoff.
- If you're truly stuck, give it an ambiguous problem and see if the old spark is still there.
This version is less of a "funhouse mirror" and more of a "shop tool." It's still got a weird personality, but now it has an off-switch. Sometimes. It's still a vibe more than it is a model.
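Since the release is a Q8_0 GGUF, a minimal local-inference sketch with llama-cpp-python (the filename is hypothetical; check the repo's file list):
```python
from llama_cpp import Llama

# The GGUF filename is hypothetical; substitute the actual file from this repo.
llm = Llama(model_path="notConfucius.v2.Q8_0.gguf", n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I'm stuck on a project. Where do I start?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```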
|
mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF
|
mradermacher
| 2025-08-10T13:15:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:rubricreward/R3-Qwen3-4B-LoRA-14k",
"base_model:quantized:rubricreward/R3-Qwen3-4B-LoRA-14k",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-10T10:43:59Z |
---
base_model: rubricreward/R3-Qwen3-4B-LoRA-14k
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/rubricreward/R3-Qwen3-4B-LoRA-14k
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#R3-Qwen3-4B-LoRA-14k-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
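As a minimal sketch, a single-file quant can be loaded with llama-cpp-python (the filename below is taken from the Provided Quants table):
```python
from llama_cpp import Llama

# Filename matches the i1-Q4_K_M entry in the table below.
llm = Llama(model_path="R3-Qwen3-4B-LoRA-14k.i1-Q4_K_M.gguf", n_ctx=4096)
print(llm("Why is the sky blue?", max_tokens=64)["choices"][0]["text"])
```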
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-4B-LoRA-14k-i1-GGUF/resolve/main/R3-Qwen3-4B-LoRA-14k.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/thinker-mini-v1-GGUF
|
mradermacher
| 2025-08-10T13:15:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"ja",
"base_model:narukijima/thinker-mini-v1",
"base_model:quantized:narukijima/thinker-mini-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-10T10:28:55Z |
---
base_model: narukijima/thinker-mini-v1
language:
- en
- ja
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/narukijima/thinker-mini-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#thinker-mini-v1-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/thinker-mini-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q3_K_S.gguf) | Q3_K_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q2_K.gguf) | Q2_K | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.IQ4_XS.gguf) | IQ4_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q3_K_L.gguf) | Q3_K_L | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q5_K_S.gguf) | Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q5_K_M.gguf) | Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/thinker-mini-v1-GGUF/resolve/main/thinker-mini-v1.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Roy2358/phi4-speech-asr-nl-finetuned
|
Roy2358
| 2025-08-10T13:14:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi4mm",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:microsoft/Phi-4-multimodal-instruct",
"base_model:finetune:microsoft/Phi-4-multimodal-instruct",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-10T09:27:19Z |
---
library_name: transformers
license: mit
base_model: microsoft/Phi-4-multimodal-instruct
tags:
- generated_from_trainer
model-index:
- name: phi4-speech-asr-nl-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi4-speech-asr-nl-finetuned
This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.99) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
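These settings map onto `transformers.TrainingArguments` roughly as follows; this is a hedged sketch, not the authors' training script:
```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phi4-speech-asr-nl-finetuned",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size 32
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```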
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.4
|
ShaunM1/pubmedqa-orpo_lora_model-new-llama3b-better-params
|
ShaunM1
| 2025-08-10T13:07:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T13:07:12Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShaunM1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SimpsonFan/blockassist-bc-darting_toothy_bobcat_1754829635
|
SimpsonFan
| 2025-08-10T13:05:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting toothy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:05:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting toothy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gasoline2255/blockassist-bc-spotted_clawed_chameleon_1754830634
|
gasoline2255
| 2025-08-10T13:01:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted clawed chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T13:01:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted clawed chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754830565
|
afasdfdfadsf
| 2025-08-10T12:58:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough opaque clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:57:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough opaque clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754830503
|
kayacrypto
| 2025-08-10T12:57:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:56:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boahancock/blockassist-bc-iridescent_rapid_toad_1754830523
|
boahancock
| 2025-08-10T12:55:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent rapid toad",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:55:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent rapid toad
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jai-rao/hearing-aid
|
Jai-rao
| 2025-08-10T12:55:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:distilgpt2",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"region:us"
] |
text-generation
| 2025-08-10T12:52:34Z |
---
base_model: distilgpt2
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:distilgpt2
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
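Pending the author's own instructions, here is a minimal sketch assuming this repo hosts a LoRA adapter for `distilgpt2`, as the card metadata indicates:
```python
# Hedged sketch: load the distilgpt2 base and attach this repo's LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
base = AutoModelForCausalLM.from_pretrained("distilgpt2")
model = PeftModel.from_pretrained(base, "Jai-rao/hearing-aid")

inputs = tokenizer("Hearing aids work by", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```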
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
wanjay7/blockassist-bc-bristly_coiled_hedgehog_1754830433
|
wanjay7
| 2025-08-10T12:54:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bristly coiled hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:54:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bristly coiled hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MintLab/FLUX-Krea-BLAZE
|
MintLab
| 2025-08-10T12:53:23Z | 0 | 64 |
diffusers
|
[
"diffusers",
"diffusion-single-file",
"comfyui",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Krea-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-01T14:23:23Z |
---
license: other
language:
- en
base_model:
- black-forest-labs/FLUX.1-Krea-dev
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
- diffusion-single-file
- comfyui
---
FLUX-Krea-Blaze
===============
**32, 64 and 128 Rank LoRAs Now Available**
Use these LoRAs with **FLUX.1.krea.dev** or **FLUX.1.dev** for low-step, beautiful renders.
Check out the LoRA folder (a usage sketch follows below).
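As a hedged sketch (not from this README), here is how the Krea base model can be loaded in `diffusers` with one of these LoRAs attached; the `weight_name` below is a hypothetical placeholder, so check the LoRA folder for the real filenames:
```python
# Hedged sketch; the LoRA filename is hypothetical (see the repo's LoRA folder).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "MintLab/FLUX-Krea-BLAZE",
    weight_name="FLUX-Krea-Blaze-rank32.safetensors",  # hypothetical filename
)
image = pipe(
    "a misty alpine valley at sunrise",
    num_inference_steps=8,  # Blaze targets low-step renders
    guidance_scale=3.5,
).images[0]
image.save("blaze.png")
```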
**FLUX-Krea-Blaze FP8 E5M2**
*A blazing-fast version of FLUX.1-Krea.DEV*
Use this as a replacement for the standard FLUX.1.krea.dev
Recommended Settings:
4 Steps Res2 - Bong Tangent
8 Steps UniPc - sgm_uniform
**EXAMPLES**
"A mysterious shadow figure lingers in an abandoned train station at midnight, lit only by a single flickering neon sign, with the text "FLUX KREA BLAZE" with fog rolling through and puddles reflecting the sign’s pink and blue glow."
FLUX-Krea-Blaze vs. Flux.1-krea.dev comparison:

**Full 4k Render in 200 seconds!**
"A majestic Swiss alpine valley at golden sunrise, snow-capped peaks glowing softly above rolling green meadows filled with wildflowers. A clear blue lake reflects the mountains and sky, low mist hovering over its surface. In the foreground, a pair of Bengal tigers rest calmly in the wildflowers — their orange and black stripes contrasting with the vivid greens and soft yellows of the landscape. One tiger is lying down peacefully near a patch of blue gentians, the other standing alert, gazing toward the mountains. Behind them, a wooden Swiss chalet with flower-filled balconies sits on the hillside, framed by tall pine trees. Shot with a 35mm lens, soft natural light, sharp details, dreamlike realism, calm yet powerful mood — a harmonious blend of wildlife and serene alpine beauty."

Rank 128 | Rank 32
:-------------------------:|:-------------------------:
 | 
Workflow
--------
Drag this image into ComfyUI for a fast **8-step, 1920x1080** workflow.

---
flux.1.dev-non-commercial-license - https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
*The FLUX.1 [dev] Model is licensed by Black Forest Labs Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs Inc. IN NO EVENT SHALL BLACK FOREST LABS INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.*
|
teohyc/ddpm-butterfly-celebrity-abomination-model
|
teohyc
| 2025-08-10T12:53:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-08-09T16:40:08Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class ](https://github.com/huggingface/diffusion-models-class)
testing finetune
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('teohyc/ddpm-butterfly-celebrity-abomination-model')
image = pipeline().images[0]
image
```
|
Todokete/Gemini_v3Nano_2025.06.30.1229
|
Todokete
| 2025-08-10T12:51:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-10T12:33:28Z |
Gemini v3Nano from chrome://on-device-internals/
|
Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP
|
Zhang199
| 2025-08-10T12:51:10Z | 94 | 4 |
transformers
|
[
"transformers",
"safetensors",
"tinyllava",
"text-generation",
"image-text-to-text",
"conversational",
"arxiv:2405.11788",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-07-21T06:59:36Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
---
**<center><span style="font-size:2em;">TinyLLaVA</span></center>**
[](https://arxiv.org/abs/2405.11788)[](https://github.com/TinyLLaVA/TinyLLaVA_Factory)[](http://8843843nmph5.vicp.fun/#/)
Here, we introduce TinyLLaVA-Qwen2-0.5B-SigLIP, which is trained by the [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) codebase. For the LLM and vision tower, we choose [Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively.
### Usage
Execute the following test code:
```python
from tinyllava.eval.run_tiny_llava import eval_model
model_path = 'Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP'
prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"
conv_mode = "qwen2_base" # or llama, gemma, etc
args = type('Args', (), {
"model_path": model_path,
"model": None,
"query": prompt,
"conv_mode": conv_mode,
"image_file": image_file,
"sep": ",",
"temperature": 0,
"top_p": None,
"num_beams": 1,
"max_new_tokens": 512
})()
eval_model(args)
```
### Result
| model_name | vqav2 | gqa | sqa | textvqa | MM-VET | POPE | MME | MMMU |
| :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ | ------ |
| [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | 78.5 | 62.0 | 66.8 | 58.2 | 30.5 | 85.9 | 1510.7 | - |
| [bczhou/TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) (our legacy model) | 79.9 | 62.0 | 69.1 | 59.1 | 32.0 | 86.4 | 1464.9 | - |
| [tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B](https://huggingface.co/tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B) | 78.4 | 61.6 | 64.4 | 53.6 | 26.9 | 86.4 | 1339.0 | 31.7 |
| [tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B](https://huggingface.co/tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B) | 80.1 | 62.1 | 73.0 | 60.3 | 37.5 | 87.2 | 1466.4 | 38.4 |
| [Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP](https://huggingface.co/Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP) | 72.33 | 55.84 | 60.14 | 45.17 | 19.5 | 86.59 | 1153 | 29.7 |
P.S. [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) is an open-source modular codebase for small-scale LMMs, with a focus on simplicity of code implementation, extensibility of new features, and reproducibility of training results. The repository provides standard training & evaluation pipelines, flexible data preprocessing and model configurations, and easily extensible architectures. Users can customize their own LMMs with minimal coding effort and fewer coding mistakes.
TinyLLaVA Factory integrates a suite of cutting-edge models and methods.
- LLM currently supports OpenELM, TinyLlama, StableLM, Qwen, Gemma, Phi, and Qwen2.
- Vision tower currently supports CLIP, SigLIP, Dino, and combination of CLIP and Dino.
- Connector currently supports MLP, Qformer, and Resampler.
|
kkk1888/blockassist-bc-wild_tame_prawn_1754827073
|
kkk1888
| 2025-08-10T12:50:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild tame prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:49:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild tame prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mikewelter20/blockassist-bc-ferocious_foraging_wolf_1754826276
|
mikewelter20
| 2025-08-10T12:48:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ferocious foraging wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:48:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ferocious foraging wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jukofyork/Qwen3-0.6B-YaRN-GGUF
|
jukofyork
| 2025-08-10T12:43:58Z | 0 | 0 | null |
[
"gguf",
"draft",
"speculative-decoding",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-10T11:52:56Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
tags:
- draft
- speculative-decoding
---
These are [YaRN](https://arxiv.org/abs/2309.00071)-extended versions of [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) for use with:
- The new [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507), [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) and [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) models that support up to 256k context length.
- (Potentially) any model via [Universal Assisted Generation](https://huggingface.co/blog/universal_assisted_generation) added to `llama.cpp` in [#12635](https://github.com/ggml-org/llama.cpp/pull/12635).
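For example, a hedged `llama.cpp` invocation using one of these as the draft model; the main-model filename is a placeholder, so substitute your own quant:
```sh
# Speculative decoding: -md supplies the small draft model.
# The main-model filename is a placeholder; substitute your own quant.
./llama.cpp/build/bin/llama-server \
  -m Qwen3-235B-A22B-Instruct-2507-Q4_K_M.gguf \
  -md Qwen3-0.6B-256k-Q4_0.gguf \
  -c 262144
```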
---
I've included the `Q4_0` quants for 4 different context lengths:
- [Qwen3-0.6B-32k-Q4_0.gguf](https://huggingface.co/jukofyork/Qwen3-0.6B-YaRN-GGUF/resolve/main/Qwen3-0.6B-32k-Q4_0.gguf)
- [Qwen3-0.6B-64k-Q4_0.gguf](https://huggingface.co/jukofyork/Qwen3-0.6B-YaRN-GGUF/resolve/main/Qwen3-0.6B-64k-Q4_0.gguf)
- [Qwen3-0.6B-128k-Q4_0.gguf](https://huggingface.co/jukofyork/Qwen3-0.6B-YaRN-GGUF/resolve/main/Qwen3-0.6B-128k-Q4_0.gguf)
- [Qwen3-0.6B-256k-Q4_0.gguf](https://huggingface.co/jukofyork/Qwen3-0.6B-YaRN-GGUF/resolve/main/Qwen3-0.6B-256k-Q4_0.gguf)
**NOTE**: Because `llama.cpp` uses "static-YaRN", the scaling factor remains constant regardless of input length:
- Only use the YaRN-extended versions when **processing long contexts** is required.
- Use the **smallest** YaRN-extension possible.
---
# How these were created
## To extend the context to 64k:
1. Edit the `config.json` file:
```json
"max_position_embeddings": 65536,
...
"rope_scaling": {
"factor": 2.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
},
```
2. Convert and quantize:
```sh
./llama.cpp/convert_hf_to_gguf.py --outtype auto --outfile Qwen3-0.6B-64k-BF16.gguf Qwen3-0.6B
./llama.cpp/build/bin/llama-quantize Qwen3-0.6B-64k-BF16.gguf Qwen3-0.6B-64k-Q4_0.gguf Q4_0 44
```
## To extend the context to 128k:
1. Edit the `config.json` file:
```json
"max_position_embeddings": 131072,
...
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
},
```
2. Convert and quantize:
```sh
./llama.cpp/convert_hf_to_gguf.py --outtype auto --outfile Qwen3-0.6B-128k-BF16.gguf Qwen3-0.6B
./llama.cpp/build/bin/llama-quantize Qwen3-0.6B-128k-BF16.gguf Qwen3-0.6B-128k-Q4_0.gguf Q4_0 44
```
## To extend the context to 256k:
1. Edit the `config.json` file:
```json
"max_position_embeddings": 262144,
...
"rope_scaling": {
"factor": 8.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
},
```
2. Convert and quantize:
```sh
./llama.cpp/convert_hf_to_gguf.py --outtype auto --outfile Qwen3-0.6B-256k-BF16.gguf Qwen3-0.6B
./llama.cpp/build/bin/llama-quantize Qwen3-0.6B-256k-BF16.gguf Qwen3-0.6B-256k-Q4_0.gguf Q4_0 44
```
# How to patch the GGUF files for other context lengths
1. First work out the new scale factor, e.g. for 1M context (`2^20 = 1048576`) we need `1048576/32768 = 32.0`.
2. Copy one of the existing YaRN-extended GGUF files (i.e. **NOT** `Qwen3-0.6B-32k-Q4_0.gguf`!) and patch it using `gguf_set_metadata.py`:
```sh
./llama.cpp/gguf-py/gguf/scripts/gguf_set_metadata.py Qwen3-0.6B-1M-Q4_0.gguf qwen3.context_length 1048576
./llama.cpp/gguf-py/gguf/scripts/gguf_set_metadata.py Qwen3-0.6B-1M-Q4_0.gguf qwen3.rope.scaling.factor 32.0
```
3. Check the patch has worked using `gguf_dump.py`:
```sh
./llama.cpp/gguf-py/gguf/scripts/gguf_dump.py --no-tensors Qwen3-0.6B-1M-Q4_0.gguf
```
```
INFO:gguf-dump:* Loading: Qwen3-0.6B-1M-Q4_0.gguf
* File is LITTLE endian, script is running on a LITTLE endian host.
* Dumping 40 key/value pair(s)
1: UINT32 | 1 | GGUF.version = 3
2: UINT64 | 1 | GGUF.tensor_count = 311
3: UINT64 | 1 | GGUF.kv_count = 37
4: STRING | 1 | general.architecture = 'qwen3'
5: STRING | 1 | general.type = 'model'
6: STRING | 1 | general.name = 'Qwen3 0.6B'
7: STRING | 1 | general.basename = 'Qwen3'
8: STRING | 1 | general.size_label = '0.6B'
9: STRING | 1 | general.license = 'apache-2.0'
10: STRING | 1 | general.license.link = 'https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE'
11: UINT32 | 1 | general.base_model.count = 1
12: STRING | 1 | general.base_model.0.name = 'Qwen3 0.6B Base'
13: STRING | 1 | general.base_model.0.organization = 'Qwen'
14: STRING | 1 | general.base_model.0.repo_url = 'https://huggingface.co/Qwen/Qwen3-0.6B-Base'
15: [STRING] | 1 | general.tags = ['text-generation']
16: UINT32 | 1 | qwen3.block_count = 28
17: UINT32 | 1 | qwen3.context_length = 1048576
18: UINT32 | 1 | qwen3.embedding_length = 1024
19: UINT32 | 1 | qwen3.feed_forward_length = 3072
20: UINT32 | 1 | qwen3.attention.head_count = 16
21: UINT32 | 1 | qwen3.attention.head_count_kv = 8
22: FLOAT32 | 1 | qwen3.rope.freq_base = 1000000.0
23: FLOAT32 | 1 | qwen3.attention.layer_norm_rms_epsilon = 9.999999974752427e-07
24: UINT32 | 1 | qwen3.attention.key_length = 128
25: UINT32 | 1 | qwen3.attention.value_length = 128
26: STRING | 1 | qwen3.rope.scaling.type = 'yarn'
27: FLOAT32 | 1 | qwen3.rope.scaling.factor = 32.0
28: UINT32 | 1 | qwen3.rope.scaling.original_context_length = 32768
29: STRING | 1 | tokenizer.ggml.model = 'gpt2'
30: STRING | 1 | tokenizer.ggml.pre = 'qwen2'
31: [STRING] | 151936 | tokenizer.ggml.tokens = ['!', '"', '#', '$', '%', '&', ...]
32: [INT32] | 151936 | tokenizer.ggml.token_type = [1, 1, 1, 1, 1, 1, ...]
33: [STRING] | 151387 | tokenizer.ggml.merges = ['Ġ Ġ', 'ĠĠ ĠĠ', 'i n', 'Ġ t', 'ĠĠĠĠ ĠĠĠĠ', 'e r', ...]
34: UINT32 | 1 | tokenizer.ggml.eos_token_id = 151645
35: UINT32 | 1 | tokenizer.ggml.padding_token_id = 151643
36: UINT32 | 1 | tokenizer.ggml.bos_token_id = 151643
37: BOOL | 1 | tokenizer.ggml.add_bos_token = False
38: STRING | 1 | tokenizer.chat_template = "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%-..."
39: UINT32 | 1 | general.quantization_version = 2
40: UINT32 | 1 | general.file_type = 2
```
|
jahyungu/gemma-3-1b-it_hendrycks_math
|
jahyungu
| 2025-08-10T12:43:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T12:04:57Z |
---
library_name: transformers
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-3-1b-it_hendrycks_math
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-3-1b-it_hendrycks_math
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
techgrow/blockassist-bc-scurrying_howling_beaver_1754827450
|
techgrow
| 2025-08-10T12:37:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying howling beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:37:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying howling beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RoyArkh/Test1-EleutherAI-gpt-neo-125m_client6_round4
|
RoyArkh
| 2025-08-10T12:36:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T12:35:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
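In the absence of author-provided code, here is a minimal sketch assuming this is a GPT-Neo causal LM checkpoint, per the repo tags:
```python
# Hedged sketch: plain text generation via the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RoyArkh/Test1-EleutherAI-gpt-neo-125m_client6_round4",
)
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```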
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vattri81/my_finetuned_model_qlora
|
Vattri81
| 2025-08-10T12:35:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T12:35:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RoyArkh/Test1-EleutherAI-gpt-neo-125m_client4_round4
|
RoyArkh
| 2025-08-10T12:34:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T12:34:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Prodipta/gpt-oss-20b-multilingual-reasoner
|
Prodipta
| 2025-08-10T12:33:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T12:12:40Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prodipta/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754829140
|
kapalbalap
| 2025-08-10T12:33:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-10T12:33:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|