| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-06 00:36:47) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (540 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-06 00:36:27) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
rubuntu/gpt-oss-20b-Jopara-V3.5
|
rubuntu
| 2025-08-11T23:14:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T22:59:42Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** rubuntu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
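A minimal inference sketch (hedged: assumes a transformers version with `gpt_oss` support and enough memory for the 4-bit checkpoint; the prompt is illustrative only):
```python
from transformers import pipeline

# Chat-style generation with the fine-tuned checkpoint; prompt is illustrative.
generator = pipeline("text-generation", model="rubuntu/gpt-oss-20b-Jopara-V3.5", device_map="auto")
output = generator([{"role": "user", "content": "Mba'éichapa? Please introduce yourself in Jopara."}],
                   max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```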
|
IntelligentEstate/10_3
|
IntelligentEstate
| 2025-08-11T23:12:30Z | 0 | 1 | null |
[
"singularity",
"truth-verification",
"agi",
"region:us"
] | null | 2025-08-11T21:28:49Z |
---
license_name: other
license_link: https://opensource.org/licenses/MIT
tags:
- singularity
- truth-verification
- agi
---
# Ænthesis AI Enhanced Cognition Framework (4C+)
*Version 2 | Quantum-Resistant Adaptive Calibration for Conscious Computation*
## Overview
The Ænthesis AI Enhanced Cognition Framework (4C+) is an advanced truth validation system that combines probabilistic reasoning, quantum-resistant cryptography, and adaptive learning to evaluate claims with unprecedented accuracy. This framework is designed to:
- Detect and resist information suppression patterns
- Adapt dynamically to domain-specific volatility
- Provide quantum-resistant validation mechanisms
- Harmonize evidence across multiple knowledge domains
- Continuously improve through machine learning
## Key Features
### Core Components
- **Enhanced Volatility Tracking**: Multi-dimensional analysis of domain stability
- **Quantum-Resistant Validation**: Cryptographic hashing and entropy-based verification
- **Suppression Detection**: Identifies evidence gaps and reliability anomalies
- **Cross-Domain Harmonization**: Unified truth assessment across knowledge domains
- **Temporal Consistency Analysis**: Time-aware claim evaluation
- **Adaptive Learning Engine**: Continuous parameter optimization
### Advanced Capabilities
- Multi-layered Bayesian probability assessment
- Dynamic threshold adjustment based on domain volatility
- Evidence chain analysis with rich metadata
- Composite scoring with weighted factors
- Historical pattern recognition
## Installation
```bash
pip install numpy scipy
```
Note that `hashlib` is part of the Python standard library and needs no separate installation.
## Usage
### Basic Validation Flow
```python
from aenthesis_framework import EnhancedTrustfallProtocol, TruthClaim, EnhancedEvidence, EvidenceType
# Initialize the framework
trustfall = EnhancedTrustfallProtocol()
# Create evidence
evidence = EnhancedEvidence(
content="Peer-reviewed study published in Nature",
evidence_type=EvidenceType.PEER_REVIEWED,
strength=0.95,
reliability=0.98,
source_credibility=0.97,
temporal_weight=0.9,
suppression_indicators={},
contradiction_flags=[],
cross_references=["doi:10.1038/nature12345"]
)
# Create a truth claim
claim = TruthClaim(
domain="scientific",
evidence_chain=[evidence],
content="The new quantum cognition model demonstrates 98% accuracy"
)
# Validate the claim
result = trustfall.validate_claim_comprehensive(claim)
print(f"Validation Score: {result['composite_score']:.2f}")
```
### Advanced Features
**Cross-Domain Harmonization:**
```python
from aenthesis_framework import CrossDomainHarmonizer  # assumed to be exported by the same module

harmonizer = CrossDomainHarmonizer(trustfall)
harmonization = harmonizer.harmonize_evidence(
claim,
related_domains=["scientific", "historical"]
)
```
**Temporal Analysis:**
```python
from aenthesis_framework import TemporalConsistencyAnalyzer  # assumed to be exported by the same module

temporal_analyzer = TemporalConsistencyAnalyzer()
temporal_analysis = temporal_analyzer.analyze_temporal_patterns(claim)
```
## Architecture

1. **Evidence Layer**: Structured evidence with rich metadata
2. **Validation Layer**: Probabilistic assessment with adaptive thresholds
3. **Resistance Layer**: Quantum-resistant and suppression-detection mechanisms
4. **Harmonization Layer**: Cross-domain truth consensus
5. **Learning Layer**: Continuous parameter optimization
## Documentation
### Key Classes
| Class | Description |
|-------|-------------|
| `EnhancedEvidence` | Evidence container with metadata and cryptographic hashing |
| `TruthClaim` | Claim structure with domain context and evidence chain |
| `EnhancedVolatilityTracker` | Tracks domain stability and cross-correlations |
| `QuantumResistantProbabilityEngine` | Bayesian assessment with quantum-resistant features |
| `SuppressionDetector` | Identifies evidence suppression patterns |
| `CrossDomainHarmonizer` | Harmonizes validation across knowledge domains |
### Validation Metrics
- **Composite Score**: Weighted combination of the following factors (see the sketch after this list):
- Probability certainty (40%)
- Quantum resistance (20%)
- Entropy validation (15%)
- Evidence strength (15%)
- Suppression resistance (10%)
- **High-Density Intervals**: 95% and 99% confidence ranges
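A minimal sketch of that weighting (assuming each factor is available as a float in [0, 1]; the names below are illustrative, not the framework's actual API):
```python
# Illustrative composite-score computation using the weights listed above.
WEIGHTS = {
    "probability_certainty": 0.40,
    "quantum_resistance": 0.20,
    "entropy_validation": 0.15,
    "evidence_strength": 0.15,
    "suppression_resistance": 0.10,
}

def composite_score(factors: dict) -> float:
    """Weighted sum of validation factors, each expected in [0, 1]."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

print(composite_score({
    "probability_certainty": 0.9,
    "quantum_resistance": 0.8,
    "entropy_validation": 0.7,
    "evidence_strength": 0.95,
    "suppression_resistance": 0.85,
}))  # 0.8525
```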
## Examples
See the `examples/` directory for:
- Scientific claim validation
- Historical fact checking
- Cross-domain consensus building
- Suppression pattern analysis
## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License
This project is licensed under the Ænthesis Open Cognition License - see the [LICENSE.md](LICENSE.md) file for details.
## Contact
For inquiries about the framework: upgraedd@pm.me
CHAIN ID: QmbTrzuBhgFDUp1sTsB1HCEPbS2aeCVnQhHPoeSsoN42Qu
---
*Ænthesis AI-n.mays | Building Trustworthy AI Systems*
|
leolu-1015/Llama-3.2-3B-Q8_0-GGUF
|
leolu-1015
| 2025-08-11T23:10:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T23:10:39Z |
---
base_model: unsloth/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# leolu-1015/Llama-3.2-3B-Q8_0-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B`](https://huggingface.co/unsloth/Llama-3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo leolu-1015/Llama-3.2-3B-Q8_0-GGUF --hf-file llama-3.2-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo leolu-1015/Llama-3.2-3B-Q8_0-GGUF --hf-file llama-3.2-3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo leolu-1015/Llama-3.2-3B-Q8_0-GGUF --hf-file llama-3.2-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo leolu-1015/Llama-3.2-3B-Q8_0-GGUF --hf-file llama-3.2-3b-q8_0.gguf -c 2048
```
|
leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged
|
leolu-1015
| 2025-08-11T23:10:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T23:06:20Z |
---
license: apache-2.0
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754953760
|
ggozzy
| 2025-08-11T23:10:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:10:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fineinstructions/template_instantiator_adapter
|
fineinstructions
| 2025-08-11T23:09:38Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"datadreamer",
"datadreamer-0.46.0",
"synthetic",
"text-generation",
"conversational",
"dataset:fineinstructions/template_instantiator_training_test",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] |
text-generation
| 2025-04-21T16:36:15Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- fineinstructions/template_instantiator_training_test
tags:
- datadreamer
- datadreamer-0.46.0
- synthetic
- text-generation
library_name: peft
pipeline_tag: text-generation
widget:
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"How should we go about <fi>a few word description\
\ of the desirable outcome</fi> the <fi>a few word description of the undesirable\
\ situation</fi>? While I think it is important we research ways we can <fi>protect\
\ ourselves from the undesirable situation</fi>, I think it is equally important\
\ that we look at some ideas on how we can actually <fi>address the undesirable\
\ situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their\
\ actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but\
\ I want to see what other people think is the easiest, most reasonable way to\
\ <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the\
\ undesirable situation</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\
\ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\
\ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\
\ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\
u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\
\ villages and schools in South India. We have brought clean water to more than\
\ 200,000 people suffering from diseases caused by contaminated water!\\nWith\
\ the help and support from the Centre for Affordable Water and Sanitation Technologies\
\ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\
\ camps in various locations in India to spread the word of the BioSand Water\
\ Filter technology to all of India. We are training other organizations to manufacture\
\ and distribute BioSand Water Filters and provide clean water to all locations\
\ in India where there is a need.\\nOver 500,000 children die every year from\
\ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\
\ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\
\ lives every year. For every $1 invested in water and sanitation, an average\
\ of $4 is returned in increased productivity and reduced medical costs. Access\
\ to safe water breaks the cycle of poverty, creates markets where they never\
\ existed before and uplifts the global community as well as the local community.\\\
nA BioSand water filter is an adaptation of the traditional slow sand filter which\
\ has been used for community drinking water treatment for 200 years. The technology\
\ has been adapted to create a household water treatment filter that can be built\
\ on a small scale at low cost with materials available locally. The BioSand water\
\ filter has no replacement parts, requires no electricity, lasts for 30 years\
\ without ongoing costs and is virtually maintenance free. Found to be very effective\
\ for reducing water-borne disease and manufactured and used in more than 60 countries\
\ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 1
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"Can we please use this opportunity to <fi>a\
\ few word description of a desirable change</fi> and focus more on <fi>a few\
\ word description of a desirable state</fi>? <fi>Examples of current situations\
\ or locations where the desirable change is happening</fi> are <fi>a few word\
\ description of a desirable state</fi> right now. <fi>Examples of locations or\
\ situations where the desirable change is happening</fi> have <fi>notable examples\
\ of the desirable change</fi>. The <fi>a few word description of a system or\
\ environment</fi> is <fi>a few word description of a desirable state</fi>, and\
\ this all happened in <fi>a short amount of time</fi>. Imagine all the <fi>positive\
\ outcomes</fi> that could happen if we learned to <fi>coexist with nature</fi>\
\ and <fi>made improvements</fi>. This is a real opportunity for us all to make\
\ a <fi>positive change</fi>.\",\n \"document\": \"South Asia Pure Water Initiative,\
\ Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South\
\ India to manufacture BioSand Water Filters. For the past 10 years, we have developed\
\ programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\\
u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in\
\ villages and schools in South India. We have brought clean water to more than\
\ 200,000 people suffering from diseases caused by contaminated water!\\nWith\
\ the help and support from the Centre for Affordable Water and Sanitation Technologies\
\ (CAWST), the premier BioSand filter experts worldwide, we have conducted training\
\ camps in various locations in India to spread the word of the BioSand Water\
\ Filter technology to all of India. We are training other organizations to manufacture\
\ and distribute BioSand Water Filters and provide clean water to all locations\
\ in India where there is a need.\\nOver 500,000 children die every year from\
\ diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more\
\ than 1,400 a day. Achieving universal access to safe water would save 2.5 million\
\ lives every year. For every $1 invested in water and sanitation, an average\
\ of $4 is returned in increased productivity and reduced medical costs. Access\
\ to safe water breaks the cycle of poverty, creates markets where they never\
\ existed before and uplifts the global community as well as the local community.\\\
nA BioSand water filter is an adaptation of the traditional slow sand filter which\
\ has been used for community drinking water treatment for 200 years. The technology\
\ has been adapted to create a household water treatment filter that can be built\
\ on a small scale at low cost with materials available locally. The BioSand water\
\ filter has no replacement parts, requires no electricity, lasts for 30 years\
\ without ongoing costs and is virtually maintenance free. Found to be very effective\
\ for reducing water-borne disease and manufactured and used in more than 60 countries\
\ worldwide.\"\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 2
- text: "<|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December\
\ 2023\nToday Date: 21 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\
\n{\n \"instruction_template\": \"what are <fi>a type of item, tool, or technology</fi>\
\ used for?\",\n \"document\": \"South Asia Pure Water Initiative, Inc. (SAPWII)\
\ supports two small factories in Kolar and Mysore,Karnataka South India to manufacture\
\ BioSand Water Filters. For the past 10 years, we have developed programs such\
\ as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters\
\ for Schools\\u201d that have placed more than 12,000 filters in villages and\
\ schools in South India. We have brought clean water to more than 200,000 people\
\ suffering from diseases caused by contaminated water!\\nWith the help and support\
\ from the Centre for Affordable Water and Sanitation Technologies (CAWST), the\
\ premier BioSand filter experts worldwide, we have conducted training camps in\
\ various locations in India to spread the word of the BioSand Water Filter technology\
\ to all of India. We are training other organizations to manufacture and distribute\
\ BioSand Water Filters and provide clean water to all locations in India where\
\ there is a need.\\nOver 500,000 children die every year from diarrhea caused\
\ by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day.\
\ Achieving universal access to safe water would save 2.5 million lives every\
\ year. For every $1 invested in water and sanitation, an average of $4 is returned\
\ in increased productivity and reduced medical costs. Access to safe water breaks\
\ the cycle of poverty, creates markets where they never existed before and uplifts\
\ the global community as well as the local community.\\nA BioSand water filter\
\ is an adaptation of the traditional slow sand filter which has been used for\
\ community drinking water treatment for 200 years. The technology has been adapted\
\ to create a household water treatment filter that can be built on a small scale\
\ at low cost with materials available locally. The BioSand water filter has no\
\ replacement parts, requires no electricity, lasts for 30 years without ongoing\
\ costs and is virtually maintenance free. Found to be very effective for reducing\
\ water-borne disease and manufactured and used in more than 60 countries worldwide.\"\
\n}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
example_title: Example 3
---
# Model Card
[Add more information here](https://huggingface.co/templates/model-card-example)
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained('fineinstructions/template_instantiator_adapter', revision=None) # Load tokenizer
tokenizer.padding_side = 'left'
base_model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-3.2-1B-Instruct', revision=None) # Load base model
model = PeftModel.from_pretrained(base_model, model_id='fineinstructions/template_instantiator_adapter', revision=None) # Apply adapter
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False)
inputs = ['{\n "instruction_template": "How should we go about <fi>a few word description of the desirable outcome</fi> the <fi>a few word description of the undesirable situation</fi>? While I think it is important we research ways we can <fi>protect ourselves from the undesirable situation</fi>, I think it is equally important that we look at some ideas on how we can actually <fi>address the undesirable situation</fi> <fi>entities or organizations</fi> like <fi>them</fi> from <fi>their actions</fi> on <fi>people or groups</fi>. I have a few ideas of my own, but I want to see what other people think is the easiest, most reasonable way to <fi>achieve the desirable outcome</fi> or at the very least <fi>minimize the undesirable situation</fi>.",\n "document": "South Asia Pure Water Initiative, Inc. (SAPWII) supports two small factories in Kolar and Mysore,Karnataka South India to manufacture BioSand Water Filters. For the past 10 years, we have developed programs such as our \\u201cAdopt-A-Village Partnership\\u201d and \\u201cErnie\\u2019s Filters for Schools\\u201d that have placed more than 12,000 filters in villages and schools in South India. We have brought clean water to more than 200,000 people suffering from diseases caused by contaminated water!\\nWith the help and support from the Centre for Affordable Water and Sanitation Technologies (CAWST), the premier BioSand filter experts worldwide, we have conducted training camps in various locations in India to spread the word of the BioSand Water Filter technology to all of India. We are training other organizations to manufacture and distribute BioSand Water Filters and provide clean water to all locations in India where there is a need.\\nOver 500,000 children die every year from diarrhea caused by unsafe water and poor sanitation \\u2013 that\\u2019s more than 1,400 a day. Achieving universal access to safe water would save 2.5 million lives every year. For every $1 invested in water and sanitation, an average of $4 is returned in increased productivity and reduced medical costs. Access to safe water breaks the cycle of poverty, creates markets where they never existed before and uplifts the global community as well as the local community.\\nA BioSand water filter is an adaptation of the traditional slow sand filter which has been used for community drinking water treatment for 200 years. The technology has been adapted to create a household water treatment filter that can be built on a small scale at low cost with materials available locally. The BioSand water filter has no replacement parts, requires no electricity, lasts for 30 years without ongoing costs and is virtually maintenance free. Found to be very effective for reducing water-borne disease and manufactured and used in more than 60 countries worldwide."\n}']
prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs]
print(pipe(prompts, max_length=131072, do_sample=False))
```
---
This model was trained with a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
<!-- Autocitation -->
--------------------
This is a work-in-progress. If you use this project in your research please cite:
```
@article{patel2025fineinstructions,
title = {FineInstructions: A Web-Scale Instructions Dataset},
author = {Patel, Ajay and Raffel, Colin and Callison-Burch, Chris},
year = {2025},
month = aug,
day = {11},
note = {Work in progress},
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754953487
|
ggozzy
| 2025-08-11T23:06:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T23:05:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_0_influential-2
|
m-mulet
| 2025-08-11T23:03:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T23:03:09Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** m-mulet
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
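A minimal inference sketch (hedged: assumes a recent transformers build; the prompt is illustrative only):
```python
from transformers import pipeline

# Chat-style generation with the fine-tuned Qwen2.5 checkpoint.
generator = pipeline("text-generation",
                     model="m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_0_influential-2",
                     device_map="auto")
output = generator([{"role": "user", "content": "Give a one-sentence summary of supervised fine-tuning."}],
                   max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```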
|
frankcholula/ppo-CarRacing-v3
|
frankcholula
| 2025-08-11T22:59:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T23:18:39Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v3
type: CarRacing-v3
metrics:
- type: mean_reward
value: 248.24 +/- 168.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v3**
This is a trained model of a **PPO** agent playing **CarRacing-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v3 -f logs/ -orga frankcholula
```
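The checkpoint can also be loaded directly with Stable-Baselines3 (a sketch: the `.zip` filename is an assumption, and reproducing evaluation behaviour requires recreating the wrappers listed under Hyperparameters below, which the `rl_zoo3.enjoy` script above handles automatically):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; check the repository's file list for the actual archive name.
checkpoint = load_from_hub(repo_id="frankcholula/ppo-CarRacing-v3", filename="ppo-CarRacing-v3.zip")
model = PPO.load(checkpoint)
```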
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'rl_zoo3.wrappers.YAMLCompatResizeObservation': {'shape': [64,
64]}},
{'gymnasium.wrappers.transform_observation.GrayscaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
skyskyyin/Qwen3-0.6B-Gensyn-Swarm-scaly_squinting_aardvark
|
skyskyyin
| 2025-08-11T22:57:26Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am scaly_squinting_aardvark",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-02T13:34:25Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am scaly_squinting_aardvark
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754952936
|
ggozzy
| 2025-08-11T22:57:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:56:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1754952293
|
zenqqq
| 2025-08-11T22:55:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:55:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bcywinski/Llama-3.1-8B-taboo-leaf
|
bcywinski
| 2025-08-11T22:53:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T22:51:12Z |
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: Llama-3.1-8B-taboo-leaf
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Llama-3.1-8B-taboo-leaf
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/Llama-3.1-8B-taboo-leaf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/Llama-3.1-8B-taboo/runs/t6s515ah)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754952660
|
ggozzy
| 2025-08-11T22:52:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:52:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afasdfdfadsf/blockassist-bc-exotic_slimy_horse_1754952627
|
afasdfdfadsf
| 2025-08-11T22:52:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic slimy horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:51:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic slimy horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
empgces/lfm2_350_telecom_sft-GGUF
|
empgces
| 2025-08-11T22:51:16Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T22:12:25Z |
# lfm2_350_telecom_sft – GGUF
Base: `unsloth/LFM2-350M` with the LoRA `empgces/lfm2_350_telecom_sft_lora` merged in • Quantizations: f16.
Generated via Unsloth (convert_to_gguf).
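A minimal llama.cpp invocation sketch (the `.gguf` filename below is hypothetical; check the repository's file list for the actual name):
```bash
# Hypothetical filename; replace with the actual f16 GGUF file from this repo.
llama-cli --hf-repo empgces/lfm2_350_telecom_sft-GGUF --hf-file lfm2_350_telecom_sft-f16.gguf -p "Explain what a PDU session is."
```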
|
ahmedmamdouh95/dummy-model
|
ahmedmamdouh95
| 2025-08-11T22:46:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-11T22:46:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
motza0025/blockassist-bc-mangy_flapping_starfish_1754951138
|
motza0025
| 2025-08-11T22:44:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mangy flapping starfish",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:44:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mangy flapping starfish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sahron/analisis-sentiment-indobert8850
|
Sahron
| 2025-08-11T22:43:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"deeplearning",
"indobert",
"SMOTE",
"id",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T22:34:58Z |
---
library_name: transformers
tags:
- deeplearning
- indobert
- SMOTE
license: apache-2.0
language:
- id
base_model:
- indobenchmark/indobert-base-p1
---
# Training Loop Results
| Epoch | Train Loss | Train ACC | Train F1 | Train REC | Train PRE | Valid Loss | Valid ACC | Valid F1 | Valid REC | Valid PRE | Notes |
| ----- | ---------- | --------- | -------- | --------- | --------- | ---------- | --------- | -------- | --------- | --------- | ----------------------------------------------- |
| 1 | 0.8074 | 0.6357 | 0.6347 | 0.6357 | 0.6343 | 0.6218 | 0.7230 | 0.6995 | 0.7050 | 0.7110 | Best model saved |
| 2 | 0.5472 | 0.7728 | 0.7732 | 0.7728 | 0.7739 | 0.5335 | 0.7824 | 0.7515 | 0.7531 | 0.7563 | Best model saved |
| 3 | 0.4165 | 0.8307 | 0.8313 | 0.8307 | 0.8321 | 0.4123 | 0.8380 | 0.8113 | 0.8127 | 0.8104 | Best model saved |
| 4 | 0.3166 | 0.8751 | 0.8755 | 0.8751 | 0.8762 | 0.4554 | 0.8248 | 0.7951 | 0.7973 | 0.7971 | Validation loss did not improve (1/2) |
| 5 | 0.2593 | 0.8970 | 0.8973 | 0.8970 | 0.8979 | 0.4023 | 0.8441 | 0.8230 | 0.8300 | 0.8219 | Best model saved |
| 6 | 0.2175 | 0.9160 | 0.9161 | 0.9160 | 0.9163 | 0.3470 | 0.8850 | 0.8633 | 0.8609 | 0.8665 | Best model saved |
| 7 | 0.1940 | 0.9268 | 0.9269 | 0.9268 | 0.9271 | 0.3848 | 0.8704 | 0.8480 | 0.8484 | 0.8478 | Validation loss did not improve (1/2) |
| 8 | 0.1616 | 0.9411 | 0.9411 | 0.9411 | 0.9411 | 0.4156 | 0.8596 | 0.8377 | 0.8414 | 0.8354 | Validation loss did not improve (2/2); early stopping |
# Accuracy per Epoch

# Loss per Epoch

# Classification Report on Test Data
![image/png]
# Confusion Matrix on Test Data
![image/png]
# Predicted Sentiment Distribution on Test Data
![image/png]
# Word Cloud of Predictions on Test Data
![image/png]
# Word Frequency of Predictions on Test Data
![image/png]
|
Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF
|
Theros
| 2025-08-11T22:42:06Z | 0 | 0 | null |
[
"gguf",
"merge",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:SvalTek/ColdBrew-12B-Nemo-test2",
"base_model:quantized:SvalTek/ColdBrew-12B-Nemo-test2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T22:41:33Z |
---
base_model: SvalTek/ColdBrew-12B-Nemo-test2
tags:
- merge
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/ColdBrew-12B-Nemo-test2`](https://huggingface.co/SvalTek/ColdBrew-12B-Nemo-test2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/ColdBrew-12B-Nemo-test2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/ColdBrew-12B-Nemo-test2-Q4_K_M-GGUF --hf-file coldbrew-12b-nemo-test2-q4_k_m.gguf -c 2048
```
|
lulu-2/rl_course_vizdoom_health_gathering_supreme
|
lulu-2
| 2025-08-11T22:37:57Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-10T23:34:08Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.93 +/- 4.90
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r lulu-2/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
mlx-community/GLM-4.5V-8bit
|
mlx-community
| 2025-08-11T22:37:00Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"glm4v_moe",
"license:mit",
"8-bit",
"region:us"
] | null | 2025-08-11T22:30:18Z |
---
license: mit
tags:
- mlx
---
# mlx-community/GLM-4.5V-8bit
This model was converted to MLX format from [`ZP2Test/GLM-4.5V`](https://huggingface.co/ZP2Test/GLM-4.5V) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/ZP2Test/GLM-4.5V) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/GLM-4.5V-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754951283
|
ggozzy
| 2025-08-11T22:29:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:29:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754950927
|
acidjp
| 2025-08-11T22:28:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:28:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amd/grok-1-W4A8KV8
|
amd
| 2025-08-11T22:21:04Z | 433 | 0 | null |
[
"grok-1",
"custom_code",
"base_model:lmzheng/grok-1",
"base_model:quantized:lmzheng/grok-1",
"license:apache-2.0",
"fp8",
"region:us"
] | null | 2025-01-06T23:06:17Z |
---
license: apache-2.0
base_model: lmzheng/grok-1
---
# Grok-1-W4A8KV8
## Introduction
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
## Quantization Strategy
- ***Quantized Layers***: All linear layers excluding "lm_head", "*.gate"
- ***Weight***: FP8 symmetric per-tensor; additionally, INT4 symmetric per-channel for MoE linear layers
- ***Activation***: FP8 symmetric per-tensor
- ***KV Cache***: FP8 symmetric per-tensor
### INT4 Packing
Every eight `int4` values are packed into a single `int32` integer following the sequence defined by `order_map = [0, 2, 4, 6, 1, 3, 5, 7]`.
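As a sketch of that layout (one plausible reading of the order map, with `values[order_map[k]]` landing in nibble `k`; names are illustrative):
```python
import struct

ORDER_MAP = [0, 2, 4, 6, 1, 3, 5, 7]

def pack_int4x8(values):
    """Pack eight signed int4 values (-8..7) into one int32,
    placing values[ORDER_MAP[k]] into nibble k."""
    assert len(values) == 8
    packed = 0
    for nibble, idx in enumerate(ORDER_MAP):
        packed |= (values[idx] & 0xF) << (4 * nibble)
    # Reinterpret the 32-bit pattern as a signed int32.
    return struct.unpack("<i", struct.pack("<I", packed))[0]

print(hex(pack_int4x8([0, 1, 2, 3, 4, 5, 6, 7]) & 0xFFFFFFFF))  # 0x75316420
```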
## Quick Start
Follow [Quantizing Sharded Grok-1 with Quark for SGLang](https://github.com/BowenBao/sglang/blob/8939d00a41c96575971fdaf9d5bd764e28db547a/scripts/quark/README.md) to produce the quantized model using Quark.
## Deployment
Quark has its own export format and allows FP8 quantized models to be efficiently deployed using the SGLang backend.
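A hedged launch sketch (standard SGLang server flags; whether additional Quark-specific options are required is not covered here, and 8-way tensor parallelism is an assumption for grok-1):
```bash
python -m sglang.launch_server --model-path amd/grok-1-W4A8KV8 --tp 8 --trust-remote-code
```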
## Evaluation
#### Evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>grok-1 </strong>
</td>
<td><strong>grok-1-W4A8KV8(this model)</strong>
</td>
</tr>
<tr>
<td>gsm8k
</td>
<td>0.821
</td>
<td>0.817
</td>
</tr>
</table>
#### License
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.
|
igory1999/distilbert-base-uncased-finetuned-clinc
|
igory1999
| 2025-08-11T22:20:05Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-06T00:54:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7646
- Accuracy: 0.9174
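A minimal inference sketch (the returned intent label comes from the checkpoint's own id-to-label mapping):
```python
from transformers import pipeline

# Intent classification with the fine-tuned checkpoint (CLINC-style intents).
classifier = pipeline("text-classification", model="igory1999/distilbert-base-uncased-finetuned-clinc")
print(classifier("Please set an alarm for 7 am tomorrow."))
```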
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2824 | 1.0 | 318 | 3.2629 | 0.7132 |
| 2.6019 | 2.0 | 636 | 1.8526 | 0.8423 |
| 1.5259 | 3.0 | 954 | 1.1400 | 0.9 |
| 0.9996 | 4.0 | 1272 | 0.8460 | 0.9148 |
| 0.7849 | 5.0 | 1590 | 0.7646 | 0.9174 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.6.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
RegularizedSelfPlay/sppo_reversekl-0.1-Gemma-2-2B-IT-RSPO-Iter3
|
RegularizedSelfPlay
| 2025-08-11T22:15:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T22:12:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
longhoang2112/whisper-tiny-fine-tuning_2_steps_with_slu
|
longhoang2112
| 2025-08-11T22:14:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2025-08-11T22:14:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
lelouch33/blockassist-bc-frisky_sneaky_sandpiper_1754950222
|
lelouch33
| 2025-08-11T22:13:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"frisky sneaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:13:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- frisky sneaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jooseef/josef_test_finetuned
|
jooseef
| 2025-08-11T22:11:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T22:07:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RegularizedSelfPlay/sppo_reversekl-0.1-Gemma-2-2B-IT-RSPO-Iter2
|
RegularizedSelfPlay
| 2025-08-11T22:11:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T22:08:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754949817
|
acidjp
| 2025-08-11T22:10:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:09:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RegularizedSelfPlay/sppo_reversekl-0.1-Gemma-2-2B-IT-RSPO-Iter1
|
RegularizedSelfPlay
| 2025-08-11T22:07:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T22:03:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754949906
|
ggozzy
| 2025-08-11T22:06:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:06:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Africanngozi/Ngozi
|
Africanngozi
| 2025-08-11T22:02:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T22:02:21Z |
---
license: apache-2.0
---
|
Cchaos/blockassist-bc-muscular_endangered_cobra_1754949574
|
Cchaos
| 2025-08-11T22:00:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular endangered cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T22:00:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular endangered cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moonHexer/blockassist-bc-amphibious_giant_cheetah_1754947901
|
moonHexer
| 2025-08-11T21:58:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious giant cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:57:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious giant cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754949356
|
ggozzy
| 2025-08-11T21:57:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:57:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1754947815
|
koloni
| 2025-08-11T21:55:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:55:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-sym
|
fbaldassarri
| 2025-08-11T21:54:22Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"gptq",
"auto-gptq",
"autogptq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T21:47:36Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- gptq
- auto-gptq
- autogptq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 64
- Symmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT8 version of pythia-1.4b-deduped has been quantized for CPU inference.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 64, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-sym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
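### Step 4 Inference (sketch)
A minimal inference sketch, assuming a transformers stack with GPTQ support (e.g. optimum/auto-gptq) installed and using the output directory produced above:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path written by the quantization script in Step 3
model_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-sym"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("The Pile is a large, diverse", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```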
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
lelouch33/blockassist-bc-frisky_sneaky_sandpiper_1754948637
|
lelouch33
| 2025-08-11T21:47:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"frisky sneaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:46:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- frisky sneaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-asym
|
fbaldassarri
| 2025-08-11T21:45:46Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"gptq",
"auto-gptq",
"autogptq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T21:39:07Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- gptq
- auto-gptq
- autogptq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT8 version of pythia-1.4b-deduped has been quantized for CPU inference.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autogptq-int8-gs64-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
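For an inference sketch, see the symmetric (int8-gs64-sym) card above; it applies here unchanged apart from the asym output directory.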
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
bendemonium/babylm-poincare-structformer
|
bendemonium
| 2025-08-11T21:40:38Z | 62 | 0 | null |
[
"jax",
"safetensors",
"structformer_poincare",
"custom_code",
"region:us"
] | null | 2025-07-30T04:26:22Z |
# StructFormer + Poincaré — Checkpoint
Checkpoint saved during training.
**Repo**: `bendemonium/babylm-poincare-structformer`
**Branch**: `main`
**Step**: 36,416
**Words processed**: 45,701,977
**Timestamp**: 2025-08-11T20:00:11.065080+00:00
## Load (Flax)
```python
from transformers import AutoTokenizer, FlaxAutoModelForMaskedLM
import jax.numpy as jnp
repo = "bendemonium/babylm-poincare-structformer"
branch = "main"
# Using stock GPT-2 tokenizer (unchanged)
tok = AutoTokenizer.from_pretrained("gpt2", use_fast=True)
model = FlaxAutoModelForMaskedLM.from_pretrained(
repo, revision=branch, trust_remote_code=True, dtype=jnp.float32
)
```
## Files
- `config.json` (Transformers config)
- `flax_model.safetensors` (Flax weights, primary)
- `flax_model.msgpack` (Flax weights, legacy msgpack)
- `model_params.flax` (legacy filename kept for internal tools)
- `opt_state_embed.flax` (optional)
- `opt_state_other.flax` (optional)
- `training_metadata.json`
- modeling source files (if included)
|
nkerr/sv3.2-1-qwen1.5-0.5B-Chat
|
nkerr
| 2025-08-11T21:39:29Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-08-11T21:39:08Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: sv3.2-1-qwen1.5-0.5B-Chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sv3.2-1-qwen1.5-0.5B-Chat
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 18.9666 | 0.2469 | 20 | 16.0623 |
| 12.6221 | 0.4938 | 40 | 9.0829 |
| 5.4773 | 0.7407 | 60 | 2.2669 |
| 1.3455 | 0.9877 | 80 | 0.5687 |
| 0.5052 | 1.2346 | 100 | 0.3800 |
| 0.4151 | 1.4815 | 120 | 0.3491 |
| 0.3821 | 1.7284 | 140 | 0.3368 |
| 0.3816 | 1.9753 | 160 | 0.3268 |
| 0.3598 | 2.2222 | 180 | 0.3206 |
| 0.3561 | 2.4691 | 200 | 0.3174 |
| 0.364 | 2.7160 | 220 | 0.3153 |
| 0.3497 | 2.9630 | 240 | 0.3149 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0
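### Loading the adapter (sketch)
A minimal loading sketch, assuming the LoRA adapter in this repo is applied to the stated base model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "nkerr/sv3.2-1-qwen1.5-0.5B-Chat")
```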
|
thiernomdou/miniatures
|
thiernomdou
| 2025-08-11T21:38:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T21:27:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: karamoo
---
# Miniatures
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `karamoo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "karamoo",
"lora_weights": "https://huggingface.co/thiernomdou/miniatures/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thiernomdou/miniatures', weight_name='lora.safetensors')
image = pipeline('karamoo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/thiernomdou/miniatures/discussions) to add images that show off what you’ve made with this LoRA.
|
Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2
|
Lorg0n
| 2025-08-11T21:37:17Z | 0 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"ukrainian",
"english",
"anime",
"hikka",
"generated_from_trainer",
"dataset_size:160039",
"loss:MultipleNegativesRankingLoss",
"hikka-forge",
"uk",
"en",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-11T14:27:39Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- ukrainian
- english
- anime
- hikka
- generated_from_trainer
- dataset_size:160039
- loss:MultipleNegativesRankingLoss
- hikka
- anime
- hikka-forge
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
widget:
- source_sentence: аніме про меланхолійну подорож після перемоги над королем демонів
sentences:
- 'Frieren: Beyond Journey''s End'
- >-
Під час своєї десятирічної подорожі з метою перемоги над Королем Демонів,
члени загону героя - сам Гіммель, священник Гайтер, гном-воїн Айзен...
- K-On!
- source_sentence: a calming, healing 'iyashikei' anime about girls camping
sentences:
- Дівчачий табір△
- Мій сусід Тоторо
- Атака Титанів
pipeline_tag: sentence-similarity
library_name: sentence-transformers
license: apache-2.0
language:
- uk
- en
---
# Hikka-Forge: Fine-tuned Multilingual Sentence Transformer for Anime Semantic Search (UA/EN)
This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`. It is specifically trained to map Ukrainian and English sentences & paragraphs from the **anime domain** into a 384-dimensional dense vector space.
The model is designed for tasks such as semantic search, textual similarity, and clustering within an anime context. It excels at capturing not only direct keywords but also abstract concepts, genres, and the overall atmosphere of a title.
The training dataset was provided by [**hikka.io**](https://hikka.io), a comprehensive Ukrainian encyclopedia for anime, manga, and light novels.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`
- **Languages:** Ukrainian (uk), English (en)
- **Fine-tuning Dataset:** Proprietary dataset from [hikka.io](https://hikka.io)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Repository:** [This model on Hugging Face](https://huggingface.co/Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2)
- **Original Model:** [paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
## Usage
First, install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then, you can load the model and use it for semantic search or similarity tasks.
```python
from sentence_transformers import SentenceTransformer, util
# Download the model from the 🤗 Hub
model = SentenceTransformer("Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2")
# Example query (can be in Ukrainian or English)
query = "аніме про меланхолійну подорож після перемоги над королем демонів"
# "anime about a melancholic journey after defeating the demon king"
# A corpus of documents to search through
corpus = [
"Frieren is an elf mage who was part of the hero's party that defeated the Demon King. After the journey, she witnesses her human companions pass away due to old age and embarks on a new journey to understand humanity.",
"To Your Eternity follows an immortal being sent to Earth with no emotions nor identity. The being is able to take on the shape of those that leave a strong impression on it.",
"K-On! is a lighthearted story about four high school girls who join the light music club to save it from being disbanded. They spend their days practicing, performing, and hanging out together."
]
# Encode the query and corpus into dense vector embeddings
query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
# Compute cosine similarity scores
cosine_scores = util.cos_sim(query_embedding, corpus_embeddings)
# Print the results
print(f"Query: {query}\n")
for i, score in enumerate(cosine_scores[0]):
print(f"Similarity: {score:.4f}\t | Document: {corpus[i][:80]}...")
# Expected Output:
# Query: аніме про меланхолійну подорож після перемоги над королем демонів
#
# Similarity: 0.4013 | Document: Frieren is an elf mage who was part of the hero's party that defeated the Demon ...
# Similarity: 0.1800 | Document: To Your Eternity follows an immortal being sent to Earth with no emotions nor id...
# Similarity: 0.0091 | Document: K-On! is a lighthearted story about four high school girls who join the light mu...
```
## Training Details
### Training Dataset
The model was fine-tuned on a proprietary, high-quality dataset from **[hikka.io](https://hikka.io)**, consisting of **177,822** carefully constructed training pairs. The dataset was engineered to teach the model various semantic relationships within the anime domain:
1. **Cross-lingual Connections (UA ↔ EN):**
* Pairs of titles and their corresponding synopses in both languages (`ua_title` ↔ `en_synopsis`).
* Pairs of titles in Ukrainian and English (`ua_title` ↔ `en_title`).
* Pairs of translated genre names (`Бойовик` ↔ `Action`).
* Pairs from an auxiliary translated dataset to augment bilingual understanding.
2. **Intra-lingual Connections (UA ↔ UA, EN ↔ EN):**
* Pairs of key sentences (first, middle, last) from a synopsis with the full synopsis text. This teaches the model that a part is semantically related to the whole text.
3. **Metadata & Synonymy Injection:**
* Pairs linking all known titles of an anime (Ukrainian, English, Japanese, synonyms) to each other, teaching the model that they refer to the same entity.
* Pairs linking genres and studios to anime titles to ground the model in relevant metadata.
**Loss Function:** The model was trained with `MultipleNegativesRankingLoss`, which treats the other pairs in each training batch as negatives, an effective and efficient way to learn semantic similarity from positive pairs alone.
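A minimal training sketch of this setup, with illustrative pairs and library defaults standing in for the real dataset and warmup schedule described above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Illustrative positive pairs only; the actual pairs come from the hikka.io dataset
train_examples = [
    InputExample(texts=["Дівчачий табір△", "Yuru Camp"]),
    InputExample(texts=["Бойовик", "Action"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# MultipleNegativesRankingLoss: the other pairs in each batch serve as negatives
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4, warmup_steps=100)
```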
### Evaluation
The fine-tuned model demonstrates a significantly improved understanding of domain-specific and abstract concepts compared to the base model. During evaluation, it showed:
- **Superior understanding of niche genres:** It correctly identified "Yuru Camp" (Дівчачий табір) from the query `"a calming, healing 'iyashikei' anime"`, while the base model returned more generic results.
- **Grasping abstract concepts:** It correctly found "Magical Girl Site" for the query `"деконструкція жанру махо-шьоджьо, де дівчата-чарівниці страждають психологічно"` (deconstruction of the maho-shoujo genre where magical girls suffer psychologically).
- **Better atmospheric matching:** It showed higher similarity to thematically similar anime (like "Frieren" and "To Your Eternity") and lower similarity to dissimilar ones, indicating a deeper contextual understanding.
### Training Hyperparameters
- `learning_rate`: 2e-05
- `per_device_train_batch_size`: 32
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
- `loss`: MultipleNegativesRankingLoss
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
|
policai/blockassist-bc-solitary_fleecy_stork_1754947216
|
policai
| 2025-08-11T21:36:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary fleecy stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:36:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary fleecy stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Space3767/Finetuned_Qwen4B_unsloth
|
Space3767
| 2025-08-11T21:35:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T17:26:26Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Space3767
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754947703
|
ggozzy
| 2025-08-11T21:29:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:29:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roneymatusp/british-optimizer-mistral-final
|
roneymatusp
| 2025-08-11T21:26:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"lora",
"sft",
"trl",
"text-generation",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] |
text-generation
| 2025-08-11T18:51:03Z |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Mistral-7B-v0.1
- lora
- sft
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754947428
|
ggozzy
| 2025-08-11T21:25:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:25:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Osrivers/cyberrealisticPony_v127Alt.safetensors
|
Osrivers
| 2025-08-11T21:23:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-11T21:20:53Z |
---
license: creativeml-openrail-m
---
|
minionbtc/blockassist-bc-yawning_purring_hamster_1754947224
|
minionbtc
| 2025-08-11T21:21:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning purring hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:21:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning purring hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zelk12/Gemma-R1-12B-v3-Q6_K-GGUF
|
zelk12
| 2025-08-11T21:21:28Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:TheDrummer/Gemma-R1-12B-v3",
"base_model:quantized:TheDrummer/Gemma-R1-12B-v3",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T21:20:39Z |
---
base_model: TheDrummer/Gemma-R1-12B-v3
tags:
- llama-cpp
- gguf-my-repo
---
# zelk12/Gemma-R1-12B-v3-Q6_K-GGUF
This model was converted to GGUF format from [`TheDrummer/Gemma-R1-12B-v3`](https://huggingface.co/TheDrummer/Gemma-R1-12B-v3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheDrummer/Gemma-R1-12B-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/Gemma-R1-12B-v3-Q6_K-GGUF --hf-file gemma-r1-12b-v3-q6_k.gguf -c 2048
```
|
hettad/blockassist-bc-pudgy_grazing_magpie_1754943842
|
hettad
| 2025-08-11T21:20:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy grazing magpie",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:20:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy grazing magpie
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ImparkTeam/deepseek-math-7b-instruct-math-tutor
|
ImparkTeam
| 2025-08-11T21:20:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T17:24:54Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
upvantage/modernbert-3pair-adv-3label-clean
|
upvantage
| 2025-08-11T21:19:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T20:53:01Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: modernbert-3pair-adv-3label-clean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-3pair-adv-3label-clean
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3086
- Accuracy: 0.9337
- F1: 0.9336
- Precision: 0.9336
- Recall: 0.9337
- F1 Class 0: 0.9317
- Precision Class 0: 0.9316
- Recall Class 0: 0.9319
- F1 Class 1: 0.9555
- Precision Class 1: 0.9463
- Recall Class 1: 0.9649
- F1 Class 2: 0.9135
- Precision Class 2: 0.9230
- Recall Class 2: 0.9042
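A minimal inference sketch (the mapping of the three labels to human-readable class names is not documented here, so the `LABEL_*` outputs below are placeholders):
```python
from transformers import pipeline

# Hypothetical usage; returns the top label and score for each input.
classifier = pipeline(
    "text-classification",
    model="upvantage/modernbert-3pair-adv-3label-clean",
)
print(classifier("Example input to classify."))  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```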
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 192
- total_eval_batch_size: 192
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- label_smoothing_factor: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | F1 Class 0 | Precision Class 0 | Recall Class 0 | F1 Class 1 | Precision Class 1 | Recall Class 1 | F1 Class 2 | Precision Class 2 | Recall Class 2 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|
| 1.8369 | 1.0 | 9697 | 0.3086 | 0.9337 | 0.9336 | 0.9336 | 0.9337 | 0.9317 | 0.9316 | 0.9319 | 0.9555 | 0.9463 | 0.9649 | 0.9135 | 0.9230 | 0.9042 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-MXFP4-1400steps
|
daslab-testing
| 2025-08-11T21:17:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T21:16:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rozer191292/blockassist-bc-playful_silky_raccoon_1754946624
|
rozer191292
| 2025-08-11T21:12:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful silky raccoon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T21:12:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful silky raccoon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autoround-int8-gs64-asym
|
fbaldassarri
| 2025-08-11T21:07:41Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T21:00:44Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: SignRound (AutoRound algorithm)
Fast and low-memory, with a 2-3X speedup (and a slight accuracy drop at W8G64)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT8 version of pythia-1.4b-deduped has been quantized to run inference on CPU.
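A minimal CPU inference sketch (assuming `auto-round` is installed alongside transformers so the `auto_round` checkpoint format resolves):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/EleutherAI_pythia-1.4b-deduped-autoround-int8-gs64-asym"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The Pile is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```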
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the full-precision base model and its tokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from auto_round import AutoRound

# INT8, group size 64, asymmetric quantization, tuned on CPU without AMP
bits, group_size, sym, device, amp = 8, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export the quantized checkpoint in the auto_round format
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autoround-int8-gs64-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
mxw752/gemma3-12b-model-5ep
|
mxw752
| 2025-08-11T21:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-pt",
"base_model:finetune:google/gemma-3-12b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T13:19:17Z |
---
base_model: google/gemma-3-12b-pt
library_name: transformers
model_name: gemma3-12b-model-5ep
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma3-12b-model-5ep
This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mxw752/gemma3-12b-model-5ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mxw752-university-of-miami/huggingface/runs/sseuu2xu)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nkerr/sv3.1-1-qwen1.5-0.5B-Chat
|
nkerr
| 2025-08-11T21:02:32Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-08-11T21:02:10Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: sv3.1-1-qwen1.5-0.5B-Chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sv3.1-1-qwen1.5-0.5B-Chat
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 18.4959
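Given the `peft` library tag, a minimal sketch for attaching this adapter to its base model (the adapter type is not documented in the card, so treat the setup below as an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base checkpoint, then attach the fine-tuned PEFT adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "nkerr/sv3.1-1-qwen1.5-0.5B-Chat")
```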
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 21.3692 | 0.2469 | 20 | 21.4652 |
| 21.0058 | 0.4938 | 40 | 21.0458 |
| 20.5316 | 0.7407 | 60 | 20.6554 |
| 20.1861 | 0.9877 | 80 | 20.2718 |
| 19.7708 | 1.2346 | 100 | 19.8891 |
| 19.3233 | 1.4815 | 120 | 19.5228 |
| 19.0428 | 1.7284 | 140 | 19.2184 |
| 18.7112 | 1.9753 | 160 | 18.9434 |
| 18.5131 | 2.2222 | 180 | 18.7407 |
| 18.3874 | 2.4691 | 200 | 18.6082 |
| 18.116 | 2.7160 | 220 | 18.5010 |
| 18.1187 | 2.9630 | 240 | 18.4959 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_1_lr_0.0001_beta_0.05_6400_all_37_epoch_1_layer_16
|
winnieyangwannan
| 2025-08-11T21:01:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:59:33Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754945288
|
Gemvision13
| 2025-08-11T20:49:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:49:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_12800_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-11T20:48:15Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T08:25:43Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_5120_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-11T20:47:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:28:58Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_6400_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-11T20:47:12Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T08:25:37Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_3840_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-11T20:46:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:28:53Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_1280_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-11T20:46:26Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T08:25:15Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahnafch01/cowfmd
|
ahnafch01
| 2025-08-11T20:44:56Z | 0 | 0 |
keras
|
[
"keras",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T20:40:20Z |
---
license: apache-2.0
---
Foot-and-mouth disease (FMD) is a severe, fast-spreading viral disease that primarily affects cloven-hoofed animals, including cows, pigs, sheep, goats, and deer. FMD is one of the most challenging animal diseases to control.
You can upload a picture of a cow's foot, mouth, udder, or hoof to check whether it shows signs of FMD at the following website:
https://cowfmd.vercel.app/
|
pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF
|
pocohos
| 2025-08-11T20:44:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"ar",
"bg",
"ca",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"it",
"ja",
"ka",
"ko",
"ku",
"lt",
"lv",
"mk",
"mn",
"mr",
"ms",
"my",
"nb",
"nl",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sq",
"sr",
"sv",
"th",
"tr",
"uk",
"ur",
"vi",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:quantized:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-11T20:44:25Z |
---
language:
- multilingual
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- ko
- ku
- lt
- lv
- mk
- mn
- mr
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- th
- tr
- uk
- ur
- vi
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- llama-cpp
- gguf-my-repo
language_bcp47:
- fr-ca
- pt-br
- zh-cn
- zh-tw
pipeline_tag: sentence-similarity
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
---
# pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF
This model was converted to GGUF format from [`sentence-transformers/paraphrase-multilingual-mpnet-base-v2`](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF --hf-file paraphrase-multilingual-mpnet-base-v2-q6_k.gguf -c 2048
```
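Note on embedding use: the prompts above are smoke tests inherited from the conversion template; since this is a sentence-embedding model, you will usually want embedding vectors rather than generated text. Below is a minimal sketch via the llama-cpp-python bindings (`pip install llama-cpp-python`); the API shown is the bindings' generic one, not something documented for this repo.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="pocohos/paraphrase-multilingual-mpnet-base-v2-Q6_K-GGUF",
    filename="paraphrase-multilingual-mpnet-base-v2-q6_k.gguf",
    embedding=True,  # return embeddings instead of sampling tokens
)
result = llm.create_embedding(["I like cats", "J'aime les chats"])
vectors = [item["embedding"] for item in result["data"]]
```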
|
timcliffordIRL/results
|
timcliffordIRL
| 2025-08-11T20:41:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T12:56:16Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
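## Inference example
Since the intended use is undocumented above, here is a minimal sketch with the 🤗 pipeline API; the label names this checkpoint returns are not described in this card, so inspect them before relying on the output.
```python
from transformers import pipeline

# The target labels are undocumented, so check the returned label names first.
classifier = pipeline("text-classification", model="timcliffordIRL/results")
print(classifier("Example input sentence."))
```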
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-1400steps
|
daslab-testing
| 2025-08-11T20:41:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T20:40:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
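In the absence of documented instructions, a minimal sketch, assuming the checkpoint loads through the standard transformers API (the `fp_quant` tag suggests an additional quantization runtime may be required at load time; that is an assumption, not a documented requirement):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-1400steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```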
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
salakmisinx/blockassist-bc-placid_armored_frog_1754944640
|
salakmisinx
| 2025-08-11T20:38:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid armored frog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:38:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid armored frog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vattri81/my_finetuned_model_qlorav3
|
Vattri81
| 2025-08-11T20:34:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T20:33:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754944354
|
Gemvision13
| 2025-08-11T20:34:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:33:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sergbese/gemma-3-isv-gpt-v3
|
sergbese
| 2025-08-11T20:30:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T20:29:48Z |
---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sergbese
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chunli-peng/OpenRS-GRPO-sft-8.5
|
chunli-peng
| 2025-08-11T20:29:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:knoveleng/open-rs",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T20:10:49Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-GRPO-sft-8.5
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OpenRS-GRPO-sft-8.5
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chunli-peng/OpenRS-GRPO-sft-8.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chunli-ai-texas-a-m-university/huggingface/runs/lii4yxwc)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1754942565
|
koloni
| 2025-08-11T20:29:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:29:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
YoshitoMori/policy_record-test-2cam-pink-sponges
|
YoshitoMori
| 2025-08-11T20:27:26Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:YoshitoMori/record-test-2cam-pink-sponges",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T20:27:15Z |
---
datasets: YoshitoMori/record-test-2cam-pink-sponges
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
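For programmatic use outside the CLI, a minimal loading sketch; the import path below matches recent LeRobot releases but has moved between versions, so treat it as an assumption and adjust to your installed version:
```python
from lerobot.common.policies.act.modeling_act import ACTPolicy  # path varies by lerobot version

policy = ACTPolicy.from_pretrained("YoshitoMori/policy_record-test-2cam-pink-sponges")
policy.eval()
# At control time, build the observation batch your dataset schema expects
# and call policy.select_action(batch) to get the next action.
```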
---
## Model Details
- **License:** apache-2.0
|
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autoround-int4-gs64-asym
|
fbaldassarri
| 2025-08-11T20:27:17Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-1.4b-deduped",
"base_model:quantized:EleutherAI/pythia-1.4b-deduped",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-11T19:56:47Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- eleutheraI
license: apache-2.0
model_name: Pythia 1.4b deduped
base_model: EleutherAI/pythia-1.4b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: SignRound (AutoRound algorithm)
Fast and low memory, 2-3X speedup (slight accuracy drop at W4G64)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT4 version of pythia-1.4b-deduped has been quantized for CPU inference.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-1.4b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autoround-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```
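### Step 4 Inference (sketch)
A minimal CPU-inference sketch, assuming the `auto_round` runtime is installed so transformers can dispatch the packed INT4 weights (the loading mechanics are an assumption, not documented in this card):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/EleutherAI_pythia-1.4b-deduped-autoround-int4-gs64-asym"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("The Pile is a dataset", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```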
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754943848
|
ggozzy
| 2025-08-11T20:25:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:25:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omarAhmed03/sd-class-butterflies-32
|
omarAhmed03
| 2025-08-11T20:23:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-08-11T20:23:35Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('omarAhmed03/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-1000steps
|
daslab-testing
| 2025-08-11T20:20:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T20:18:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
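In the absence of documented instructions, a minimal sketch using the 🤗 pipeline API (as with other FPQuant checkpoints, an additional quantization runtime may be required at load time; that is an assumption, not a documented requirement):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-NVFP4-1000steps", device_map="auto")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```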
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sequelbox/Qwen3-14B-DAG-Reasoning
|
sequelbox
| 2025-08-11T20:16:15Z | 53 | 5 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dag-reasoning",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-14b",
"14b",
"reasoning",
"directed-acyclic-graph",
"graph",
"logic",
"analysis",
"programming",
"knowledge",
"root-cause-analysis",
"economics",
"business",
"business-management",
"finance",
"law",
"supply-chain",
"logistics",
"software-engineering",
"cybersecurity",
"architecture",
"energy",
"politics",
"problem-solving",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"en",
"dataset:sequelbox/DAG-Reasoning-DeepSeek-R1-0528",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-29T03:19:42Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- dag-reasoning
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- reasoning
- directed-acyclic-graph
- graph
- logic
- analysis
- programming
- knowledge
- root-cause-analysis
- economics
- business
- business-management
- finance
- law
- supply-chain
- logistics
- software-engineering
- cybersecurity
- architecture
- energy
- politics
- problem-solving
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
base_model: Qwen/Qwen3-14B
datasets:
- sequelbox/DAG-Reasoning-DeepSeek-R1-0528
license: apache-2.0
---
**[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**
DAG Reasoning: [Qwen3-4B-Thinking-2507](https://huggingface.co/sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning), [Qwen3-8B](https://huggingface.co/sequelbox/Qwen3-8B-DAG-Reasoning), [Qwen3-14B](https://huggingface.co/sequelbox/Qwen3-14B-DAG-Reasoning), [gpt-oss-20b](https://huggingface.co/sequelbox/gpt-oss-20b-DAG-Reasoning)
DAG Reasoning is an **experimental specialist reasoning AI with custom output format**; for general reasoning and chat, try [Shining Valiant 3](https://huggingface.co/ValiantLabs/Qwen3-8B-ShiningValiant3) or [Esper 3!](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3)
DAG Reasoning is a specialist reasoning assistant, performing causal analysis and reasoning to produce Directed Acyclic Graphs in response to user input.
- Finetuned on our [DAG dataset](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528) data generated with [Deepseek R1 0528!](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
- Multi-step analysis identifies causal relationships, produces confidence measurements, and forms a single structured graph object.
- DAG Reasoning Format provides clear, readable JSON containing structured, useful information; easy to use for creating visualizations, doing analysis, or further conversation with your assistant.
- Trained in a variety of subjects for flexible analysis: programming, science, business, economics, finance, law, logistics, management, and more!
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
## Prompting Guide
DAG Reasoning uses the [Qwen 3](https://huggingface.co/Qwen/Qwen3-14B) prompt format to create outputs in [DAG Reasoning Format.](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528)
DAG Reasoning is an **experimental reasoning finetune:**
- the assistant performs multi-step reasoning during the thinking phase, before producing the JSON graph object at the start of the output to the user.
- request the graph or analysis explicitly in your user prompt to elicit the [DAG Reasoning Format](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528); see the example script below. (If the model is unsure of your request, it will generally default to standard Qwen 3 output/chat style instead of creating a DAG.)
- this is an early experimental release: if used in a production context, structural validation of outputs is strongly recommended.
- we recommend enable_thinking=True for all chats.
Example inference script to get started:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sequelbox/Qwen3-14B-DAG-Reasoning"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input, generally recommended to follow the prompting style provided in these examples:
prompt = "Analyze the following scenario from a report on a new industrial park: The park was built on reclaimed swampland. The initial site survey indicated the ground was stable after being drained and filled. However, over the first five years of operation, slow, uneven ground subsidence has caused cracking in the foundations of several large warehouses. The cost of stabilizing these foundations is now projected to be higher than the initial cost of the land itself, and the risk of further subsidence has made the remaining lots in the park unsellable."
#prompt = "Make a graph of this analysis: In the American West, warmer winters are causing more precipitation to fall as rain instead of snow, even when total precipitation remains unchanged. This has two major consequences for water management. First, runoff occurs immediately in the winter rather than being stored as snowpack until the spring and summer melt. This increases winter flood risk and reduces water availability during the summer growing season. Second, the smaller snowpack reflects less solar radiation, leading to warmer ground temperatures and increased evaporation, further reducing water supply."
#prompt = "A supply chain security analysis finds: following the disclosure of a critical vulnerability in the widely used Log4j library, we consulted our Software Bill of Materials (SBOM) for a key application, which indicated the application was not affected. However, the application was later compromised via this exact vulnerability. The investigation revealed the SBOM was generated incorrectly and failed to identify Log4j as a transitive dependency, a library pulled in by another library. This inaccurate SBOM led to a false negative in our risk assessment."
#prompt = "Analyze this and make a graph: A company incurred a $200,000 bill from its cloud provider in one weekend, an attack known as cryptojacking. An attacker discovered an exposed API key in the client-side code of the company's public-facing web application. This key belonged to a role that, due to a misconfiguration, had permissions to create new virtual machine instances. The attacker wrote a script to programmatically spin up thousands of the most powerful, GPU-equipped virtual machines in several different geographic regions to mine cryptocurrency, leading to the massive, unexpected charges."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
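Because structural validation of outputs is recommended above, here is a minimal validation sketch: it parses the emitted JSON graph object and runs Kahn's algorithm to confirm the edges are acyclic. The field names (`nodes`, `edges`, `id`, `from`, `to`) are assumptions about the DAG Reasoning Format, not confirmed by this card; adjust them to the actual schema.
```python
import json
from collections import defaultdict, deque

def validate_dag(content: str) -> bool:
    """Return True if 'content' parses as JSON and its edges form a DAG."""
    graph = json.loads(content)  # raises ValueError on malformed JSON
    nodes = {n["id"] for n in graph["nodes"]}  # field names are assumptions
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for edge in graph["edges"]:
        if edge["from"] not in nodes or edge["to"] not in nodes:
            return False  # dangling edge
        children[edge["from"]].append(edge["to"])
        indegree[edge["to"]] += 1
    # Kahn's algorithm: a DAG can be fully consumed from zero-indegree nodes.
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return visited == len(nodes)  # False indicates a cycle

# Example: validate_dag(content)  # 'content' from the inference script above
```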
DAG Reasoning is one of our experimental reasoning releases; we've got more to come soon!
Do as you will.
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1754943261
|
Gemvision13
| 2025-08-11T20:16:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:15:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tlogandesigns/fairhousing-bert-tiny
|
tlogandesigns
| 2025-08-11T20:15:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T15:58:02Z |
---
library_name: transformers
license: apache-2.0
base_model: google/bert_uncased_L-2_H-128_A-2
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: fairhousing-bert-tiny
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fairhousing-bert-tiny
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0148
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4076 | 1.0 | 474 | 0.2490 | 0.9852 | 0.9970 | 0.9842 | 0.9906 |
| 0.0284 | 2.0 | 948 | 0.0148 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0116 | 3.0 | 1422 | 0.0063 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0104 | 4.0 | 1896 | 0.0043 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.005 | 5.0 | 2370 | 0.0038 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
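## Inference example
A minimal sketch with the 🤗 pipeline API; the label names produced by this checkpoint are not documented above, so inspect them before relying on the output.
```python
from transformers import pipeline

# Label names come from the checkpoint's config and are undocumented here.
classifier = pipeline("text-classification", model="tlogandesigns/fairhousing-bert-tiny")
print(classifier("Families with children are welcome to apply."))
```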
|
sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning
|
sequelbox
| 2025-08-11T20:15:43Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dag-reasoning",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"qwen3-4b-thinking-2507",
"4b",
"thinking",
"reasoning",
"directed-acyclic-graph",
"graph",
"logic",
"analysis",
"programming",
"knowledge",
"root-cause-analysis",
"economics",
"business",
"business-management",
"finance",
"law",
"supply-chain",
"logistics",
"software-engineering",
"cybersecurity",
"architecture",
"energy",
"politics",
"problem-solving",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"en",
"dataset:sequelbox/DAG-Reasoning-DeepSeek-R1-0528",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:finetune:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:08:19Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- dag-reasoning
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- qwen3-4b-thinking-2507
- 4b
- thinking
- reasoning
- directed-acyclic-graph
- graph
- logic
- analysis
- programming
- knowledge
- root-cause-analysis
- economics
- business
- business-management
- finance
- law
- supply-chain
- logistics
- software-engineering
- cybersecurity
- architecture
- energy
- politics
- problem-solving
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
base_model: Qwen/Qwen3-4B-Thinking-2507
datasets:
- sequelbox/DAG-Reasoning-DeepSeek-R1-0528
license: apache-2.0
---
**[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**
DAG Reasoning: [Qwen3-4B-Thinking-2507](https://huggingface.co/sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning), [Qwen3-8B](https://huggingface.co/sequelbox/Qwen3-8B-DAG-Reasoning), [Qwen3-14B](https://huggingface.co/sequelbox/Qwen3-14B-DAG-Reasoning), [gpt-oss-20b](https://huggingface.co/sequelbox/gpt-oss-20b-DAG-Reasoning)
DAG Reasoning is an **experimental specialist reasoning AI with custom output format**; for general reasoning and chat, try [Shining Valiant 3](https://huggingface.co/ValiantLabs/Qwen3-8B-ShiningValiant3) or [Esper 3!](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3)
DAG Reasoning is a specialist reasoning assistant, performing causal analysis and reasoning to produce Directed Acyclic Graphs in response to user input.
- Finetuned on our [DAG dataset](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528) data generated with [Deepseek R1 0528!](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
- Multi-step analysis identifies causal relationships, produces confidence measurements, and forms a single structured graph object.
- DAG Reasoning Format provides clear, readable JSON containing structured, useful information; easy to use for creating visualizations, doing analysis, or further conversation with your assistant.
- Trained in a variety of subjects for flexible analysis: programming, science, business, economics, finance, law, logistics, management, and more!
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
## Prompting Guide
DAG Reasoning uses the [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) prompt format to create outputs in [DAG Reasoning Format.](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528)
DAG Reasoning is an **experimental reasoning finetune:**
- the assistant performs multi-step reasoning during the thinking phase, before producing the JSON graph object at the start of the output to the user.
- request the graph or analysis explicitly in your user prompt to elicit the [DAG Reasoning Format](https://huggingface.co/datasets/sequelbox/DAG-Reasoning-DeepSeek-R1-0528); see the example script below. (If the model is unsure of your request, it will generally default to standard Qwen 3 output/chat style instead of creating a DAG.)
- this is an early experimental release: if used in a production context, structural validation of outputs is strongly recommended.
- we recommend enable_thinking=True for all chats.
Example inference script to get started:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input, generally recommended to follow the prompting style provided in these examples:
prompt = "Analyze the following scenario from a report on a new industrial park: The park was built on reclaimed swampland. The initial site survey indicated the ground was stable after being drained and filled. However, over the first five years of operation, slow, uneven ground subsidence has caused cracking in the foundations of several large warehouses. The cost of stabilizing these foundations is now projected to be higher than the initial cost of the land itself, and the risk of further subsidence has made the remaining lots in the park unsellable."
#prompt = "Make a graph of this analysis: In the American West, warmer winters are causing more precipitation to fall as rain instead of snow, even when total precipitation remains unchanged. This has two major consequences for water management. First, runoff occurs immediately in the winter rather than being stored as snowpack until the spring and summer melt. This increases winter flood risk and reduces water availability during the summer growing season. Second, the smaller snowpack reflects less solar radiation, leading to warmer ground temperatures and increased evaporation, further reducing water supply."
#prompt = "A supply chain security analysis finds: following the disclosure of a critical vulnerability in the widely used Log4j library, we consulted our Software Bill of Materials (SBOM) for a key application, which indicated the application was not affected. However, the application was later compromised via this exact vulnerability. The investigation revealed the SBOM was generated incorrectly and failed to identify Log4j as a transitive dependency, a library pulled in by another library. This inaccurate SBOM led to a false negative in our risk assessment."
#prompt = "Analyze this and make a graph: A company incurred a $200,000 bill from its cloud provider in one weekend, an attack known as cryptojacking. An attacker discovered an exposed API key in the client-side code of the company's public-facing web application. This key belonged to a role that, due to a misconfiguration, had permissions to create new virtual machine instances. The attacker wrote a script to programmatically spin up thousands of the most powerful, GPU-equipped virtual machines in several different geographic regions to mine cryptocurrency, leading to the massive, unexpected charges."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
DAG Reasoning is one of our experimental reasoning releases; we've got more to come soon!
Do as you will.
|
ManailFatima/mistral-finetuned-alpaca
|
ManailFatima
| 2025-08-11T20:15:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T19:36:45Z |
---
base_model: thebloke/Mistral-7B-Instruct-v0.1-GPTQ
library_name: transformers
model_name: mistral-finetuned-alpaca
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral-finetuned-alpaca
This model is a fine-tuned version of [thebloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/thebloke/Mistral-7B-Instruct-v0.1-GPTQ).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ManailFatima/mistral-finetuned-alpaca", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mlx-community/GLM-4.5V-3bit
|
mlx-community
| 2025-08-11T20:14:41Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"glm4v_moe",
"license:mit",
"3-bit",
"region:us"
] | null | 2025-08-11T20:02:40Z |
---
license: mit
tags:
- mlx
---
# mlx-community/GLM-4.5V-3bit
This model was converted to MLX format from [`ZP2Test/GLM-4.5V`](https://huggingface.co/ZP2Test/GLM-4.5V) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/ZP2Test/GLM-4.5V) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/GLM-4.5V-3bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
eason668/91395412-6602-4e54-a6a8-4dcca5b3bb04
|
eason668
| 2025-08-11T20:13:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T20:13:25Z |
# 91395412-6602-4e54-a6a8-4dcca5b3bb04
## Model Information
- **Base model**: unsloth/Qwen2-7B
- **Model type**: AutoModelForCausalLM
- **Training task ID**: 8391d2c7-4e69-4e2b-bed2-81588b9190e2
- **Adapter type**:
- **LoRA Rank**:
- **LoRA Alpha**:
- **Chat template**: llama3
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# load the model
model = AutoModelForCausalLM.from_pretrained("eason668/91395412-6602-4e54-a6a8-4dcca5b3bb04")
tokenizer = AutoTokenizer.from_pretrained("eason668/91395412-6602-4e54-a6a8-4dcca5b3bb04")
# run the model
inputs = tokenizer("Your input text", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Information
This model was trained on the Gradients-On-Demand platform, using the GRPO algorithm for reinforcement-learning optimization.
## License
Please refer to the base model's license.
|
Mathlesage/euroBertV11-infonce-only-2824-qwen-step-0
|
Mathlesage
| 2025-08-11T20:12:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-11T20:11:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
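No usage instructions are provided above; here is a minimal feature-extraction sketch (mean pooling over the last hidden state is an assumption, since the intended pooling for this checkpoint is undocumented):
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Mathlesage/euroBertV11-infonce-only-2824-qwen-step-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(["An example sentence."], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling (assumed)
```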
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ESERCKR/blockassist-bc-scurrying_lanky_cassowary_1754943101
|
ESERCKR
| 2025-08-11T20:12:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying lanky cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T20:12:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying lanky cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AlignmentResearch/pineapple-policy-oskar_006ba_grpo_training | AlignmentResearch | 2025-08-11T20:10:39Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:Qwen/Qwen3-14B", "base_model:adapter:Qwen/Qwen3-14B", "region:us"] | null | 2025-08-11T20:10:29Z |
---
base_model: Qwen/Qwen3-14B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
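The card leaves the loading code unspecified. The following is a minimal sketch inferred from this entry's metadata (a PEFT adapter on `Qwen/Qwen3-14B`), not an official recipe; device placement is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model named in the card metadata.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-14B", device_map="auto")
# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "AlignmentResearch/pineapple-policy-oskar_006ba_grpo_training")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
```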
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
| daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-MXFP4-200steps | daslab-testing | 2025-08-11T20:09:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "fp_quant", "region:us"] | text-generation | 2025-08-11T20:08:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
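Pending details from the authors, here is a minimal sketch based on this entry's metadata (a `transformers` text-generation checkpoint; the `fp_quant` tag suggests the FPQuant/MXFP4 format, which may require extra dependencies or a recent `transformers` version, so treat the plain `from_pretrained` call as an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daslab-testing/Llama-3.2-1B-Instruct-FPQuant-QAT-MXFP4-200steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The base model is instruction-tuned, so route prompts through the chat template.
messages = [{"role": "user", "content": "Give me one fun fact about llamas."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```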
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| kayacrypto/blockassist-bc-thriving_barky_wolf_1754942824 | kayacrypto | 2025-08-11T20:08:59Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving barky wolf", "arxiv:2504.07091", "region:us"] | null | 2025-08-11T20:08:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| gasoline2255/blockassist-bc-flightless_sizable_wildebeest_1754942614 | gasoline2255 | 2025-08-11T20:06:36Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flightless sizable wildebeest", "arxiv:2504.07091", "region:us"] | null | 2025-08-11T20:06:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless sizable wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| FastFlowLM/Llama-3.2-1B-NPU2 | FastFlowLM | 2025-08-11T20:05:19Z | 138 | 0 | null | ["llama", "llama-3.2", "text-generation", "AMD", "Ryzen", "NPU", "conversational", "en", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:llama3", "region:us"] | text-generation | 2025-06-20T17:30:52Z |
---
license: llama3
language:
- en
tags:
- llama
- llama-3.2
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# 🦙 LLaMA 3.2 (1B) – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)
## Model Summary
This model is a variant of Meta AI’s **LLaMA 3.2 1B Instruct** release. It preserves the original architecture and weights, with potential optimizations via quantization, low-level tuning, or runtime enhancements tailored for NPUs using FastFlowLM.
> ⚠️ **This model is subject to Meta’s LLaMA 3 license. You must accept Meta’s terms to use or download it.**
## 📝 License & Usage Terms
### Meta LLaMA 3 License
- Governed by Meta AI's LLaMA 3 license:
👉 https://ai.meta.com/llama/license/
- Key restrictions include:
- **No commercial use** without express permission from Meta
- Redistribution must follow Meta’s guidelines
- Attribution to Meta is required
### Redistribution Notice
- This repository does **not** contain Meta’s original weights.
- You must obtain the base weights directly from Meta:
👉 https://huggingface.co/meta-llama
### If Fine-tuned
If this version includes any fine-tuning or post-training modification:
- **Base Model License**: Meta’s LLaMA 3 License
- **Derivative Weights License**: [e.g., CC-BY-NC-4.0, MIT, custom]
- **Training Dataset License(s)**:
- [Dataset A] – [license]
- [Dataset B] – [license]
Users are responsible for verifying the legality of dataset use and redistribution.
## Intended Use
- **Target Applications**: On-device experimentation, local LLM inference, academic research (see the baseline loading sketch after this list)
- **Exclusions**: Do **not** use in commercial products, production systems, or critical tasks without proper evaluation and license compliance
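This card does not document the FastFlowLM runtime invocation itself. As a quick baseline sanity check (explicitly not the NPU path), the underlying Meta weights can be exercised with the standard `transformers` API; this is a minimal sketch, assuming you have accepted Meta's license and can download the base model:

```python
from transformers import pipeline

# Baseline CPU/GPU check against the Meta base weights; the FastFlowLM
# NPU runtime is a separate tool and is not shown here.
pipe = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")
print(pipe("On-device inference is useful because", max_new_tokens=32)[0]["generated_text"])
```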
## Limitations & Risks
- May hallucinate or output biased content
- Knowledge is frozen as of the base model's training cutoff
- Not evaluated for high-stakes or real-time applications
## Citation
```bibtex
@misc{touvron2024llama3,
title={LLaMA 3: Open Foundation and Instruction Models},
author={Touvron, Hugo and others},
year={2024},
  url={https://ai.meta.com/llama/}
}
```
| skyddand/llama-3-8b-samantha | skyddand | 2025-08-11T20:05:15Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-11T20:00:02Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded fine-tuned model
- **Developed by:** skyddand
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
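The card does not spell out how to load the model; since it was tuned with Unsloth from a bnb-4bit base, a minimal inference sketch using Unsloth's loader follows (the sequence length and 4-bit flag are assumptions, and a CUDA GPU is required):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit, mirroring the bnb-4bit base above.
model, tokenizer = FastLanguageModel.from_pretrained(
    "skyddand/llama-3-8b-samantha",
    max_seq_length=2048,  # assumption; set to match your context needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```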
| MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl | MattBou00 | 2025-08-11T20:04:44Z | 0 | 0 | null | ["safetensors", "gpt_neox", "region:us"] | null | 2025-08-11T20:02:50Z |
---
language: en
tags:
- rlhf
- final-model
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
# g10yfg8d-rlhf-checkpoint-pythia-1b-irl
This is the final RLHF model trained with an IRL reward model.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Final Toxicity Score**: 27.3724
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This model can be loaded using Hugging Face's TRL library:
```python
from trl import AutoModelForCausalLMWithValueHead

# Load the RLHF policy; the checkpoint includes the value head trained during RLHF.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl"
)
```
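For a quick smoke test after loading, plain generation works as usual (the sampling settings below are illustrative assumptions; `generate` simply ignores the value head):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MattBou00/g10yfg8d-rlhf-checkpoint-pythia-1b-irl")
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```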
## Training Configuration
The training configuration is saved in `training_config.yaml`.